WorldWideScience

Sample records for robust targeted maximum

  1. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation.

  2. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
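
    The worst-case objective in this abstract has a simple concrete form: with a long-only portfolio and asset returns confined to prescribed intervals, the worst case is attained at the interval lower bounds. Below is a minimal Python sketch of an entropy-regularized worst-case allocation; the return intervals, the entropy weight lam, and the use of scipy's SLSQP solver are illustrative assumptions, not the paper's continuous maximum entropy algorithm.

```python
import numpy as np
from scipy.optimize import minimize

r_low = np.array([0.02, 0.05, 0.03, 0.04])   # interval lower bounds (hypothetical)
lam = 0.01                                   # entropy (diversification) weight

def neg_objective(w):
    w = np.clip(w, 1e-12, None)
    worst_case = w @ r_low                   # worst-case return under box uncertainty
    entropy = -np.sum(w * np.log(w))
    return -(worst_case + lam * entropy)

cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
bounds = [(0.0, 1.0)] * len(r_low)
w0 = np.full(len(r_low), 1.0 / len(r_low))
weights = minimize(neg_objective, w0, bounds=bounds, constraints=cons).x
print(weights)   # tilted toward assets with better worst-case returns
```

    The entropy term plays the diversification role that the maximum entropy method formalizes: without it, the optimizer would put all weight on the single asset with the best lower bound.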

  3. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  4. Robust optimum design with maximum entropy method; Saidai entropy ho mochiita robust sei saitekika sekkeiho

    Energy Technology Data Exchange (ETDEWEB)

    Kawaguchi, K; Egashira, Y; Watanabe, G [Mazda Motor Corp., Hiroshima (Japan)]

    1997-10-01

    Vehicle and unit performance varies according not only to external causes such as temperature or weather, but also to internal causes such as dispersion in component characteristics and manufacturing processes, or deterioration with age. We developed a design method that estimates these performance distributions with the maximum entropy method and calculates specifications with high performance robustness using fuzzy theory. This paper describes the details of these methods and an example application to a power window system. 3 refs., 7 figs., 4 tabs.
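
    As an illustration of the maximum entropy step described above, the sketch below fits a maximum-entropy distribution to assumed moment constraints by minimizing the standard dual objective; the support grid and the moment values m1, m2 are hypothetical, and the abstract's fuzzy-theory step is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-3.0, 3.0, 61)   # discretized performance deviation (hypothetical)
m1, m2 = 0.1, 1.2                # measured first and second moments (assumed)

def dual(lmb):
    # the maximum-entropy density has the form p ∝ exp(l1*x + l2*x^2);
    # minimizing the dual log Z - l1*m1 - l2*m2 enforces the moment constraints
    logits = lmb[0] * x + lmb[1] * x ** 2
    log_z = np.log(np.sum(np.exp(logits)))
    return log_z - lmb[0] * m1 - lmb[1] * m2

lmb = minimize(dual, x0=np.zeros(2)).x
p = np.exp(lmb[0] * x + lmb[1] * x ** 2)
p /= p.sum()                     # estimated performance distribution
print(p @ x, p @ x ** 2)         # reproduces m1 and m2
```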

  5. Robust Deep Network with Maximum Correntropy Criterion for Seizure Detection

    Directory of Open Access Journals (Sweden)

    Yu Qi

    2014-01-01

    Effective seizure detection from long-term EEG is highly important for seizure diagnosis. Existing methods usually design the feature and classifier individually, while little work has been done on the simultaneous optimization of the two parts. This work proposes a deep network to jointly learn a feature and a classifier so that they can help each other to make the whole system optimal. To deal with the challenge of the impulsive noise and outliers caused by EMG artifacts in EEG signals, we formulate a robust stacked autoencoder (R-SAE) as a part of the network to learn an effective feature. In the R-SAE, the maximum correntropy criterion (MCC) is proposed to reduce the effect of noise/outliers. Unlike the mean square error (MSE), the output of the kernel-based MCC increases more slowly than that of the MSE as the input moves away from the center. Thus, the effect of noises/outliers positioned far away from the center can be suppressed. The proposed method is evaluated on 33.6 hours of scalp EEG data from six patients. Our method achieves a sensitivity of 100% and a specificity of 99%, which is promising for clinical applications.
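
    The key property claimed for the MCC, bounded loss growth for large errors, is easy to verify numerically. A minimal sketch follows, using the Welsch-type correntropy-induced loss 1 - exp(-e²/2σ²); the kernel width sigma is an assumed illustrative value.

```python
import numpy as np

def mse_loss(e):
    return e ** 2

def mcc_loss(e, sigma=1.0):
    # correntropy-induced loss: bounded above by 1, so large errors saturate
    return 1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2))

errors = np.array([0.1, 1.0, 5.0, 50.0])   # the last two mimic EMG outliers
print(mse_loss(errors))   # grows quadratically: outliers dominate the fit
print(mcc_loss(errors))   # saturates near 1: outliers are suppressed
```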

  6. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant has zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
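
    The article's own R code is not reproduced here; below is a minimal Python sketch of the TMLE steps it describes for a binary treatment and outcome, under simplifying assumptions: simulated data and plain logistic models in place of the (Super Learner style) machine-learning fits the tutorial recommends.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from scipy.special import logit, expit

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 2))                                 # confounders
A = rng.binomial(1, expit(0.5 * W[:, 0] - 0.4 * W[:, 1]))   # binary treatment
Y = rng.binomial(1, expit(-0.3 + A + 0.6 * W[:, 0]))        # binary outcome

# Step 1: initial outcome model Q(A, W) and its counterfactual predictions
X = np.column_stack([A, W])
q_fit = LogisticRegression().fit(X, Y)
QA = q_fit.predict_proba(X)[:, 1]
Q1 = q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = q_fit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]

# Step 2: propensity score g(W) = P(A = 1 | W)
g = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]

# Step 3: clever covariate and fluctuation (logistic regression with offset)
H = (A / g - (1 - A) / (1 - g)).reshape(-1, 1)
eps = sm.GLM(Y, H, family=sm.families.Binomial(),
             offset=logit(QA)).fit().params[0]

# Step 4: targeted update of the counterfactual predictions
Q1_star = expit(logit(Q1) + eps / g)
Q0_star = expit(logit(Q0) - eps / (1 - g))

# Step 5: plug-in (substitution) estimate of the average treatment effect
print("ATE estimate:", np.mean(Q1_star - Q0_star))
```

    The fluctuation step is the "targeting" part: a one-parameter logistic regression on the clever covariate with the initial predictions as an offset, which removes residual confounding bias from the plug-in estimate.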

  7. Optimum design of exploding pusher target to produce maximum neutrons

    International Nuclear Information System (INIS)

    Kitagawa, Y.; Miyanaga, N.; Kato, Y.; Nakatsuka, M.; Nishiguchi, A.; Yabe, T.; Yamanaka, C.

    1985-03-01

    Exploding pusher target experiments have been conducted with the 1.052-μm GEKKO MII two-beam glass laser system to design an optimum target, which couples to the incident laser light most effectively to produce the maximum neutrons. Since hot electrons preheat the shell entirely in spite of strongly nonuniform irradiation, a simple model can design the optimum target, of which the shell/fuel interface is accelerated to 0.5 to 0.7 times the initial radius within a laser pulse. A 2-dimensional computer simulation supports this target design. The scaling of the neutron yield N with the laser power P is N ∝ P^(2.4±0.4). (author)

  8. Robust H∞ Control for Spacecraft Rendezvous with a Noncooperative Target

    Directory of Open Access Journals (Sweden)

    Shu-Nan Wu

    2013-01-01

    The robust H∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of chaser and noncooperative target is firstly modeled as the uncertain system, which contains uncertain orbit parameter and mass. Then the H∞ performance and finite time performance are proposed, and a robust H∞ controller is developed to drive the chaser to rendezvous with the non-cooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technology is used to derive the sufficient condition of the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller.

  9. Robust Controller to Extract the Maximum Power of a Photovoltaic System

    Directory of Open Access Journals (Sweden)

    OULD CHERCHALI Noureddine

    2014-05-01

    This paper proposes an intelligent control technique to track the maximum power point (MPPT) of a photovoltaic system. The PV system is non-linear and is exposed to external perturbations such as temperature and solar irradiation. Fuzzy logic control (FLC) is known for its stability and robustness, and is adopted in this work to improve and optimize the control performance of the photovoltaic system. Another technique, called perturb and observe (P&O), is studied and compared with the FLC technique. The PV system consists of a photovoltaic panel (PV), a DC-DC converter (Boost) and a battery as the load. The simulation results are developed in MATLAB/Simulink. The results show that the controller based on fuzzy logic is better and faster than the conventional perturb and observe (P&O) controller and extracts good maximum power from the photovoltaic generator under different changes of weather conditions.
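
    For contrast with the fuzzy logic controller, the perturb and observe (P&O) baseline mentioned above is only a few lines. A minimal sketch follows; the step size, the toy PV power curve, and the starting voltages are illustrative assumptions.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the next voltage reference."""
    if p == p_prev:
        return v + step                  # keep probing the curve
    if (p > p_prev) == (v > v_prev):
        return v + step                  # last move raised power: continue
    return v - step                      # power dropped: reverse direction

def pv_power(v):
    """Toy PV curve with the maximum power point near 17 V (hypothetical)."""
    return max(0.0, 60.0 - 0.2 * (v - 17.0) ** 2)

v_prev, v = 10.0, 10.5
p_prev = pv_power(v_prev)
for _ in range(50):
    p = pv_power(v)
    v, v_prev, p_prev = perturb_and_observe(v, p, v_prev, p_prev), v, p
print(v)   # oscillates around the maximum power point
```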

  10. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures such as mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory, generalized, and applied to nonlinear signal reconstruction. For the quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made against the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Errors in projection matrix parameters of up to 1° deviation in projection angle are still within the tolerance level. Single defect pixels produce ring artifacts for each method; however, defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without changing the current hardware of the imaging system. (paper)

  11. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process: the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log-likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images of better quality and improve the convergence of the blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performance of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms current state-of-the-art blind deconvolution methods.
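
    The canonical maximum likelihood estimator under a Poisson noise model is the Richardson-Lucy iteration, sketched below for a single frame; the paper's multi-frame, regularized, blind variant with frame selection and PSF estimation is considerably more involved.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Richardson-Lucy iteration: the ML deconvolution under Poisson noise."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# toy usage: blur a point source with a Gaussian PSF, then restore it
x = np.zeros((64, 64)); x[32, 32] = 100.0
yy, xx = np.mgrid[-7:8, -7:8]
g = np.exp(-0.5 * (yy ** 2 + xx ** 2) / 4.0)
restored = richardson_lucy(fftconvolve(x, g / g.sum(), mode='same'), g)
```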

  12. Robustness of Dengue Complex Network under Targeted versus Random Attack

    Directory of Open Access Journals (Sweden)

    Hafiz Abid Mahmood Malik

    2017-01-01

    Dengue virus infection is one of those epidemic diseases that require much consideration in order to protect humankind from its harmful impacts. According to the World Health Organization (WHO), 3.6 billion individuals are at risk from the dengue virus sickness. Researchers are striving to comprehend the dengue threat, and this study is a small contribution to those endeavors. To observe the robustness of the dengue network, we removed the links between nodes both randomly and in a targeted manner, using different centrality measures. The outcomes demonstrated that a 5% targeted attack is equivalent to a 65% random attack, which showed that the topology of this complex network matches a scale-free network rather than a random network. Four centrality measures (Degree, Closeness, Betweenness, and Eigenvector) have been computed to identify central hubs. The results of this study show that the robustness of nodes and links depends on the topology of the network. The dengue epidemic network presented robust behaviour under random attack, and turned out to be more vulnerable when hubs of higher degree have a higher probability of failure. Moreover, a representation of this network has been projected, and the impact of hub removal has been shown on the real map of Gombak (Malaysia).
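
    A targeted-versus-random attack experiment of this kind is straightforward to reproduce on a synthetic scale-free graph. A minimal networkx sketch follows; the Barabási-Albert stand-in network, the 5% removal fraction, and degree-based targeting are illustrative assumptions.

```python
import random
import networkx as nx

def attack(G, targeted=True, fraction=0.05, seed=0):
    """Remove a fraction of nodes; return the surviving giant-component share."""
    H = G.copy()
    n_remove = int(fraction * H.number_of_nodes())
    if targeted:   # highest-degree hubs first
        order = sorted(H.degree, key=lambda kv: kv[1], reverse=True)
        victims = [node for node, _ in order[:n_remove]]
    else:          # random failure
        victims = random.Random(seed).sample(list(H.nodes), n_remove)
    H.remove_nodes_from(victims)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

G = nx.barabasi_albert_graph(1000, 2)   # scale-free stand-in for the dengue network
print("targeted:", attack(G, True), "random:", attack(G, False))
```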

  13. Robust cell tracking in epithelial tissues through identification of maximum common subgraphs.

    Science.gov (United States)

    Kursawe, Jochen; Bardenet, Rémi; Zartman, Jeremiah J; Baker, Ruth E; Fletcher, Alexander G

    2016-11-01

    Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a 'maximum common subgraph' to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell-cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. © 2016 The Authors.

  14. On-orbit real-time robust cooperative target identification in complex background

    Directory of Open Access Journals (Sweden)

    Wen Zhuoman

    2015-10-01

    Cooperative target identification is the prerequisite for the relative position and orientation measurement between the space robot arm and the to-be-arrested object. We propose an on-orbit real-time robust algorithm for cooperative target identification in complex backgrounds using the features of a circle and lines. It first extracts only the edges of interest in the target image using an adaptive threshold and refines them to about single-pixel width with improved non-maximum suppression. Adopting a novel tracking approach, edge segments that change smoothly in the tangential direction are obtained. With a small amount of calculation, large numbers of invalid edges are removed. From the few remaining edges, valid circular arcs are extracted and reassembled into circles according to a reliable criterion. Finally, the target is identified if there is a certain number of straight lines whose relative positions with respect to the circle match the known target pattern. Experiments demonstrate that the proposed algorithm accurately identifies the cooperative target within the range of 0.3–1.5 m under complex background at a speed of 8 frames per second, regardless of lighting condition and target attitude. The proposed algorithm is very suitable for real-time visual measurement with the space robot arm because of its robustness and small memory requirement.

  15. Shock ignition targets: gain and robustness vs ignition threshold factor

    Science.gov (United States)

    Atzeni, Stefano; Antonelli, Luca; Schiavi, Angelo; Picone, Silvia; Volponi, Gian Marco; Marocchino, Alberto

    2017-10-01

    Shock ignition is a laser direct-drive inertial confinement fusion scheme, in which the stages of compression and hot spot formation are partly separated. The hot spot is created at the end of the implosion by a converging shock driven by a final ``spike'' of the laser pulse. Several shock-ignition target concepts have been proposed and relevant gain curves computed (see, e.g.). Here, we consider both pure-DT targets and more facility-relevant targets with a plastic ablator. The investigation is conducted with 1D and 2D hydrodynamic simulations. We determine ignition threshold factors (ITFs), and their dependence on laser pulse parameters, by means of 1D simulations. 2D simulations indicate that robustness to long-scale perturbations increases with ITF. Gain curves (gain vs laser energy), for different ITFs, are generated using 1D simulations. Work partially supported by Sapienza Project C26A15YTMA, Sapienza 2016 (n. 257584), Eurofusion Project AWP17-ENR-IFE-CEA-01.

  16. Robust maximum power point tracker using sliding mode controller for the three-phase grid-connected photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Il-Song [LG Chem. Ltd./Research park, Mobile Energy R and D, 104-1 Moonji-Dong, Yuseong-Gu, Daejeon 305-380 (Korea)]

    2007-03-15

    A robust maximum power point tracker (MPPT) using a sliding mode controller for the three-phase grid-connected photovoltaic system is proposed in this paper. In contrast to previous controllers, the proposed system consists of an MPPT controller and a current controller for tight regulation of the current. The proposed MPPT controller generates the current reference directly from the solar array power information, and the current controller uses integral sliding mode for tight control of the current. The proposed system can prevent current overshoot and provides an optimal design for the system components. The structure of the proposed system is simple, and it shows robust tracking against modeling uncertainties and parameter variations. Mathematical modeling is developed, and experimental results verify the validity of the proposed controller. (author)

  17. Robustness studies of ignition targets for the National Ignition Facility in two dimensions

    International Nuclear Information System (INIS)

    Clark, Daniel S.; Haan, Steven W.; Salmonson, Jay D.

    2008-01-01

    Inertial confinement fusion capsules are critically dependent on the integrity of their hot spots to ignite. At the time of ignition, only a certain fractional perturbation of the nominally spherical hot spot boundary can be tolerated if the capsule is still to achieve ignition. The degree to which the expected hot spot perturbation in any given capsule design is less than this maximum tolerable perturbation is a measure of the ignition margin, or robustness, of that design. Moreover, since there will inevitably be uncertainties in the initial character and implosion dynamics of any given capsule, all of which can contribute to the eventual hot spot perturbation, quantifying the robustness of that capsule against a range of parameter variations is an important consideration in the capsule design. Here, the robustness of the 300 eV indirect drive target design for the National Ignition Facility [Lindl et al., Phys. Plasmas 11, 339 (2004)] is studied in the parameter space of inner ice roughness, implosion velocity, and capsule scale. A suite of 2000 two-dimensional simulations, run with the radiation hydrodynamics code LASNEX, is used as the database for the study. For each scale, an ignition region in the two remaining variables is identified and the ignition cliff is mapped. In accordance with the theoretical arguments of Levedahl and Lindl [Nucl. Fusion 37, 165 (1997)] and Kishony and Shvarts [Phys. Plasmas 8, 4925 (2001)], the location of this cliff is fitted to a power law of the capsule implosion velocity and scale. It is found that the cliff can be quite well represented in this power law form, and, using this scaling law, an assessment of the overall (one- and two-dimensional) ignition margin of the design can be made. The effect on the ignition margin of an increase or decrease in the density of the target fill gas is also assessed.

  18. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology

    NARCIS (Netherlands)

    Woldegebriel, M.; Gonsalves, J.; van Asten, A.; Vivó-Truyols, G.

    2016-01-01

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically

  19. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    'Robust – Reflections on Resilient Architecture' is a scientific publication following the conference of the same name in November of 2017. Researchers and PhD Fellows, associated with the Master's programme Cultural Heritage, Transformation and Restoration (Transformation), at The Royal Danish

  20. Robust infrared target tracking using discriminative and generative approaches

    Science.gov (United States)

    Asha, C. S.; Narasimhadhan, A. V.

    2017-09-01

    The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much progress has been achieved for RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve high accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative technique makes use of two complementary methods, a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features, operating in parallel. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments are conducted on 17 challenging infrared image sequences obtained from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision is achieved compared with existing trackers. Further, a quantitative and qualitative assessment of the proposed approach against state-of-the-art trackers clearly demonstrates an overall increase in performance.

  1. Maximum entropy restoration of laser fusion target x-ray photographs

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.

    1976-01-01

    Maximum entropy principles were used to analyze the microdensitometer traces of a laser-fusion target photograph. The object is a glowing laser-fusion target microsphere 0.95 cm from a pinhole of radius 2 x 10^-4 cm; the image is 7.2 cm from the pinhole and the photon wavelength is likely to be 6.2 x 10^-8 cm. Some computational aspects of the problem are also considered.

  2. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.

    Science.gov (United States)

    Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel

    2016-02-16

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.
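
    The evidence-combination idea can be sketched in a few lines of Python: treat each piece of evidence as a likelihood ratio and update prior odds by Bayes' rule. The likelihood-ratio values, the prior, and the independence assumption below are illustrative, not the paper's actual model.

```python
import numpy as np

# likelihood ratios for independent evidence channels (hypothetical values):
# chromatographic peak shape, m/z accuracy, isotope pattern, adduct type
lr = np.array([4.0, 12.0, 3.5, 1.2])
prior = 0.01                        # prior probability that the XOI is present

prior_odds = prior / (1.0 - prior)
posterior_odds = prior_odds * np.prod(lr)   # Bayes' rule with independent LRs
posterior = posterior_odds / (1.0 + posterior_odds)
print(f"P(XOI present | evidence) = {posterior:.3f}")
```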

  3. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    The estimation problem for target velocity is addressed in this paper in the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.

  4. Robust H∞ Control for Spacecraft Rendezvous with a Noncooperative Target

    Science.gov (United States)

    Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang

    2013-01-01

    The robust H∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of chaser and noncooperative target is firstly modeled as the uncertain system, which contains uncertain orbit parameter and mass. Then the H∞ performance and finite time performance are proposed, and a robust H∞ controller is developed to drive the chaser to rendezvous with the non-cooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technology is used to derive the sufficient condition of the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller. PMID:24027446

  5. Confidence from uncertainty - A multi-target drug screening method from robust control theory

    Directory of Open Access Journals (Sweden)

    Petzold Linda R

    2010-11-01

    Background: Robustness is a recognized feature of biological systems that evolved as a defence against environmental variability. Complex diseases such as diabetes, cancer, and bacterial and viral infections exploit the same mechanisms that allow for robust behaviour in healthy conditions to ensure their own continuance. Single drug therapies, while generally potent regulators of their specific protein/gene targets, often fail to counter the robustness of the disease in question. Multi-drug therapies offer a powerful means to restore disrupted biological networks by targeting the subsystem of interest while preventing the diseased network from reconciling through available, redundant mechanisms. Modelling techniques are needed to manage the high number of combinatorial possibilities arising in multi-drug therapeutic design, and to identify synergistic targets that are robust to system uncertainty. Results: We present the application of a method from robust control theory, Structured Singular Value or μ-analysis, to identify highly effective multi-drug therapies by using robustness in the face of uncertainty as a new means of target discrimination. We illustrate the method by means of a case study of a negative feedback network motif subject to parametric uncertainty. Conclusions: The paper contributes to the development of effective methods for drug screening in the context of network modelling affected by parametric uncertainty. The results have wide applicability for the analysis of different sources of uncertainty like noise experienced in the data, neglected dynamics, or intrinsic biological variability.

  6. Measurement of the Barkas effect around the stopping-power maximum for light and heavy targets

    International Nuclear Information System (INIS)

    Moeller, S.P.; Knudsen, H.; Mikkelsen, U.; Paludan, K.; Morenzoni, E.

    1997-01-01

    The first direct measurements of antiproton stopping powers around the stopping-power maximum are presented. The 5.9 MeV LEAR antiproton beam is degraded to 50-700 keV, and the energy loss is found by measuring the antiproton velocity before and after the target. The antiproton stopping powers of Si and Au are found to be reduced by 30 and 40%, respectively, near the electronic stopping-power maximum as compared to the equivalent proton stopping power. The Barkas effect, that is, the stopping-power difference between protons and antiprotons, is extracted and compared to theoretical estimates. (orig.)

  7. Electron spin resonance and its implication on the maximum nuclear polarization of deuterated solid target materials

    International Nuclear Information System (INIS)

    Heckmann, J.; Meyer, W.; Radtke, E.; Reicherz, G.; Goertz, S.

    2006-01-01

    ESR spectroscopy is an important tool in polarized solid target material research, since it allows us to study the paramagnetic centers, which are used for the dynamic nuclear polarization (DNP). The polarization behavior of the different target materials is strongly affected by the properties of these centers, which are added to the diamagnetic materials by chemical doping or irradiation. In particular, the ESR linewidth of the paramagnetic centers is a very important parameter, especially concerning the deuterated target materials. In this paper, the results of the first precise ESR measurements of the deuterated target materials at a DNP-relevant magnetic field of 2.5 T are presented. Moreover, these results allowed us to experimentally study the correlation between ESR linewidth and maximum deuteron polarization, as given by the spin-temperature theory

  8. Robust Small Target Co-Detection from Airborne Infrared Image Sequences.

    Science.gov (United States)

    Gao, Jingli; Wen, Chenglin; Liu, Meiqin

    2017-09-29

    In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which suppresses the image background and enhances small targets better than singular-value weighting. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates a structural prior in terms of local weighted entropy, and can thus extract real targets accurately and suppress background clutter efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false alarm suppression and target confirmation. Since real targets appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association of consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame, together with tracklet association of the location correlation maps for successive frames, has a strong ability to discriminate between small targets and background clutter. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutter effectively, but also detect targets accurately even in the presence of target-like interference.

  9. Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.

    Science.gov (United States)

    Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok

    2018-05-08

    This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, the target location is estimated by a fusion of multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by conducting a stack-based data association method based on multiple-frame measurements in order to estimate the target trajectory more accurately. The effectiveness of the proposed method is verified through simulations.
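
    As a minimal illustration of localization from TDOA measurements (the input to the GMM fusion described above), the sketch below solves the hyperbolic positioning problem by nonlinear least squares; the sonobuoy geometry, sound speed, and noise level are assumed values.

```python
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                          # speed of sound in water (m/s)
sensors = np.array([[0.0, 0.0], [500.0, 0.0],
                    [0.0, 500.0], [500.0, 500.0]])  # sonobuoy positions (assumed)
target = np.array([180.0, 260.0])                   # true position (for simulation)

rng = np.random.default_rng(1)
ranges = np.linalg.norm(sensors - target, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c + rng.normal(0.0, 1e-4, 3)  # noisy TDOAs

def residual(p):
    r = np.linalg.norm(sensors - p, axis=1)
    return (r[1:] - r[0]) / c - tdoa

estimate = least_squares(residual, x0=np.array([250.0, 250.0])).x
print(estimate)   # close to the true target position
```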

  10. SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)]

    2015-06-15

    Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are denoted as the percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. For η and δ values between 0 and 2, the TCP change was up to 2.4%. For η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.

  11. SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume

    International Nuclear Information System (INIS)

    Gong, Y; Yu, J; Xiao, Y

    2015-01-01

    Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are denoted as the percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. For η and δ values between 0 and 2, the TCP change was up to 2.4%. For η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
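
    For reference, a minimal sketch of a logistic TCP calculation of the kind described in this abstract is shown below, using the quoted D50 = 74.5 Gy and γ50 = 3.52; the specific parameterization TCP(D) = 1/(1 + (D50/D)^(4γ50)) and the toy dose-volume histogram are assumptions, as the abstract does not spell out its exact formula.

```python
import numpy as np

D50, gamma50 = 74.5, 3.52    # values quoted in the abstract

def tcp_uniform(dose):
    """Logistic TCP for a uniform dose (Gy), one common parameterization."""
    return 1.0 / (1.0 + (D50 / dose) ** (4.0 * gamma50))

def tcp_inhomogeneous(doses, volumes):
    """TCP of an inhomogeneously irradiated tumor: volume-weighted product."""
    volumes = np.asarray(volumes, dtype=float)
    volumes = volumes / volumes.sum()
    return float(np.prod(tcp_uniform(np.asarray(doses)) ** volumes))

# toy DVH: most of the PTV at 70 Gy with the protocol's hot and cold extremes
print(tcp_inhomogeneous([59.5, 70.0, 84.0], [0.02, 0.95, 0.03]))
```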

  12. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    International Nuclear Information System (INIS)

    Prix, Reinhard; Krishnan, Badri

    2009-01-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as the F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic (the 'B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., it has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.

  13. Robust H∞ control for spacecraft rendezvous with a noncooperative target.

    Science.gov (United States)

    Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang

    2013-01-01

    The robust H∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of chaser and noncooperative target is firstly modeled as the uncertain system, which contains uncertain orbit parameter and mass. Then the H∞ performance and finite time performance are proposed, and a robust H∞ controller is developed to drive the chaser to rendezvous with the non-cooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technology is used to derive the sufficient condition of the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller.

  14. Targeting the maximum heat recovery for systems with heat losses and heat gains

    International Nuclear Information System (INIS)

    Wan Alwi, Sharifah Rafidah; Lee, Carmen Kar Mun; Lee, Kim Yau; Abd Manan, Zainuddin; Fraser, Duncan M.

    2014-01-01

    Graphical abstract: illustration of heat gains and losses from process streams. Highlights: maximising energy savings through heat losses or gains; identifying locations where insulation can be avoided; heuristics to maximise heat losses or gains; targeting heat losses or gains using the extended STEP technique and the HEAT diagram. Abstract: Process Integration using the Pinch Analysis technique has been widely used as a tool for the optimal design of heat exchanger networks (HENs). The Composite Curves and the Stream Temperature versus Enthalpy Plot (STEP) are among the graphical tools used to target the maximum heat recovery for a HEN. However, these tools assume that heat losses and heat gains are negligible. This work presents an approach that considers heat losses and heat gains during the establishment of the minimum utility targets. The STEP method, which is plotted based on the individual, as opposed to the composite, streams, has been extended to consider the effect of heat losses and heat gains during stream matching. Several rules to guide the proper location of pipe insulation, and the appropriate procedure for stream shifting, have been introduced in order to minimise the heat losses and maximise the heat gains. Application of the method to two case studies shows that considering heat losses and heat gains yields more realistic utility targets and helps reduce both the insulation capital cost and the utility cost of a HEN.
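
    The classical utility-targeting baseline that this work extends (without heat losses and gains) is the Pinch Analysis problem table algorithm, sketched below with hypothetical stream data; the paper's STEP extension and insulation rules are not reproduced.

```python
# problem table algorithm: minimum hot/cold utility targets (no losses/gains)
streams = [(250.0, 40.0, 0.15), (200.0, 80.0, 0.25),   # hot streams (Ts, Tt, CP)
           (20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]  # cold streams
dTmin = 10.0

# shift temperatures: hot streams down, cold streams up, by dTmin/2
shifted = []
for Ts, Tt, cp in streams:
    hot = Ts > Tt
    shift = -dTmin / 2.0 if hot else dTmin / 2.0
    shifted.append((Ts + shift, Tt + shift, cp, hot))

bounds = sorted({T for s in shifted for T in s[:2]}, reverse=True)
cascade, surplus = [0.0], 0.0
for hi, lo in zip(bounds, bounds[1:]):
    # net heat in the interval: hot streams release heat, cold streams absorb it
    net_cp = sum(cp if hot else -cp
                 for Ts, Tt, cp, hot in shifted
                 if min(Ts, Tt) <= lo and max(Ts, Tt) >= hi)
    surplus += net_cp * (hi - lo)
    cascade.append(surplus)

q_hot = -min(min(cascade), 0.0)      # minimum hot utility target
q_cold = q_hot + cascade[-1]         # minimum cold utility target
print(q_hot, q_cold)                 # 7.5 and 10.0 for this stream set
```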

  15. Robust Detection of Moving Human Target in Foliage-Penetration Environment Based on Hough Transform

    Directory of Open Access Journals (Sweden)

    P. Lei

    2014-04-01

    Attention has been focused on robust moving human target detection in foliage-penetration environments, which presents a formidable task for a radar system because foliage is a rich scattering environment with complex multipath propagation and time-varying clutter. Generally, multiple-bounce returns and clutter are superposed on the direct-scatter echoes. They obscure the true target echo and lead to a time-range image of poor visual quality, making target detection particularly difficult. Consequently, an innovative approach is proposed to suppress clutter and mitigate multipath effects. In particular, a clutter suppression technique based on range alignment is first applied to suppress the time-varying clutter and the unstable antenna coupling. Then an entropy-weighted coherent integration (EWCI) algorithm is adopted to mitigate the multipath effects. In consequence, the proposed method reduces the clutter and ghosting artifacts considerably. Based on the resulting high visual quality image, the target trajectory is detected robustly and the radial velocity is estimated accurately with the Hough transform (HT). Real data are used in the experimental results to verify the proposed method.
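
    As a minimal illustration of the final step, detecting a constant-velocity trajectory as a straight line in a time-range image via the Hough transform, the sketch below uses scikit-image on a synthetic image; the image size and line slope are assumed values.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# toy time-range image: a constant-velocity target traces a straight line
img = np.zeros((100, 100), dtype=bool)
t = np.arange(100)                       # slow time (rows)
r = (0.6 * t + 10).astype(int)           # range cell (columns)
img[t, r] = True

h, angles, dists = hough_line(img)
_, best_angles, _ = hough_line_peaks(h, angles, dists, num_peaks=1)
# the peak angle encodes the line slope, i.e. the target's radial velocity
print("detected line angle (deg):", np.rad2deg(best_angles[0]))
```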

  16. Influence of Different Coupling Modes on the Robustness of Smart Grid under Targeted Attack

    Directory of Open Access Journals (Sweden)

    WenJie Kang

    2018-05-01

    Many previous works only focused on the cascading failure of global coupling of one-to-one structures in interdependent networks, but the local coupling of dual coupling structures has rarely been studied due to its complex structure. This has the serious consequence that many conclusions drawn for the one-to-one structure may be incorrect in the dual coupling network and do not apply to the smart grid. It is therefore necessary to subdivide the dual coupling link into a top-down coupling link and a bottom-up coupling link in order to study their influence on network robustness in combination with different coupling modes. Additionally, the power flow of the power grid can cause the load of a failed node to be allocated to its neighboring nodes and trigger a new round of load distribution when the load of these nodes exceeds their capacity. This means that the robustness of smart grids may be affected by four factors, i.e., load redistribution, local coupling, dual coupling link and coupling mode; however, research on the influence of those factors on the network robustness is missing. In this paper, firstly, we construct the smart grid as a two-layer network with a dual coupling link and divide the power grid and communication network into many subnets based on the geographical location of their nodes. Secondly, we define node importance (NI) as an evaluation index to assess the impact of nodes on the cyber or physical network and propose three types of coupling modes based on the NI of nodes in the cyber and physical subnets, i.e., Assortative Coupling in Subnets (ACIS), Disassortative Coupling in Subnets (DCIS), and Random Coupling in Subnets (RCIS). Thirdly, a cascading failure model is proposed for studying the effect of local coupling of the dual coupling link in combination with ACIS, DCIS, and RCIS on the robustness of the smart grid against a targeted attack, and the survival rate of functional nodes is used to assess the robustness of

  17. Influence of Different Coupling Modes on the Robustness of Smart Grid under Targeted Attack.

    Science.gov (United States)

    Kang, WenJie; Hu, Gang; Zhu, PeiDong; Liu, Qiang; Hang, Zhi; Liu, Xin

    2018-05-24

    Many previous works only focused on the cascading failure of global coupling of one-to-one structures in interdependent networks, but the local coupling of dual coupling structures has rarely been studied due to its complex structure. This has the serious consequence that many conclusions drawn for the one-to-one structure may be incorrect in the dual coupling network and do not apply to the smart grid. It is therefore necessary to subdivide the dual coupling link into a top-down coupling link and a bottom-up coupling link in order to study their influence on network robustness in combination with different coupling modes. Additionally, the power flow of the power grid can cause the load of a failed node to be allocated to its neighboring nodes and trigger a new round of load distribution when the load of these nodes exceeds their capacity. This means that the robustness of smart grids may be affected by four factors, i.e., load redistribution, local coupling, dual coupling link and coupling mode; however, research on the influence of those factors on the network robustness is missing. In this paper, firstly, we construct the smart grid as a two-layer network with a dual coupling link and divide the power grid and communication network into many subnets based on the geographical location of their nodes. Secondly, we define node importance (NI) as an evaluation index to assess the impact of nodes on the cyber or physical network and propose three types of coupling modes based on the NI of nodes in the cyber and physical subnets, i.e., Assortative Coupling in Subnets (ACIS), Disassortative Coupling in Subnets (DCIS), and Random Coupling in Subnets (RCIS). Thirdly, a cascading failure model is proposed for studying the effect of local coupling of the dual coupling link in combination with ACIS, DCIS, and RCIS on the robustness of the smart grid against a targeted attack, and the survival rate of functional nodes is used to assess the robustness of the smart grid.

  18. Use of Maximum Intensity Projections (MIPs) for target outlining in 4DCT radiotherapy planning.

    Science.gov (United States)

    Muirhead, Rebecca; McNee, Stuart G; Featherstone, Carrie; Moore, Karen; Muscat, Sarah

    2008-12-01

    Four-dimensional computed tomography (4DCT) is currently being introduced to radiotherapy centers worldwide, for use in radical radiotherapy planning for non-small cell lung cancer (NSCLC). A significant drawback is the time required to delineate 10 individual CT scans for each patient. Every department will hence ask whether the single Maximum Intensity Projection (MIP) scan can be used as an alternative. Although the problems regarding the use of the MIP in node-positive disease have been discussed in the literature, a comprehensive study assessing its use has not been published. We compared an internal target volume (ITV) created using the MIP to an ITV created from the composite volume of 10 clinical target volumes (CTVs) delineated on the 10 phases of the 4DCT. 4DCT data were collected from 14 patients with NSCLC. In each patient, the ITV was delineated on the MIP image (ITV_MIP) and a composite ITV was created from the 10 CTVs delineated on each of the 10 scans in the dataset. The structures were compared by assessment of volumes of overlap and exclusion. A median of 19.0% (range, 5.5-35.4%) of the volume of ITV_10phase was not enclosed by the ITV_MIP, demonstrating that use of the MIP could result in under-treatment of disease. In contrast, only a very small amount of the ITV_MIP was not enclosed by the ITV_10phase (median 2.3%, range 0.4-9.8%), indicating that the ITV_10phase covers almost all of the tumor tissue as identified by the MIP. Although there were only two Stage I patients, both demonstrated very similar ITV_10phase and ITV_MIP volumes. These findings suggest that Stage I NSCLC tumors could be outlined on the MIP alone. In Stage II and III tumors the ITV_10phase would be more reliable. To prevent under-treatment of disease, the MIP image can only be used for delineation in Stage I tumors.

  19. Shock ignition: a brief overview and progress in the design of robust targets

    International Nuclear Information System (INIS)

    Atzeni, S; Marocchino, A; Schiavi, A

    2015-01-01

    Shock ignition is a laser direct-drive inertial confinement fusion (ICF) scheme in which the stages of compression and hot spot formation are partly separated. The fuel is first imploded at a lower velocity than in conventional ICF, reducing the threats due to Rayleigh–Taylor instability. Close to stagnation, an intense laser spike drives a strong converging shock, which contributes to hot spot formation. This paper starts with a brief overview of the theoretical studies, target design and experimental results on shock ignition. The second part of the paper illustrates original work aiming at the design of robust targets and computation of the relevant gain curves. Following Chang et al (2010 Phys. Rev. Lett. 104 135002), a safety factor for high gain, ITF* (analogous to the ignition threshold factor ITF introduced by Clark et al (2008 Phys. Plasmas 15 056305)), is evaluated by means of parametric 1D simulations with artificially reduced reactivity. SI designs scaled as in Atzeni et al (2013 New J. Phys. 15 045004) are found to have nearly the same ITF*. For a given target, such ITF* increases with implosion velocity and laser spike power. A gain curve with a prescribed ITF* can then be simply generated by upscaling a reference target with that value of ITF*. An interesting option is scaling in size by reducing the implosion velocity to keep the ratio of implosion velocity to self-ignition velocity constant. At a given total laser energy, targets with higher ITF* are driven to higher implosion velocity and achieve a somewhat lower gain. However, a 1D gain higher than 100 is achieved at an (incident) energy below 1 MJ, an implosion velocity below 300 km s^-1 and a peak incident power below 400 TW. 2D simulations of mispositioned targets show that targets with a higher ITF* indeed tolerate larger displacements. (paper)

  20. Setting maximum sustainable yield targets when yield of one species affects that of other species

    DEFF Research Database (Denmark)

    Rindorf, Anna; Reid, David; Mackinson, Steve

    2012-01-01

    species. But how should we prioritize and identify the most appropriate targets? Do we prefer to maximize total yield in biomass across species, or are other measures targeting maximization of profits or preservation of high living qualities more relevant? And how do we ensure that targets remain

  1. Progress towards a high-gain and robust target design for heavy ion fusion

    Energy Technology Data Exchange (ETDEWEB)

    Henestroza, Enrique; Grant Logan, B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)

    2012-07-15

    would not reach the ignition zone in time to affect the burning process. Also, preliminary HYDRA calculations, using a higher resolution mesh to study the shear flow of the DT fuel along the X-target walls, indicate that metal-mixed fuel produced near the walls would not be transferred to the DT ignition zone (at maximum ρR) located at the vertex of the X-target.

  2. A new robustness analysis for climate policy evaluations: A CGE application for the EU 2020 targets

    International Nuclear Information System (INIS)

    Hermeling, Claudia; Löschel, Andreas; Mennel, Tim

    2013-01-01

    This paper introduces a new method for stochastic sensitivity analysis for computable general equilibrium (CGE) models based on Gauss quadrature and applies it to check the robustness of a large-scale climate policy evaluation. The revised version of the Gauss-quadrature approach to sensitivity analysis reduces computations considerably vis-à-vis the commonly applied Monte-Carlo methods; this allows for a stochastic sensitivity analysis also for large-scale models and multi-dimensional changes of parameters. In the application, an impact assessment of EU 2020 climate policy, we focus on sectoral elasticities that are part of the basic parameters of the model and have recently been determined by econometric estimation, along with standard errors. The impact assessment is based on the large-scale CGE model PACE. We show the applicability of the Gauss-quadrature approach and confirm the robustness of the impact assessment with the PACE model. The variance of the central model outcomes is smaller than their mean by four to eight orders of magnitude, depending on the aggregation level (i.e. aggregate variables such as GDP show a smaller variance than sectoral output). Highlights: a new, simplified method for stochastic sensitivity analysis for CGE analysis; Gauss quadrature with orthogonal polynomials; application to climate policy, the case of the EU 2020 targets.
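
    The quadrature idea can be sketched compactly: when an uncertain parameter has an estimated mean and standard error, a few Gauss-Hermite nodes replace thousands of Monte Carlo draws for computing output moments. The toy model function and parameter values below are illustrative, not the PACE model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def model(elasticity):
    """Stand-in for an expensive CGE model run (hypothetical response)."""
    return 100.0 * np.exp(-0.2 * elasticity) + 0.5 * elasticity ** 2

mu, sd = 1.5, 0.3                    # estimated elasticity and standard error
nodes, weights = hermegauss(5)       # 5-point probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2.0 * np.pi)   # normalize for the standard normal

outputs = model(mu + sd * nodes)     # 5 model runs instead of thousands
mean = np.sum(weights * outputs)
var = np.sum(weights * (outputs - mean) ** 2)
print(mean, var)
```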

  3. Robust Automatic Target Recognition via HRRP Sequence Based on Scatterer Matching

    Directory of Open Access Journals (Sweden)

    Yuan Jiang

    2018-02-01

    High resolution range profile (HRRP) plays an important role in wideband radar automatic target recognition (ATR). In order to alleviate the sensitivity to clutter and target aspect, employing a sequence of HRRP is a promising approach to enhance the ATR performance. In this paper, a novel HRRP sequence-matching method based on singular value decomposition (SVD) is proposed. First, the HRRP sequence is decoupled into the angle space and the range space via SVD, which correspond to the span of the left and the right singular vectors, respectively. Second, atomic norm minimization (ANM) is utilized to estimate dominant scatterers in the range space and the Hausdorff distance is employed to measure the scatterer similarity between the test and training data. Next, the angle space similarity between the test and training data is evaluated based on the left singular vector correlations. Finally, the range space matching result and the angle space correlation are fused with the singular values as weights. Simulation and outfield experimental results demonstrate that the proposed matching metric is a robust similarity measure for HRRP sequence recognition.
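
    A rough sketch of the two matching ingredients, SVD decoupling and Hausdorff scatterer matching, assuming an HRRP sequence stored as a pulses-by-range-bins matrix; simple peak-picking stands in here for the paper's ANM step, and the final fusion is a toy stand-in rather than the authors' singular-value weighting.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def decouple(hrrp_seq, k=3):
    """Split an HRRP sequence (pulses x range bins) into angle- and
    range-space components via SVD."""
    U, s, Vt = np.linalg.svd(hrrp_seq, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def scatterer_positions(range_space, n=10):
    """Crude peak-picking stand-in for the paper's ANM scatterer estimate."""
    energy = (range_space ** 2).sum(axis=0)
    return np.sort(np.argsort(energy)[-n:]).astype(float).reshape(-1, 1)

def hausdorff(p, q):
    """Symmetric Hausdorff distance between two scatterer position sets."""
    return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])

rng = np.random.default_rng(0)
test, train = rng.random((32, 256)), rng.random((32, 256))
U_t, s_t, V_t = decouple(test)
U_r, s_r, V_r = decouple(train)
range_dist = hausdorff(scatterer_positions(V_t), scatterer_positions(V_r))
angle_sim = np.abs(U_t.T @ U_r).max(axis=1).mean()  # left-singular-vector correlation
fused = angle_sim / (1.0 + range_dist)              # toy fusion score
print(range_dist, angle_sim, fused)
```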

  4. Can Targeted Intervention Mitigate Early Emotional and Behavioral Problems?: Generating Robust Evidence within Randomized Controlled Trials.

    Directory of Open Access Journals (Sweden)

    Orla Doyle

    This study examined the impact of a targeted Irish early intervention program on children's emotional and behavioral development using multiple methods to test the robustness of the results. Data on 164 Preparing for Life participants who were randomly assigned into an intervention group, involving home visits from pregnancy onwards, or a control group, were used to test the impact of the intervention on Child Behavior Checklist scores at 24 months. Using inverse probability weighting to account for differential attrition, permutation testing to address small sample size, and quantile regression to characterize the distributional impact of the intervention, we found that the few treatment effects were largely concentrated among boys most at risk of developing emotional and behavioral problems. The average treatment effect identified a 13% reduction in the likelihood of falling into the borderline clinical threshold for Total Problems. The interaction and subgroup analysis found that this main effect was driven by boys. The distributional analysis identified a 10-point reduction in the Externalizing Problems score for boys at the 90th percentile. No effects were observed for girls or for the continuous measures of Total, Internalizing, and Externalizing problems. These findings suggest that the impact of this prenatally commencing home visiting program may be limited to boys experiencing the most difficulties. Further adoption of the statistical methods applied here may help to improve the internal validity of randomized controlled trials and contribute to the field of evaluation science more generally. ISRCTN Registry: ISRCTN04631728.

  5. Combination of surface and borehole seismic data for robust target-oriented imaging

    Science.gov (United States)

    Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees

    2016-05-01

    A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.

  6. Maximum credible yield for deuterium-filled double shell imaging targets meeting requirements for yield bin Category A

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Douglas Carl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Loomis, Eric Nicholas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-17

    We are anticipating our first NIF double shell shot using an aluminum ablator and a glass inner shell filled with deuterium, shown in figure 1. The expected yield is between a few 10^10 and a few 10^11 DD neutrons. The maximum credible yield is 5×10^13. This memo describes why, and what would be expected with variations on the target. This memo evaluates the maximum credible yield for deuterium-filled double shell capsule targets with an aluminum ablator shell and a glass inner shell in yield Category A (< 10^14 neutrons). It also pertains to fills of gas diluted with hydrogen, helium (3He or 4He), or any other fuel except tritium. This memo does not apply to lower-Z ablator dopants, such as beryllium, as this would increase the ablation efficiency. This evaluation is for 5.75-scale hohlraum targets of either gold or uranium with helium gas fills with density between 0 and 1.6 mg/cc. It could be extended to other hohlraum sizes and shapes with slight modifications. At present only laser pulse energies up to 1.5 MJ were considered, with a single step laser pulse of arbitrary shape. Since yield decreases with laser energy for this target, the memo could be extended to higher laser energies if desired. The maximum laser parameters addressed here are near the edge of NIF's capability, and constitute the operating envelope for experiments covered by this memo. We have not considered multiple-step pulses, which would probably create no performance advantage and are not planned for double shell capsules. The main target variables are summarized in Table 1 and explained in detail in the memo. Predicted neutron yields are based on 1D and 2D clean simulations.

  7. Maximum flow approach to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv from protein-protein interaction network.

    Science.gov (United States)

    Melak, Tilahun; Gakkhar, Sunita

    2015-12-01

    In spite of the implementation of several strategies, tuberculosis (TB) remains a serious global public health problem causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have side effects. This highlights the importance of identification and prioritization of targets for new drugs. This study has been carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only a subset of the dataset with interactions that have a combined score value ≥770 was considered. A maximum flow approach has been used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes and they are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to resistance genes of existing drugs, which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to
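
    The prioritization step can be sketched with networkx, treating STRING combined scores as edge capacities and ranking candidate targets by their maximum flow to a resistance gene; all node names and scores below are hypothetical stand-ins, not data from the study.

```python
import networkx as nx

# Toy flow-based target prioritization: edge capacities stand in for
# STRING combined interaction scores (scaled to [0, 1]).
G = nx.DiGraph()
edges = [("targetA", "p1", 900), ("p1", "resist1", 850),
         ("targetA", "resist1", 770), ("targetB", "p2", 800),
         ("p2", "resist1", 780)]
for u, v, score in edges:
    G.add_edge(u, v, capacity=score / 1000.0)

# Rank each candidate target by its maximum flow to the resistance gene.
ranking = {t: nx.maximum_flow_value(G, t, "resist1")
           for t in ("targetA", "targetB")}
print(sorted(ranking.items(), key=lambda kv: -kv[1]))
```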

  8. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    Science.gov (United States)

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-05-09

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy which consists of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen in each time index. Hence, the probability of the outliers' misjudgment can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness with good computational efficiency. Additionally, the filter can be extended to the nonlinear applications using other types of sensors.
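
    The core robustification idea, an equivalent weight driven by the Mahalanobis distance of the innovation, can be sketched generically; the Huber-style weight below is a common M-estimation choice and an assumption for illustration, not necessarily the paper's exact weight function.

```python
import numpy as np

def huber_weight(innovation, S, k=1.345):
    """Equivalent-weight sketch: down-weight a measurement whose Mahalanobis
    distance (innovation vs. innovation covariance S) exceeds threshold k."""
    d = np.sqrt(innovation @ np.linalg.inv(S) @ innovation)
    return 1.0 if d <= k else float(k / d)

# Toy usage: inflating the measurement covariance R by 1/weight before the
# Kalman update is one common way to embed M-estimation in a CKF.
innovation = np.array([3.0])        # large innovation -> likely outlier
S = np.array([[1.0]])
w = huber_weight(innovation, S)
R_robust = np.array([[0.5]]) / w    # outlier gets a weaker influence
print(w, R_robust)
```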

  9. Influence of the Target Vessel on the Location and Area of Maximum Skin Dose during Percutaneous Coronary Intervention

    International Nuclear Information System (INIS)

    Chida, K.; Fuda, K.; Kagaya, Y.; Saito, H.; Takai, Y.; Kohzuki, M.; Takahashi, S.; Yamada, S.; Zuguchi, M.

    2007-01-01

    Background: A number of cases involving radiation-associated patient skin injury attributable to percutaneous coronary intervention (PCI) have been reported. Knowledge of the location and area of the patient's maximum skin dose (MSD) in PCI is necessary to reduce the risk of skin injury. Purpose: To determine the location and area of the MSD in PCI, and separately analyze the effects of different target vessels. Material and Methods: 197 consecutive PCI procedures were studied, and the location and area of the MSD were calculated by a skin-dose mapping software program: Caregraph. The target vessels of the PCI procedures were divided into four groups based on the American Heart Association (AHA) classification. Results: The sites of the MSD for AHA no.1-3, AHA no.4, and AHA no.11-15 were located mainly on the right back skin, the lower right or center back skin, and the upper back skin areas, respectively, whereas the MSD sites for the AHA no. 5-10 PCI were widely spread. The MSD area for the AHA no. 4 PCI was larger than that for the AHA no. 11-15 PCI (P<0.0001). Conclusion: Although the radiation associated with PCI can be widely spread and variable, we observed a tendency regarding the location and area of the MSD when we separately analyzed the data for different target vessels. We recommend the use of a smaller radiation field size and the elimination of overlapping fields during PCI

  10. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic
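
    Decision-level fusion with Adaboost-based feature selection can be sketched with scikit-learn, since boosting over depth-1 trees performs stagewise feature selection; the stacked SAR/IR features and labels below are synthetic stand-ins, not the paper's data or detectors.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in data: each row stacks SAR-derived and IR-derived
# candidate features for one detection hypothesis.
rng = np.random.default_rng(1)
X_sar = rng.normal(size=(200, 5))                 # hypothetical SAR features
X_ir = rng.normal(size=(200, 5))                  # hypothetical IR features
y = (X_sar[:, 0] + X_ir[:, 0] > 0).astype(int)    # toy target/clutter label

X = np.hstack([X_sar, X_ir])
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.feature_importances_)   # which SAR/IR features the booster selected
```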

  11. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  12. Robust, fast and accurate vision-based localization of a cooperative target used for space robotic arm

    Science.gov (United States)

    Wen, Zhuoman; Wang, Yanjie; Luo, Jun; Kuijper, Arjan; Di, Nan; Jin, Minghe

    2017-07-01

    When a space robotic arm deploys a payload, usually the pose between the cooperative target fixed on the payload and the hand-eye camera installed on the arm is calculated in real-time. A high-precision robust visual cooperative target localization method is proposed. Combining a circle, a line and dots as markers, a target that guarantees high detection rates is designed. Given an image, single-pixel-width smooth edges are drawn by a novel linking method. Circles are then quickly extracted using isophote curvature. Around each circle, a square boundary in a pre-calculated proportion to the circle radius is set. In the boundary, the target is identified if certain numbers of lines exist. Based on the circle, the lines, and the target foreground and background intensities, markers are localized. Finally, the target pose is calculated by the Perspective-3-Point (P3P) algorithm. The algorithm processes 8 frames per second with the target distance ranging from 0.3 m to 1.5 m. It generated high-precision poses for more than 97.5% of over 100,000 images regardless of camera background, target pose, illumination and motion blur. At 0.3 m, the rotation and translation errors were less than 0.015° and 0.2 mm. The proposed algorithm is very suitable for real-time visual measurement that requires high precision in aerospace.
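
    The final pose step maps detected 2-D marker centers to a 6-DOF pose from known 3-D marker geometry; a minimal sketch using OpenCV's generic solvePnP follows, where the marker layout, pixel coordinates and camera intrinsics are all hypothetical placeholders.

```python
import numpy as np
import cv2

# Hypothetical planar marker layout (mm) and detected image points (px).
object_pts = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [40, 40, 0]],
                      dtype=np.float64)
image_pts = np.array([[320, 240], [400, 242], [318, 320], [398, 322]],
                     dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
             dtype=np.float64)                 # assumed camera matrix

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
print(ok, rvec.ravel(), tvec.ravel())          # rotation (Rodrigues) + translation
```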

  13. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
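
    For orientation, the targeting ("fluctuation") step that gives TMLE its name can be sketched for the simpler individual-level, binary point-treatment case; the cluster-level estimators in the paper build on this machinery. A minimal sketch, assuming initial outcome estimates Q(A,W) and a propensity score g(W) are already fitted (here taken as known for brevity):

```python
import numpy as np
import statsmodels.api as sm

def tmle_ate(y, a, Q1, Q0, g):
    """One logistic fluctuation for the average treatment effect.
    y: binary outcome; a: binary exposure; Q1/Q0: initial E[Y|A=1/0,W]; g: P(A=1|W)."""
    eps = 1e-6
    QA = np.where(a == 1, Q1, Q0).clip(eps, 1 - eps)
    H = a / g - (1 - a) / (1 - g)                 # "clever covariate"
    offset = np.log(QA / (1 - QA))                # logit of the initial fit
    fit = sm.GLM(y, H.reshape(-1, 1), family=sm.families.Binomial(),
                 offset=offset).fit()             # no intercept, by design
    e = fit.params[0]
    expit = lambda x: 1 / (1 + np.exp(-x))
    Q1s = expit(np.log(Q1 / (1 - Q1)) + e / g)        # targeted E[Y|A=1,W]
    Q0s = expit(np.log(Q0 / (1 - Q0)) - e / (1 - g))  # targeted E[Y|A=0,W]
    return np.mean(Q1s - Q0s)

rng = np.random.default_rng(2)
n = 500
w = rng.normal(size=n)
g = 1 / (1 + np.exp(-w))
a = rng.binomial(1, g)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * a + w))))
Q1 = 1 / (1 + np.exp(-(0.5 + w)))   # known-truth initial fits, for brevity
Q0 = 1 / (1 + np.exp(-w))
print(tmle_ate(y, a, Q1, Q0, g))
```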

  14. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
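
    The centroid-by-regular-moment step is the classic m10/m00, m01/m00 computation; a minimal self-contained sketch on a toy ISAR-like frame:

```python
import numpy as np

def centroid_from_moments(img):
    """Target centroid from raw (regular) image moments: (m10/m00, m01/m00)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    return (x * img).sum() / m00, (y * img).sum() / m00   # (cx, cy)

# Toy frame: a bright blob centered near column 39.5, row 24.5.
img = np.zeros((64, 64))
img[20:30, 35:45] = 1.0
print(centroid_from_moments(img))   # approx (39.5, 24.5)
```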

  15. Efficient and robust identification of cortical targets in concurrent TMS-fMRI experiments

    Science.gov (United States)

    Yau, Jeffrey M.; Hua, Jun; Liao, Diana A.; Desmond, John E.

    2014-01-01

    Transcranial magnetic stimulation (TMS) can be delivered during fMRI scans to evoke BOLD responses in distributed brain networks. While concurrent TMS-fMRI offers a potentially powerful tool for non-invasively investigating functional human neuroanatomy, the technique is currently limited by the lack of methods to rapidly and precisely localize targeted brain regions – a reliable procedure is necessary for validly relating stimulation targets to BOLD activation patterns, especially for cortical targets outside of motor and visual regions. Here we describe a convenient and practical method for visualizing coil position (in the scanner) and identifying the cortical location of TMS targets without requiring any calibration or any particular coil-mounting device. We quantified the precision and reliability of the target position estimates by testing the marker processing procedure on data from 9 scan sessions: Rigorous testing of the localization procedure revealed minimal variability in coil and target position estimates. We validated the marker processing procedure in concurrent TMS-fMRI experiments characterizing motor network connectivity. Together, these results indicate that our efficient method accurately and reliably identifies TMS targets in the MR scanner, which can be useful during scan sessions for optimizing coil placement and also for post-scan outlier identification. Notably, this method can be used generally to identify the position and orientation of MR-compatible hardware placed near the head in the MR scanner. PMID:23507384

  16. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Science.gov (United States)

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail, unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows the alignment process introducing normalized mean squared error (NMSE) ≤ 9%. The HRRP extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%.
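
    The classification stage, PCA compression feeding a naive Bayes classifier, can be sketched directly with scikit-learn; the range profiles below are synthetic stand-ins for aligned HRRPs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in HRRPs for two canonical target classes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1, (50, 128)),
               rng.normal(0.5, 1, (50, 128))])
y = np.array([0] * 50 + [1] * 50)

# PCA compresses the aligned profiles; GaussianNB labels the components.
clf = make_pipeline(PCA(n_components=10), GaussianNB()).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```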

  17. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, Jason, E-mail: jason.callahan@petermac.org [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Kron, Tomas [Department of Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Schneider-Kolsky, Michal [Department of Medical Imaging and Radiation Science, Monash University, Clayton, Victoria (Australia); Dunn, Leon [Department of Applied Physics, RMIT University, Melbourne (Australia); Thompson, Mick [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Siva, Shankar [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Aarons, Yolanda [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Binns, David [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Hicks, Rodney J. [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia)

    2013-07-15

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) 18F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of 18F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DC_Air = 0.72/0.67, DC_Background = 0.63/0.62) and highest for 4D-PET/CT-MIP (DC_Air = 0.84/0.83, DC_Background = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently
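
    The Dice coefficient used for volume concordance here is simple to compute; a minimal sketch for boolean volume masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 1-D "volumes": two 10-voxel contours sharing 8 voxels.
a = np.zeros(20, bool); a[0:10] = True
b = np.zeros(20, bool); b[2:12] = True
print(dice(a, b))   # 0.8
```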

  18. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    International Nuclear Information System (INIS)

    Callahan, Jason; Kron, Tomas; Schneider-Kolsky, Michal; Dunn, Leon; Thompson, Mick; Siva, Shankar; Aarons, Yolanda; Binns, David; Hicks, Rodney J.

    2013-01-01

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) 18F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of 18F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DC_Air = 0.72/0.67, DC_Background = 0.63/0.62) and highest for 4D-PET/CT-MIP (DC_Air = 0.84/0.83, DC_Background = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently underestimates ITV

  19. Studies on the robustness of shock-ignited laser fusion targets

    International Nuclear Information System (INIS)

    Atzeni, S; Schiavi, A; Marocchino, A

    2011-01-01

    Several aspects of the sensitivity of a shock-ignited inertial fusion target to variation of parameters and errors or imperfections are studied by means of one-dimensional and two-dimensional numerical simulations. The study refers to a simple all-DT target, initially proposed for fast ignition (Atzeni et al 2007 Phys. Plasmas 14 052702) and subsequently shown to be also suitable for shock ignition (Ribeyre et al 2009 Plasma Phys. Control. Fusion 51 015013). It is shown that the growth of both Richtmyer-Meshkov and Rayleigh-Taylor instability (RTI) at the ablation front is reduced by laser pulses with an adiabat-shaping picket. An operating window for the parameters of the ignition laser spike is described; the threshold power depends on beam focusing and synchronization with the compression pulse. The time window for spike launch widens with beam power, while the minimum spike energy is independent of spike power. A large parametric scan indicates good tolerance (at the level of a few percent) to target mass and laser power errors. 2D simulations indicate that the strong igniting shock wave plays an important role in reducing deceleration-phase RTI growth. Instead, the high hot-spot convergence ratio (ratio of initial target radius to hot-spot radius at ignition) makes ignition highly sensitive to target mispositioning.

  20. Robust aptamer–polydopamine-functionalized M-PLGA–TPGS nanoparticles for targeted delivery of docetaxel and enhanced cervical cancer therapy

    Directory of Open Access Journals (Sweden)

    Xu GJ

    2016-06-01

    One limitation of current biodegradable polymeric nanoparticles (NPs) is the contradiction between functional modification and maintaining formerly excellent bioproperties with simple procedures. Here, we report a robust aptamer–polydopamine-functionalized mannitol-functionalized poly(lactide-co-glycolide) (M-PLGA)–D-α-tocopheryl polyethylene glycol 1000 succinate (TPGS) nanoformulation (Apt-pD-NPs) for the delivery of docetaxel (DTX) with enhanced cervical cancer therapy effects. The novel DTX-loaded Apt-pD-NPs possess satisfactory advantages: (1) increased drug loading content and encapsulation efficiency induced by the star-shaped copolymer M-PLGA–TPGS; (2) significant active targeting effect caused by conjugated AS1411 aptamers; and (3) excellent long-term compatibility by incorporation of TPGS. Therefore, with simple preparation procedures and excellent bioproperties, the new functionalized Apt-pD-NPs could maximally increase the local effective drug concentration at tumor sites, achieving enhanced treatment effectiveness and minimizing side effects. In a word, the robust DTX-loaded Apt-pD-NPs could be used as potential nanotherapeutics for cervical cancer treatment, and the aptamer–polydopamine modification strategy could be a promising method for active targeting in cancer therapy with simple procedures. Keywords: dopamine, AS1411 aptamer, active targeting, polymeric NPs, enhanced cervical chemotherapy

  1. Neural networks, cellular automata, and robust approach applications for vertex localization in the OPERA Target Tracker detector

    International Nuclear Information System (INIS)

    Dmitrievskij, S.G.; Gornushkin, Yu.A.; Ososkov, G.A.

    2005-01-01

    A neural-network (NN) approach for neutrino interaction vertex reconstruction in the OPERA experiment with the help of the Target Tracker (TT) detector is described. A feed-forward NN with the standard back propagation option is used. The energy functional minimization of the network is performed by the method of conjugate gradients. Data preprocessing by means of cellular automaton algorithm is performed. The Hough transform is applied for muon track determination and the robust fitting method is used for shower axis reconstruction. A comparison of the proposed approach with earlier studies, based on the use of the neural network package SNNS, shows their similar performance. The further development of the approach is underway

  2. Building a Robust Tumor Profiling Program: Synergy between Next-Generation Sequencing and Targeted Single-Gene Testing.

    Directory of Open Access Journals (Sweden)

    Matthew C Hiemenz

    Next-generation sequencing (NGS) is a powerful platform for identifying cancer mutations. Routine clinical adoption of NGS requires optimized quality control metrics to ensure accurate results. To assess the robustness of our clinical NGS pipeline, we analyzed the results of 304 solid tumor and hematologic malignancy specimens tested simultaneously by NGS and one or more targeted single-gene tests (EGFR, KRAS, BRAF, NPM1, FLT3, and JAK2). For samples that passed our validated tumor percentage and DNA quality and quantity thresholds, there was perfect concordance between NGS and targeted single-gene tests, with the exception of two FLT3 internal tandem duplications that fell below the stringent pre-established reporting threshold but were readily detected by manual inspection. In addition, NGS identified clinically significant mutations not covered by single-gene tests. These findings confirm NGS as a reliable platform for routine clinical use when appropriate quality control metrics, such as tumor percentage and DNA quality cutoffs, are in place. Based on our findings, we suggest a simple workflow that should facilitate adoption of clinical oncologic NGS services at other institutions.

  3. Verification of maximum radial power peaking factor due to insertion of FPM-LEU target in the core of RSG-GAS reactor

    Energy Technology Data Exchange (ETDEWEB)

    Setyawan, Daddy, E-mail: d.setyawan@bapeten.go.id [Center for Assessment of Regulatory System and Technology for Nuclear Installations and Materials, Indonesian Nuclear Energy Regulatory Agency (BAPETEN), Jl. Gajah Mada No. 8 Jakarta 10120 (Indonesia); Rohman, Budi [Licensing Directorate for Nuclear Installations and Materials, Indonesian Nuclear Energy Regulatory Agency (BAPETEN), Jl. Gajah Mada No. 8 Jakarta 10120 (Indonesia)

    2014-09-30

    Verification of the maximum radial power peaking factor due to the insertion of an FPM-LEU target in the core of the RSG-GAS reactor. The radial power peaking factor in the RSG-GAS reactor is a very important parameter for the safety of the reactor during operation. Data on the radial power peaking factor due to the insertion of a Fission Product Molybdenum with Low Enriched Uranium (FPM-LEU) target were reported by PRSG to BAPETEN through the Safety Analysis Report (SAR) of RSG-GAS for FPM-LEU target irradiation. In order to support the evaluation of the Safety Analysis Report incorporated in the submission, the assessment unit of BAPETEN is carrying out an independent assessment in order to verify safety-related parameters in the SAR, including the neutronic aspect. The work includes verification of the maximum radial power peaking factor change due to the insertion of the FPM-LEU target in the RSG-GAS reactor by computational methods using MCNP5 and ORIGEN2. From the results of the calculations, the new maximum value of the radial power peaking factor due to the insertion of the FPM-LEU target is 1.27. The results of the calculations in this study showed a value smaller than the limit of 1.4 allowed in the SAR.
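
    For reference, the quantity being verified is a simple ratio of the maximum to the core-average assembly power; a minimal sketch, with illustrative relative powers chosen so the factor equals the reported 1.27 and is checked against the 1.4 limit:

```python
def radial_power_peaking_factor(relative_powers):
    """Radial power peaking factor: max assembly power / core-average power."""
    return max(relative_powers) / (sum(relative_powers) / len(relative_powers))

# Illustrative relative assembly powers (mean exactly 1.00).
powers = [0.85, 0.95, 1.27, 1.05, 0.98, 0.90]
f = radial_power_peaking_factor(powers)
print(f, f < 1.4)   # 1.27 True
```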

  4. Fathead minnow steroidogenesis: in silico analyses reveals tradeoffs between nominal target efficacy and robustness to cross-talk

    Directory of Open Access Journals (Sweden)

    Villeneuve Daniel L

    2010-06-01

    elucidation but microarray evidence shows that homeostatic regulation of the steroidogenic network is likely maintained by a mildly sensitive interaction. We hypothesize that effective network elucidation must consider both the sensitivity of the target and the target's robustness to biological noise (in this case, to cross-talk) when identifying possible points of regulation.

  5. Dynamic-MLC leaf control utilizing on-flight intensity calculations: A robust method for real-time IMRT delivery over moving rigid targets

    International Nuclear Information System (INIS)

    McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy

    2007-01-01

    An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1±3.1% and -0.5±2.8% relative to the maximum of the intensity profiles. For
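
    To make the control idea concrete, here is a heavily simplified, hypothetical 1-D sketch (not the authors' algorithm): a single leaf pair sweeps a profile, the delivered intensity is recomputed every 50 ms as in the abstract, and the trailing leaf advances only over points whose delivered intensity has reached the intended profile, capped at a maximum leaf speed. All numbers are illustrative, not machine parameters, and the residual error reflects this crude single-pair model.

```python
import numpy as np

dt, v_max, dose_rate = 0.05, 4.0, 1.0      # s, cm/s, intensity units per s
x = np.linspace(0.0, 10.0, 201)            # cm along direction of leaf travel
dx = x[1] - x[0]
intended = 1.0 + 0.5 * np.sin(x)           # intended intensity profile
delivered = np.zeros_like(x)
i_lead, i_trail = 0, 0                     # leaf tips as grid indices
step = int(round(v_max * dt / dx))         # max grid advance per 50 ms tick

for _ in range(4000):
    i_lead = min(i_lead + step, x.size - 1)          # leading leaf sweeps ahead
    delivered[i_trail:i_lead + 1] += dose_rate * dt  # beam-on accumulation
    budget = step
    # advance the trailing leaf only over points already "finished"
    while (budget > 0 and i_trail < x.size - 1
           and delivered[i_trail] >= intended[i_trail]):
        i_trail += 1
        budget -= 1
    if i_trail >= x.size - 1:
        break

print(float(np.abs(delivered - intended).max()))     # residual delivery error
```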

  6. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized
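
    The record is truncated, but the underlying machinery is the bootstrap particle filter; a generic textbook sketch for 1-D respiratory-like motion (not the paper's modified algorithm, and with invented noise levels) is:

```python
import numpy as np

rng = np.random.default_rng(5)
n, T = 500, 60
t = np.arange(T) * 0.2
truth = 10.0 * np.sin(0.5 * t)                 # "breathing" trajectory (mm)
obs = truth + rng.normal(0, 1.0, T)            # noisy position measurements

particles = rng.normal(0, 5.0, n)
weights = np.full(n, 1.0 / n)
estimates = []
for z in obs:
    particles += rng.normal(0, 1.0, n)             # predict: random-walk model
    weights *= np.exp(-0.5 * (z - particles) ** 2)  # update: Gaussian likelihood
    weights /= weights.sum()
    estimates.append(weights @ particles)          # posterior-mean estimate
    idx = rng.choice(n, n, p=weights)              # bootstrap resampling
    particles, weights = particles[idx], np.full(n, 1.0 / n)

print(float(np.abs(np.array(estimates) - truth).mean()))  # mean tracking error (mm)
```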

  7. The "Robustness" of Vocabulary Intervention in the Public Schools: Targets and Techniques Employed in Speech-Language Therapy

    Science.gov (United States)

    Justice, Laura M.; Schmitt, Mary Beth; Murphy, Kimberly A.; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention--in terms of targets and techniques--for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of…

  8. The 'robustness' of vocabulary intervention in the public schools: targets and techniques employed in speech-language therapy.

    Science.gov (United States)

    Justice, Laura M; Schmitt, Mary Beth; Murphy, Kimberly A; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention-in terms of targets and techniques-for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of rich vocabulary intervention. Participants were forty-eight 5-7-year-old children participating in kindergarten or the first-grade year of school, all of whom had vocabulary-specific goals on their individualized education programmes. Two therapy sessions per child were coded to determine what vocabulary words were being directly targeted and what techniques were used for each. Study findings showed that the majority of words directly targeted during therapy were lower-level basic vocabulary words (87%) and very few (1%) were academically relevant. On average, three techniques were used per word to promote deep understanding. Interpreting findings against empirical descriptions of rich vocabulary intervention indicates that children were exposed to some but not all aspects of this empirically supported practice.

  9. Delivery of TLR7 agonist to monocytes and dendritic cells by DCIR targeted liposomes induces robust production of anti-cancer cytokines

    DEFF Research Database (Denmark)

    Klauber, Thomas Christopher Bogh; Laursen, Janne Marie; Zucker, Daniel

    2017-01-01

    Tumor immune escape is today recognized as an important cancer hallmark and is therefore a major focus area in cancer therapy. Monocytes and dendritic cells (DCs), which are central to creating a robust anti-tumor immune response and establishing an anti-tumorigenic microenvironment, are directly...... targeted by the tumor escape mechanisms to develop immunosuppressive phenotypes. Providing activated monocytes and DCs to the tumor tissue is therefore an attractive way to break the tumor-derived immune suppression and reinstate cancer immune surveillance. To activate monocytes and DCs with high...... as their immune activating potential in blood-derived monocytes, myeloid DCs (mDCs), and plasmacytoid DCs (pDCs). Monocytes and mDCs were targeted with high specificity over lymphocytes, and exhibited potent TLR7-specific secretion of the anti-cancer cytokines IL-12p70, IFN-α 2a, and IFN-γ. This delivery system...

  10. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper, a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in a way that the disadvantages of the one algorithm can be compensated by the advantages of the other one. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
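
    As a rough illustration of one of the two cornerstones, a simplified AAGD-style filter can be sketched as a difference of local means at a target-sized and a background-sized window, taken over four scales as in the abstract; this simplification and all window sizes are assumptions, not the paper's exact definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def aagd(img, inner, outer):
    """Simplified AAGD-style response: local mean at a small (target-sized)
    scale minus local mean at a larger (background) scale, clipped at zero
    so only bright, compact structures survive."""
    return np.clip(uniform_filter(img, inner) - uniform_filter(img, outer),
                   0, None)

def multiscale_aagd(img, scales=((3, 9), (5, 15), (7, 21), (9, 27))):
    """Maximum response over four (inner, outer) window-size pairs."""
    return np.max([aagd(img, i, o) for i, o in scales], axis=0)

rng = np.random.default_rng(6)
frame = 0.1 * rng.random((128, 128))   # clutter/noise background
frame[60:63, 60:63] += 1.0             # small "target"
resp = multiscale_aagd(frame)
print(np.unravel_index(resp.argmax(), resp.shape))   # peaks near (61, 61)
```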

  11. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Vogt, S; Kleinszig, G; Lo, S F; Wolinsky, J P; Gokaslan, Z L; Aygun, N

    2015-01-01

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE was 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such
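
    Of the four metrics, gradient correlation (GC) is the easiest to sketch: the normalized cross-correlation of the x- and y-gradient images, averaged. A minimal sketch on stand-in images (the "DRR" and radiograph below are synthetic placeholders):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def gradient_correlation(fixed, moving):
    """Gradient correlation (GC): mean NCC of the x- and y-gradient images."""
    gy_f, gx_f = np.gradient(fixed)
    gy_m, gx_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

rng = np.random.default_rng(4)
drr = rng.random((64, 64))                       # stand-in simulated radiograph
radiograph = drr + 0.1 * rng.random((64, 64))    # same scene plus noise
print(gradient_correlation(drr, radiograph))     # close to 1 for a good match
```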

  12. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    Energy Technology Data Exchange (ETDEWEB)

    De Silva, T; Ketcha, M; Siewerdsen, J H [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD (United States); Uneri, A; Reaungamornrat, S [Department of Computer Science, Johns Hopkins University, Baltimore, MD (United States); Vogt, S; Kleinszig, G [Siemens Healthcare XP Division, Erlangen, DE (Germany); Lo, S F; Wolinsky, J P; Gokaslan, Z L [Department of Neurosurgery, The Johns Hopkins Hospital, Baltimore, MD (United States); Aygun, N [Department of Radiology and Radiological Sciences, The Johns Hopkins Hospital, Baltimore, MD (United States)

    2015-06-15

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE was 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  13. A Robust Profitability Assessment Tool for Targeting Agricultural Investments in Developing Countries: Modeling Spatial Heterogeneity and Uncertainty

    Science.gov (United States)

    Quinn, J. D.; Zeng, Z.; Shoemaker, C. A.; Woodard, J.

    2014-12-01

    In sub-Saharan Africa, where the majority of the population earns their living from agriculture, government expenditures in many countries are being re-directed to the sector to increase productivity and decrease poverty. However, many of these investments are seeing low returns because they are poorly targeted. A geographic tool that accounts for spatial heterogeneity and temporal variability in the factors of production would allow governments and donors to optimize their investments by directing them to farmers for whom they are most profitable. One application for which this is particularly relevant is fertilizer recommendations. It is well-known that soil fertility in much of sub-Saharan Africa is declining due to insufficient nutrient inputs to replenish those lost through harvest. Since fertilizer application rates in sub-Saharan Africa are several times smaller than in other developing countries, it is often assumed that African farmers are under-applying fertilizer. However, this assumption ignores the risk farmers face in choosing whether or how much fertilizer to apply. Simply calculating the benefit/cost ratio of applying a given level of fertilizer in a particular year over a large, aggregated region (as is often done) overlooks the variability in yield response seen at different sites within the region, and at the same site from year to year. Using Ethiopia as an example, we are developing a 1 km resolution fertilizer distribution tool that provides pre-season fertilizer recommendations throughout the agricultural regions of the country, conditional on seasonal climate forecasts. By accounting for spatial heterogeneity in soil, climate, market and travel conditions, as well as uncertainty in climate and output prices at the time a farmer must purchase fertilizer, this stochastic optimization tool gives better recommendations to governments, fertilizer companies, and aid organizations looking to optimize the welfare benefits achieved by their

  14. Robust diagnosis of Ewing sarcoma by immunohistochemical detection of super-enhancer-driven EWSR1-ETS targets

    Science.gov (United States)

    Marchetto, Aruna; Gerke, Julia S.; Rubio, Rebeca Alba; Kiran, Merve M.; Musa, Julian; Knott, Maximilian M. L.; Ohmura, Shunya; Li, Jing; Akpolat, Nusret; Akatli, Ayse N.; Özen, Özlem; Dirksen, Uta; Hartmann, Wolfgang; de Alava, Enrique; Baumhoer, Daniel; Sannino, Giuseppina; Kirchner, Thomas; Grünewald, Thomas G. P.

    2018-01-01

    Ewing sarcoma is an undifferentiated small-round-cell sarcoma. Although molecular detection of pathognomonic EWSR1-ETS fusions such as EWSR1-FLI1 enables definitive diagnosis, substantial confusion can arise if molecular diagnostics are unavailable. Diagnosis based on the conventional immunohistochemical marker CD99 is unreliable due to its abundant expression in morphological mimics. To identify novel diagnostic immunohistochemical markers for Ewing sarcoma, we performed comparative expression analyses in 768 tumors representing 21 entities including Ewing-like sarcomas, which confirmed that CIC-DUX4-, BCOR-CCNB3-, EWSR1-NFATc2-, and EWSR1-ETS-translocated sarcomas are distinct entities, and revealed that ATP1A1, BCL11B, and GLG1 constitute specific markers for Ewing sarcoma. Their high expression was validated by immunohistochemistry and proved to depend on EWSR1-FLI1-binding to highly active proximal super-enhancers. Automated cut-off-finding and combination-testing in a tissue-microarray comprising 174 samples demonstrated that detection of high BCL11B and/or GLG1 expression is sufficient to reach 96% specificity for Ewing sarcoma. While 88% of tested Ewing-like sarcomas displayed strong CD99-immunoreactivity, none displayed combined strong BCL11B- and GLG1-immunoreactivity. Collectively, we show that ATP1A1, BCL11B, and GLG1 are EWSR1-FLI1 targets, of which BCL11B and GLG1 offer a fast, simple, and cost-efficient way to diagnose Ewing sarcoma by immunohistochemistry. These markers may significantly reduce the number of misdiagnosed patients, and thus improve patient care. PMID:29416716

  15. Audio-Visual Biofeedback Does Not Improve the Reliability of Target Delineation Using Maximum Intensity Projection in 4-Dimensional Computed Tomography Radiation Therapy Planning

    International Nuclear Information System (INIS)

    Lu, Wei; Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah; Huang, Xuan; Regine, William F.; Feigenberg, Steven J.; D'Souza, Warren D.

    2014-01-01

    Purpose: To investigate whether coaching patients' breathing would improve the match between ITV_MIP (the internal target volume generated by contouring in the maximum intensity projection scan) and ITV_10 (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV_10 and ITV_MIP. The match between ITV_MIP and ITV_10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV_MIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV_MIP and ITV_10 over FB. On average, ITV_MIP underestimated ITV_10 by 19%, 19%, and 21%, with centroid distances of 1.9, 2.3, and 1.7 mm and Dice coefficients of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV_MIP did not correct for the mismatch between ITV_MIP and ITV_10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV_MIP and ITV_10. In general, ITV_MIP should be limited to lung cancers, and modification of ITV_MIP in each phase of the 4DCT data set is recommended.

  16. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Ronald Herrera

    2017-12-01

    Full Text Available In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children’s respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on the burden of asthma and allergic rhinoconjunctivitis in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children’s home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis, and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: −4.7; 95% confidence interval (95% CI): −8.4; −0.11), and the risk of both outcomes combined by 4.2 percentage points (CAR: −4.2; 95% CI: −7.9; −0.05). Overall, our results suggest that a hypothetical intervention intended to increase the distance between the mines and the homes of the most highly exposed children would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.

  17. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation.

    Science.gov (United States)

    Herrera, Ronald; Berger, Ursula; von Ehrenstein, Ondine S; Díaz, Iván; Huber, Stella; Moraga Muñoz, Daniel; Radon, Katja

    2017-12-27

    In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children's respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on the burden of asthma and allergic rhinoconjunctivitis in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children's home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis, and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: -4.7; 95% confidence interval (95% CI): -8.4; -0.11), and the risk of both outcomes combined by 4.2 percentage points (CAR: -4.2; 95% CI: -7.9; -0.05). Overall, our results suggest that a hypothetical intervention intended to increase the distance between the mines and the homes of the most highly exposed children would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.
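
    Targeted maximum likelihood estimation augments an initial outcome regression with a propensity-driven targeting step. The following is a minimal single-iteration sketch for a binary exposure and outcome (function names, the simple parametric initial fits, and the truncation bounds are illustrative assumptions; the study used data-adaptive fits and targeted the causal attributable risk rather than the plain risk difference shown here):

        import numpy as np
        import statsmodels.api as sm
        from sklearn.linear_model import LogisticRegression

        def tmle_risk_difference(W, A, Y):
            # W: (n, p) confounders; A: (n,) binary exposure; Y: (n,) binary outcome.
            # Step 1: initial outcome model Q(A, W).
            X = np.column_stack([A, W])
            q = LogisticRegression(max_iter=1000).fit(X, Y)
            Q_AW = q.predict_proba(X)[:, 1]
            Q_1W = q.predict_proba(np.column_stack([np.ones_like(A), W]))[:, 1]
            Q_0W = q.predict_proba(np.column_stack([np.zeros_like(A), W]))[:, 1]
            # Step 2: propensity score g(W) = P(A=1 | W), bounded away from 0 and 1.
            g = LogisticRegression(max_iter=1000).fit(W, A).predict_proba(W)[:, 1]
            g = np.clip(g, 0.025, 0.975)
            H = (A / g - (1 - A) / (1 - g)).reshape(-1, 1)   # "clever covariate"
            # Step 3: targeting step, fluctuating Q along H with logit(Q) as offset.
            logit = lambda p: np.log(p / (1 - p))
            expit = lambda x: 1.0 / (1.0 + np.exp(-x))
            eps = sm.GLM(Y, H, family=sm.families.Binomial(),
                         offset=logit(Q_AW)).fit().params[0]
            Q1 = expit(logit(Q_1W) + eps / g)
            Q0 = expit(logit(Q_0W) - eps / (1 - g))
            return np.mean(Q1 - Q0)   # targeted estimate of the risk difference

    The CAR reported above contrasts the observed exposure distribution with an "all children unexposed" scenario instead of the all-versus-none contrast, but the targeting mechanics are the same.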

  18. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    Science.gov (United States)

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation with a mechanical model simulation. We evaluate the method on simulated data, phantom data, and real data. The results demonstrate that this novel approach provides correct motion estimation despite various ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Audio-Visual Biofeedback Does Not Improve the Reliability of Target Delineation Using Maximum Intensity Projection in 4-Dimensional Computed Tomography Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Wei, E-mail: wlu@umm.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Huang, Xuan [Research and Development, Care Management Department, Johns Hopkins HealthCare LLC, Glen Burnie, Maryland (United States); Regine, William F.; Feigenberg, Steven J.; D'Souza, Warren D. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States)

    2014-01-01

    Purpose: To investigate whether coaching patients' breathing would improve the match between ITV_MIP (internal target volume generated by contouring in the maximum intensity projection scan) and ITV_10 (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent three 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV_10 and ITV_MIP. The match between ITV_MIP and ITV_10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV_MIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV_MIP and ITV_10 over FB. On average, ITV_MIP underestimated ITV_10 by 19%, 19%, and 21%, with centroid distances of 1.9, 2.3, and 1.7 mm and Dice coefficients of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV_MIP did not correct for the mismatch between ITV_MIP and ITV_10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV_MIP and ITV_10. In general, ITV_MIP should be limited to lung cancers, and modification of ITV_MIP in each phase of the 4DCT data set is recommended.

  20. Simian Immunodeficiency Virus Targeting of CXCR3+ CD4+ T Cells in Secondary Lymphoid Organs Is Associated with Robust CXCL10 Expression in Monocyte/Macrophage Subsets.

    Science.gov (United States)

    Fujino, Masayuki; Sato, Hirotaka; Okamura, Tomotaka; Uda, Akihiko; Takeda, Satoshi; Ahmed, Nursarat; Shichino, Shigeyuki; Shiino, Teiichiro; Saito, Yohei; Watanabe, Satoru; Sugimoto, Chie; Kuroda, Marcelo J; Ato, Manabu; Nagai, Yoshiyuki; Izumo, Shuji; Matsushima, Kouji; Miyazawa, Masaaki; Ansari, Aftab A; Villinger, Francois; Mori, Kazuyasu

    2017-07-01

    Glycosylation of Env defines pathogenic properties of simian immunodeficiency virus (SIV). We previously demonstrated that pathogenic SIVmac239 and a live-attenuated, quintuple deglycosylated Env mutant (Δ5G) virus target CD4+ T cells residing in different tissues during acute infection. SIVmac239 and Δ5G preferentially infected distinct CD4+ T cells in secondary lymphoid organs (SLOs) and within the lamina propria of the small intestine, respectively (C. Sugimoto et al., J Virol 86:9323-9336, 2012, https://doi.org/10.1128/JVI.00948-12). Here, we studied the host responses relevant to SIV targeting of CXCR3+ CCR5+ CD4+ T cells in SLOs. Genome-wide transcriptome analyses revealed that Th1-polarized inflammatory responses, defined by expression of CXCR3 chemokines, were distinctly induced in the SIVmac239-infected animals. Consistent with robust expression of CXCL10, CXCR3+ T cells were depleted from blood in the SIVmac239-infected animals. We also discovered that elevation of CXCL10 expression in blood and SLOs was secondary to the induction of CD14+ CD16+ monocytes and MAC387+ macrophages, respectively. Since the significantly higher levels of SIV infection in SLOs occurred with a massive accumulation of infiltrated MAC387+ macrophages, T cells, dendritic cells (DCs), and residential macrophages near high endothelial venules, the results highlight critical roles of innate/inflammatory responses in SIVmac239 infection. Restricted infection in SLOs by Δ5G also suggests that glycosylation of Env modulates innate/inflammatory responses elicited by cells of monocyte/macrophage/DC lineages. IMPORTANCE We previously demonstrated that a pathogenic SIVmac239 virus and a live-attenuated, deglycosylated mutant Δ5G virus infected distinct CD4+ T cell subsets in SLOs and the small intestine, respectively (C. Sugimoto et al., J Virol 86:9323-9336, 2012, https://doi.org/10.1128/JVI.00948-12). Accordingly, infections with SIVmac239, but not with Δ5G, deplete CXCR3+ CD4+ T cells...

  1. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ... knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests; 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner; and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  2. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case
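
    As a concrete illustration of the envelope idea (a sketch under assumed conventions, not the authors' implementation), a single robustness trace tracks the relative size of the largest connected component as nodes are removed; rerunning the random-failure trace many times and taking per-step extremes spans the envelope against which the targeted-attack trace can be compared:

        import random
        import networkx as nx

        def attack_curve(G, targeted=True, fraction=0.5, seed=0):
            # Remove a fraction of nodes one by one, tracking the relative
            # size of the largest connected component after each removal.
            rng = random.Random(seed)
            H = G.copy()
            n = G.number_of_nodes()
            trace = []
            for _ in range(int(fraction * n)):
                if targeted:                      # attack: highest degree centrality first
                    dc = nx.degree_centrality(H)
                    v = max(dc, key=dc.get)
                else:                             # random node failure
                    v = rng.choice(list(H.nodes))
                H.remove_node(v)
                giant = max(nx.connected_components(H), key=len)
                trace.append(len(giant) / n)
            return trace

        G = nx.barabasi_albert_graph(200, 2)
        attack = attack_curve(G, targeted=True)
        failures = [attack_curve(G, targeted=False, seed=s) for s in range(50)]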

  3. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we give statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomalies...

  4. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  5. 3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M D; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P

    2016-01-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures, median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)), and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s).
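
    For intuition, gradient correlation (GC), one of the four metrics above, is the mean normalized cross-correlation of the two images' gradient components. A simplified 2-D sketch (array-level only, omitting the DRR generation and multi-start optimization used in the study):

        import numpy as np

        def gradient_correlation(fixed, moving):
            # Mean of the normalized cross-correlations (NCC) of the
            # x- and y-gradient images of the two inputs.
            def ncc(u, v):
                u = u - u.mean()
                v = v - v.mean()
                return (u * v).sum() / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            gy_f, gx_f = np.gradient(fixed.astype(float))
            gy_m, gx_m = np.gradient(moving.astype(float))
            return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

    Maximizing such a score over pose parameters, restarting the optimizer many times, is what the multi-start strategy above refers to.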

  6. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    Science.gov (United States)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures, median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)), and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric...

  7. Geometrical differences in target volumes based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography and four-dimensional computed tomography maximum intensity projection images of primary thoracic esophageal cancer.

    Science.gov (United States)

    Guo, Y; Li, J; Wang, W; Zhang, Y; Wang, J; Duan, Y; Shang, D; Fu, Z

    2014-01-01

    The objective of the study was to compare geometrical differences of target volumes based on four-dimensional computed tomography (4DCT) maximum intensity projection (MIP) and 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images of primary thoracic esophageal cancer for radiation treatment. Twenty-one patients with thoracic esophageal cancer sequentially underwent contrast-enhanced three-dimensional computed tomography (3DCT), 4DCT, and 18F-FDG PET/CT thoracic simulation scans during normal free breathing. The internal gross target volume, defined as IGTV_MIP, was obtained by contouring on MIP images. The gross target volumes based on PET/CT images (GTV_PET) were determined with nine different standardized uptake value (SUV) thresholds and manual contouring: SUV ≥ 2.0, 2.5, 3.0, 3.5 (SUV_n); ≥ 20%, 25%, 30%, 35%, 40% of the maximum (percentages of SUV_max, SUV_n%). The differences in volume ratio (VR), conformity index (CI), and degree of inclusion (DI) between IGTV_MIP and GTV_PET were investigated. The mean centroid distance between GTV_PET and IGTV_MIP ranged from 4.98 mm to 6.53 mm. The VR ranged from 0.37 to 1.34, being significantly (P<0.05) closest to 1 at SUV_2.5 (0.94), SUV_20% (1.07), or manual contouring (1.10). The mean CI ranged from 0.34 to 0.58, being significantly closest to 1 (P<0.05) at SUV_2.0 (0.55), SUV_2.5 (0.56), SUV_20% (0.56), SUV_25% (0.53), or manual contouring (0.58). The mean DI of GTV_PET in IGTV_MIP ranged from 0.61 to 0.91, and the mean DI of IGTV_MIP in GTV_PET ranged from 0.34 to 0.86. An SUV threshold setting of SUV_2.5, SUV_20%, or manual contouring yields the best tumor VR and CI with the internal gross target volume contoured on the MIP of the 4DCT dataset, but 3D PET/CT and 4DCT MIP could not replace each other for motion-encompassing target volume delineation for radiation treatment. © 2014 International Society for Diseases of the Esophagus.
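
    Under one common convention (assumed here; the paper may define the indices slightly differently), the volume ratio, conformity index, and degrees of inclusion reduce to voxel counts on binary masks:

        import numpy as np

        def agreement(gtv_pet, igtv_mip):
            # Binary 3-D target masks on a common grid.
            a = gtv_pet.astype(bool)
            b = igtv_mip.astype(bool)
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            vr = a.sum() / b.sum()       # volume ratio GTV_PET / IGTV_MIP
            ci = inter / union           # conformity index: 1 = perfect agreement
            di_pet = inter / a.sum()     # degree of inclusion of GTV_PET in IGTV_MIP
            di_mip = inter / b.sum()     # degree of inclusion of IGTV_MIP in GTV_PET
            return vr, ci, di_pet, di_mip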

  8. Methylation of WNT target genes AXIN2 and DKK1 as robust biomarkers for recurrence prediction in stage II colon cancer

    NARCIS (Netherlands)

    Kandimalla, R.; Linnekamp, J. F.; van Hooff, S.; Castells, A.; Llor, X.; Andreu, M.; Jover, R.; Goel, A.; Medema, J. P.

    2017-01-01

    Stage II colon cancer (CC) still remains a clinical challenge with patient stratification for adjuvant therapy (AT) largely relying on clinical parameters. Prognostic biomarkers are urgently needed for better stratification. Previously, we have shown that WNT target genes AXIN2, DKK1, APCDD1, ASCL2

  9. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

    Full Text Available The article presents a control design for the complete structure of a crane: crab, bridge, and crane uplift. The most important unknown parameters for the simulations are the burden weight and the length of the hanging rope. Robust control is used for the crab and bridge to ensure adaptivity to burden weight and rope length; it is designed for current control of the crab and bridge, for which the range of the unknown parameters must be known. The whole robust range is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers are chosen. The most important condition for crab and bridge motion is to avoid burden swinging in the final position. The crab and bridge drives use asynchronous motors fed from frequency converters. The crane uplift is combined with a burden-weight observer, with the uplift, crab, and bridge drives cooperating through their shared parameters: burden weight, rope length, and crab and bridge position. The controllers are designed by the state-control method, preferably with a disturbance observer that identifies the burden weight as a disturbance. The system works in both modes, at empty hook as well as at maximum load: burden lifting and dropping.

  10. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
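
    For readers unfamiliar with the criterion itself, the parsimony score of a single character on a fixed rooted binary tree can be computed with Fitch's classic algorithm; the sketch below illustrates the objective being minimized, not the Steiner-tree approximation developed in the paper:

        def fitch(tree, leaf_states, node="root"):
            # tree: internal node -> (left child, right child);
            # leaf_states: leaf -> observed character state.
            # Returns (candidate state set, minimum number of changes).
            if node in leaf_states:
                return {leaf_states[node]}, 0
            left, right = tree[node]
            s1, c1 = fitch(tree, leaf_states, left)
            s2, c2 = fitch(tree, leaf_states, right)
            if s1 & s2:                      # sets agree: no extra change
                return s1 & s2, c1 + c2
            return s1 | s2, c1 + c2 + 1      # disagreement costs one change

        # toy tree ((t1,t2),(t3,t4)) with states A,C,A,A: one change suffices
        tree = {"root": ("n1", "n2"), "n1": ("t1", "t2"), "n2": ("t3", "t4")}
        print(fitch(tree, {"t1": "A", "t2": "C", "t3": "A", "t4": "A"}))  # ({'A'}, 1)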

  11. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  12. Using a network-based approach and targeted maximum likelihood estimation to evaluate the effect of adding pre-exposure prophylaxis to an ongoing test-and-treat trial.

    Science.gov (United States)

    Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor

    2017-04-01

    Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger
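
    To make the simulated network generation concrete, a bipartite (male-female) contact network for one village can be drawn from a stochastic block model with zero within-block connection probability. A sketch with illustrative sizes and probabilities (not the paper's calibrated values):

        import networkx as nx

        sizes = [100, 100]              # one block of males, one of females
        probs = [[0.00, 0.03],          # no within-sex partnerships,
                 [0.03, 0.00]]          # cross-sex edges with probability 0.03
        village = nx.stochastic_block_model(sizes, probs, seed=1)

        # Degree identifies "highly connected" HIV- individuals, one of the
        # pre-exposure prophylaxis targeting strategies considered above.
        hubs = sorted(village.nodes, key=village.degree, reverse=True)[:10]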

  13. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  14. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    Science.gov (United States)

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm providing a step-by-step method for selecting the location and the size of a waste-to-energy facility, targeting maximum output energy while also considering the basic obstacle, which in many cases is the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase energy production and on those that affect the R1 energy-efficiency factor. The final gate fee is estimated through an economic analysis of the entire project, investigating both the expenses and the revenues expected for the selected site and facility outputs; a number of commonly used revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece (Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for application of the proposed decision-making tool), chosen on the basis of a previous publication by two of the authors in which these areas were examined. The results reveal that developing a «solid» methodological approach to selecting the site and the size of a waste-to-energy (WtE) facility is feasible. However, maximizing the energy-efficiency factor R1 requires high utilization factors, while minimizing the final gate fee requires a high R1 and high metals recovery from the bottom ash, as well as economic exploitation of the recovered raw materials, if any.
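
    For reference, the R1 energy-efficiency factor mentioned above is defined in the EU Waste Framework Directive essentially as follows (stated from memory; the weighting conventions should be checked against the Directive itself):

        R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))

    where Ep is the annual energy produced as heat or electricity (electricity conventionally weighted by a factor of 2.6 and heat by 1.1), Ef the annual energy input from fuels contributing to steam production, Ew the annual energy contained in the treated waste, Ei the annual imported energy excluding Ew and Ef, and 0.97 accounts for losses through bottom ash and radiation. A plant qualifies as an energy-recovery (WtE) operation when R1 exceeds the Directive's threshold (0.65 for newer plants), which is why the study treats high R1 as a design target.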

  15. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less, avoiding the recording and analysis of data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  16. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  17. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  18. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for robustness estimation and robustness optimisation. An example in the context of designing a food product is finding the best composition...

  19. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility to increase robustness by using animal breeding without loss of production. At the start of the project, a robust

  20. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  1. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The result of the research, and the original contribution to knowledge, is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic... Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current-practice review was performed. From the review, two main research problems were identified. Firstly, a lack of tools... The optimum for perceptual robustness was found to overlap with the optimum for functional robustness, and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis offers a new perspective on robust design by merging robust design...

  2. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system utilizing the principle of maximum power point tracking can be more cost-effective, has higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximizing the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input-source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for Remote Area Power Supply systems of relatively small rating. The advantages are much greater for larger temperature variations and higher-power systems. Other advantages include optimal sizing and system monitoring and control.
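
    The "optimized hill-climbing" algorithm mentioned above is essentially perturb-and-observe: nudge the operating point and keep the perturbation direction whenever the measured power increases. A minimal sketch (the hardware callbacks read_power/set_voltage, the step size, and the iteration count are assumptions, not the paper's implementation):

        def perturb_and_observe(read_power, set_voltage, v0, step=0.1, iters=200):
            # Hill-climbing MPPT: reverse direction whenever power drops.
            v, direction = v0, +1
            set_voltage(v)
            p_prev = read_power()
            for _ in range(iters):
                v += direction * step
                set_voltage(v)
                p = read_power()
                if p < p_prev:          # overshot the peak: turn around
                    direction = -direction
                p_prev = p
            return v                    # operating voltage near the maximum power point

    In the battery-charging formulation above, the measured quantity would be the regulator's output current rather than panel power, but the climbing logic is identical.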

  3. Comparing photon and proton-based hypofractioned SBRT for prostate cancer accounting for robustness and realistic treatment deliverability.

    Science.gov (United States)

    Goddard, Lee C; Brodin, N Patrik; Bodner, William R; Garg, Madhur K; Tomé, Wolfgang A

    2018-05-01

    To investigate whether photon- or proton-based stereotactic body radiation therapy (SBRT) is the preferred modality for high-dose hypofractionated prostate cancer treatment, achievable dose distributions were compared when uncertainties in target positioning and range were appropriately accounted for. 10 patients with prostate cancer previously treated at our institution (Montefiore Medical Center) with photon SBRT using volumetric modulated arc therapy (VMAT) were identified. MRI images fused to the treatment planning CT allowed for accurate target and organ-at-risk (OAR) delineation. The clinical target volume was defined as the prostate gland plus the proximal seminal vesicles. Critical OARs include the bladder wall, bowel, femoral heads, neurovascular bundle, penile bulb, rectal wall, urethra, and urogenital diaphragm. Photon plan robustness was evaluated by simulating 2 mm isotropic setup variations. Comparative proton SBRT plans employing intensity modulated proton therapy (IMPT) were generated using robust optimization. Plan robustness was evaluated by simulating 2 mm setup variations and 3% or 1% Hounsfield unit (HU) calibration uncertainties. Comparable maximum OAR doses are achievable between photon and proton SBRT; however, robust optimization results in higher maximum doses for proton SBRT. Rectal maximum doses are significantly higher for robustly optimized proton SBRT with 1% HU uncertainty compared to photon SBRT (p = 0.03), whereas maximum doses were comparable for bladder wall (p = 0.43), urethra (p = 0.82), and urogenital diaphragm (p = 0.50). Mean doses to bladder and rectal wall are lower for proton SBRT, but higher for neurovascular bundle, urethra, and urogenital diaphragm due to increased lateral scatter. Similar target conformality is achieved, albeit with slightly larger treated-volume ratios for proton SBRT, >1.4 compared to 1.2 for photon SBRT. Similar treatment plans can be generated with IMPT compared to VMAT in terms of...

  4. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  5. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  6. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  7. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  8. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  9. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
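
    The Mean Energy Model mentioned above is the standard constrained maximization (a textbook restatement, not taken verbatim from the paper):

        maximize   H(p) = -\sum_i p_i \log p_i
        subject to \sum_i p_i = 1  and  \sum_i p_i E_i = \bar{E},

    whose solution via Lagrange multipliers is the Gibbs distribution

        p_i = e^{-\beta E_i} / Z(\beta),   Z(\beta) = \sum_j e^{-\beta E_j},

    with the multiplier \beta fixed by the mean-energy constraint. The Code Length Game perspective identifies this same distribution as an equilibrium strategy of the game.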

  10. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  11. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  12. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  13. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  14. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose: We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [Ramsay 1997] to functional maximum autocorrelation factors (MAF) [Switzer 1985; Larsen 2001]. We apply the method to biological shapes as well as reflectance spectra. Methods: MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions: Functional MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...

  15. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because such loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
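
    The correntropy objective itself is simple to state: with a Gaussian kernel it is the average kernel similarity between predictions and labels, so a single outlying label can move the objective by at most 1/n, which is the source of the robustness. A small illustrative sketch (the kernel width and toy data are arbitrary choices, not the paper's settings):

        import numpy as np

        def correntropy(y_true, y_pred, sigma=1.0):
            # Empirical correntropy under a Gaussian kernel; MCC training
            # maximizes this quantity instead of minimizing a squared loss.
            e = np.asarray(y_true, float) - np.asarray(y_pred, float)
            return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

        preds = np.array([0.9, 0.1, 0.8, 0.9])
        clean = correntropy(np.array([1, 0, 1, 1]), preds)
        noisy = correntropy(np.array([1, 0, 1, 0]), preds)  # one flipped label
        # the flipped label lowers the objective by well under 1/4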

  16. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because such loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  17. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled 'COST TU0601: Robustness of Structures' was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...

  18. SU-E-T-266: Proton PBS Plan Design and Robustness Evaluation for Head and Neck Cancers

    International Nuclear Information System (INIS)

    Liang, X; Tang, S; Zhai, H; Kirk, M; Kalbasi, A; Lin, A; Ahn, P; Tochner, Z; McDonough, J; Both, S

    2014-01-01

    Purpose: To describe a newly designed proton pencil beam scanning (PBS) planning technique for radiotherapy of patients with bilateral oropharyngeal cancer, and to assess plan robustness. Methods: We treated 10 patients with proton PBS plans using 2 posterior oblique fields (2F PBS) comprised of 80% single-field uniform dose (SFUD) and 20% intensity-modulated proton therapy (IMPT). All patients underwent weekly CT scans for verification. Using dosimetric indicators for both targets and organs at risk (OARs), we quantitatively compared initial plans and verification plans using student t-tests. We created a second proton PBS plan for each patient using 2 posterior oblique plus 1 anterior field comprised of 100% SFUD (3F PBS). We assessed plan robustness for both proton plan groups, as well as a photon volumetric modulated arc therapy (VMAT) plan group, by comparing initial and verification plans. Results: The 2F PBS plans were not robust in target coverage. D98% for the clinical target volume (CTV) degraded from 100% to 96% on average, with a maximum change ΔD98% of −24%. Two patients were moved to photon VMAT treatment due to insufficient CTV coverage on verification plans. Plan robustness was especially weak in the low-anterior neck. The 3F PBS plans, however, demonstrated robust target coverage, comparable to the VMAT photon plan group. Doses to the oral cavity were lower in the proton PBS plans compared to photon VMAT plans due to the absence of exit dose to the oral cavity. Conclusion: Proton PBS plans using 2 posterior oblique fields were not robust for CTV coverage, due to variable positioning of redundant soft tissue in the posterior neck. We designed 3-field proton PBS plans using an anterior field to avoid long heterogeneous paths in the low neck. These 3-field proton PBS plans had significantly improved plan robustness, comparable to VMAT photon plans

  19. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  20. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  1. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    Science.gov (United States)

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  2. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption-investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle
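
    In generic notation (an assumed restatement; the paper's precise admissibility and model sets may differ), the robust problem takes the max-min form

        \sup_{(\pi, c) \in \mathcal{A}} \; \inf_{Q \in \mathcal{Q}} \; E^{Q}\left[ U(X_T^{\pi, c}) + \int_0^T \tilde{U}(c_t) \, dt \right],

    where \mathcal{A} is the set of admissible investment-consumption strategies valued in the convex constraint set, \mathcal{Q} the family of candidate models (priors), X_T^{\pi, c} the terminal wealth, and c the consumption stream; the inner infimum over models is what the associated quadratic BSDE characterizes.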

  3. Robust Geometric Control of a Distillation Column

    DEFF Research Database (Denmark)

    Kymmel, Mogens; Andersen, Henrik Weisberg

    1987-01-01

    A frequency domain method, which makes it possible to adjust multivariable controllers with respect to both nominal performance and robustness, is presented. The basic idea in the approach is that the designer assigns objectives such as steady-state tracking, maximum resonance peaks, and bandwidth... The method is used to examine and improve geometric control of a binary distillation column.

  4. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from the SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments.

  5. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors have been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement showed to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties. (paper)
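    The error-bar dose distribution lends itself to a per-voxel reading: recompute the dose under each error scenario and take the spread. A minimal sketch, assuming ebDD is taken as the max-min envelope (the paper's exact definition may differ; all numbers synthetic):

        import numpy as np

        def error_bar_dose_distribution(scenario_doses):
            """Per-voxel error bars over an array of shape (n_scenarios,
            n_voxels) holding the dose recomputed under each range/set-up
            error scenario (max-min envelope reading of ebDD)."""
            return scenario_doses.max(axis=0) - scenario_doses.min(axis=0)

        rng = np.random.default_rng(1)
        nominal = rng.normal(60.0, 1.0, size=5_000)
        scenarios = np.stack([nominal + rng.normal(0.0, s, size=nominal.size)
                              for s in (0.5, 1.0, 1.5, 2.0)])
        ebdd = error_bar_dose_distribution(scenarios)
        print(f"median voxel error bar: {np.median(ebdd):.2f} Gy")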

  7. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-mission (EEM) data was developed. The results show...

  8. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  9. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
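    The GME estimator itself is involved; as a minimal illustration of the underlying inverse problem (a measured spectrum modeled as a cracking-pattern matrix times a concentration vector), a plain non-negative least-squares baseline can stand in. This is a simpler substitute, not the authors' entropy method, and the patterns below are invented:

        import numpy as np
        from scipy.optimize import nnls

        # Columns: hypothetical cracking patterns of two molecules on 4 m/z channels
        A = np.array([[0.70, 0.10],
                      [0.20, 0.50],
                      [0.08, 0.30],
                      [0.02, 0.10]])
        true_conc = np.array([3.0, 1.0])
        measured = A @ true_conc + 0.02 * np.random.default_rng(2).normal(size=4)

        conc, resid = nnls(A, measured)   # concentrations constrained to be >= 0
        print(conc, resid)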

  10. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available In Augmented Reality applications, the human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor for providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method using an optical flow approach to track visible points and maintain the virtual graphics overlay when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.

  11. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
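    A sketch of the outcome-distribution idea: sample geometry instances from a motion model, evaluate a dose metric on each, and summarize the resulting distribution by its mean and variance. Here the metric is the standard generalized EUD, and the 1-D dose profile and motion model are synthetic stand-ins:

        import numpy as np

        def eud(doses, a):
            """Generalized equivalent uniform dose for one geometry instance."""
            return np.mean(doses ** a) ** (1.0 / a)

        rng = np.random.default_rng(9)
        profile = np.clip(60.0 - np.abs(np.arange(-50, 50)), 0.0, None)  # toy dose line
        euds = []
        for _ in range(1_000):
            shift = int(round(rng.normal(0, 3)))                     # sampled organ motion
            euds.append(eud(np.roll(profile, shift)[30:70], a=-10))  # target EUD
        euds = np.array(euds)
        print(f"expected EUD {euds.mean():.1f}, residual uncertainty {euds.std():.1f}")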

  12. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented, making it robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem that contains a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted to a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is achieved.
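    The paper's convex reformulation is specific to its uncertainty model; as a simpler and widely used robustification of MVDR against both steering-vector and covariance errors, diagonal loading gives the flavor (a stand-in, not the authors' method):

        import numpy as np

        def robust_mvdr_weights(R, steering, loading=1e-2):
            """MVDR weights with diagonal loading: inflating the diagonal of
            the sample covariance desensitizes the beamformer to errors in
            R and in the presumed steering vector."""
            n = R.shape[0]
            R_loaded = R + loading * np.trace(R).real / n * np.eye(n)
            w = np.linalg.solve(R_loaded, steering)
            return w / (steering.conj() @ w)       # enforce unit look-direction gain

        rng = np.random.default_rng(3)
        n, T = 8, 200
        a = np.ones(n, dtype=complex)              # presumed steering vector
        s = rng.normal(size=T)                     # unit-power source signal
        noise = 0.1 * (rng.normal(size=(n, T)) + 1j * rng.normal(size=(n, T)))
        X = a[:, None] * s + noise
        R = X @ X.conj().T / T
        w = robust_mvdr_weights(R, a)
        print(abs(w.conj() @ a))                   # ~1: distortionless response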

  13. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  14. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential. According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review

  15. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor have the engineering profession been able to agree on an interpretation of robustness which facilitates for its uantification. A European COST action TU 601 on ‘Robustness of structures' has started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  16. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  17. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential. The purpose of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines.

  18. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study relation among them. This article attempts to present the concept of qualitative robustness as forwarded by first proponents and its later development. It illustrates intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about relation between influence function and qualitative robustness citing some examples from literature and providing a new counter-example. At the end it places a useful finite and a simulated version of qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of correlation coefficient using simulated as well as real data sets.
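    A small simulation in the spirit of the article's closing comparison: contaminate bivariate data with a few gross outliers and observe how differently two correlation estimators respond (an illustration of qualitative robustness, not the proposed QRI itself):

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(0)
        n, eps = 500, 0.02                       # 2% contamination
        x = rng.normal(size=n)
        y = 0.8 * x + np.sqrt(1 - 0.8**2) * rng.normal(size=n)
        k = int(eps * n)
        x[:k], y[:k] = 10.0, -10.0               # small adversarial cluster

        print(f"Pearson:  {pearsonr(x, y)[0]:.3f}")   # dragged far from 0.8
        print(f"Spearman: {spearmanr(x, y)[0]:.3f}")  # much less affected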

  19. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  20. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic...... substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  1. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  2. Computing the maximum volume inscribed ellipsoid of a polytopic projection

    NARCIS (Netherlands)

    Zhen, Jianzhe; den Hertog, Dick

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  4. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  5. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum
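    The commonly used method that the abstract revisits can be sketched directly: the background of a magnitude MR image is Rayleigh distributed, and the Rayleigh mode equals sigma, so the histogram peak of background pixels estimates the noise level (the paper's new robust estimator refines this idea):

        import numpy as np

        def sigma_from_background_mode(background_pixels, bins=256):
            """Noise sigma from background MR data: the mode of a Rayleigh
            distribution equals its scale parameter sigma, so the histogram
            peak is a direct estimate."""
            counts, edges = np.histogram(background_pixels, bins=bins)
            peak = np.argmax(counts)
            return 0.5 * (edges[peak] + edges[peak + 1])

        # Synthetic check: Rayleigh background with sigma = 5
        rng = np.random.default_rng(6)
        bg = rng.rayleigh(scale=5.0, size=100_000)
        print(sigma_from_background_mode(bg))    # close to 5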

  6. Robust surgery loading

    NARCIS (Netherlands)

    Hans, Elias W.; Wullink, Gerhard; van Houdenhoven, Mark; Kazemier, Geert

    2008-01-01

    We consider the robust surgery loading problem for a hospital’s operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This

  7. Robustness and structure of complex networks

    Science.gov (United States)

    Shao, Shuai

    This dissertation covers the two major parts of my PhD research on statistical physics and complex networks: i) modeling a new type of attack -- localized attack, and investigating robustness of complex networks under this type of attack; ii) discovering the clustering structure in complex networks and its influence on the robustness of coupled networks. Complex networks appear in every aspect of our daily life and are widely studied in Physics, Mathematics, Biology, and Computer Science. One important property of complex networks is their robustness under attacks, which depends crucially on the nature of attacks and the structure of the networks themselves. Previous studies have focused on two types of attack: random attack and targeted attack, which, however, are insufficient to describe many real-world damages. Here we propose a new type of attack -- localized attack, and study the robustness of complex networks under this type of attack, both analytically and via simulation. On the other hand, we also study the clustering structure in the network, and its influence on the robustness of a complex network system. In the first part, we propose a theoretical framework to study the robustness of complex networks under localized attack based on percolation theory and generating function method. We investigate the percolation properties, including the critical threshold of the phase transition p_c and the size of the giant component P_∞. We compare localized attack with random attack and find that while random regular (RR) networks are more robust against localized attack, Erdős–Rényi (ER) networks are equally robust under both types of attacks. As for scale-free (SF) networks, their robustness depends crucially on the degree exponent lambda. The simulation results show perfect agreement with theoretical predictions. We also test our model on two real-world networks: a peer-to-peer computer network and an airline network, and find that the real-world networks
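    A minimal simulation of a localized attack in the spirit of the dissertation: remove a "ball" of nodes grown breadth-first around a seed and measure the surviving giant component (synthetic ER graph; the analytical framework uses percolation theory and generating functions instead):

        import networkx as nx

        def localized_attack(G, seed, fraction):
            """Remove a breadth-first ball of nodes around the seed, then
            return the relative size of the remaining giant component."""
            n_remove = int(fraction * G.number_of_nodes())
            ball = list(nx.bfs_tree(G, seed))[:n_remove]   # BFS discovery order
            H = G.copy()
            H.remove_nodes_from(ball)
            giant = max(nx.connected_components(H), key=len)
            return len(giant) / G.number_of_nodes()

        G = nx.fast_gnp_random_graph(10_000, 4 / 10_000, seed=0)  # mean degree ~4
        seed = next(iter(max(nx.connected_components(G), key=len)))
        print(localized_attack(G, seed, fraction=0.3))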

  8. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

    In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.

  9. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

    Due to economic pressure industries, when planning, tend to focus on optimizing the expected profit or the yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan to be performed when reality does not reflect what was expected in the planning phase. The modern research trend focuses on "robustness" of solutions instead of yield or profit. Although ro...

  10. Proceedings of the First International Symposium on Robust Design 2014

    DEFF Research Database (Denmark)

    The symposium concerns the topic of robust design from a practical and industry orientated perspective. During the 2 day symposium we will share our understanding of the need of industry with respect to the control of variance, reliability issues and approaches to robust design. The target audience...

  11. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC based adaptive filters.
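    A sketch of the MCC update: the Gaussian kernel of the error scales the step, so impulsive errors barely move the weights. The step-size rule below simply follows the kernel, an illustrative simplification rather than the paper's MSD-optimal derivation:

        import numpy as np

        def mcc_lms(x, d, order=8, mu0=0.05, sigma=1.0):
            """LMS-style adaptive filter driven by the maximum correntropy
            criterion: exp(-e^2 / (2 sigma^2)) down-weights impulsive errors."""
            w = np.zeros(order)
            for n in range(order, len(x)):
                u = x[n - order + 1:n + 1][::-1]   # regressor [x_n ... x_{n-7}]
                e = d[n] - w @ u
                w += mu0 * np.exp(-e**2 / (2 * sigma**2)) * e * u
            return w

        # System identification under heavy-tailed (impulsive) noise
        rng = np.random.default_rng(7)
        h = rng.normal(size=8)                     # unknown system
        x = rng.normal(size=5_000)
        d = np.convolve(x, h)[:5_000] + 0.1 * rng.standard_t(df=1.2, size=5_000)
        print(np.linalg.norm(mcc_lms(x, d) - h))   # small weight misalignment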

  12. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  13. Information theory perspective on network robustness

    International Nuclear Information System (INIS)

    Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

    2016-01-01

    A crucial challenge in network theory is the study of the robustness of a network when facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on Information Theory, that considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process defined in a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology. - Highlights: • A novel methodology to measure the robustness of a network to component failure or targeted attacks is proposed. • The use of the network's distance PDF allows a precise analysis. • The method provides a dynamic robustness profile showing the response of the topology to each failure event. • The measure is capable of detecting the network's critical elements.
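    One reading of the proposed measure: compare the shortest-path-distance distributions of the topology before and after a failure event, for instance with the Jensen-Shannon distance (the paper's exact dissimilarity may differ):

        import networkx as nx
        import numpy as np
        from scipy.spatial.distance import jensenshannon

        def distance_hist(G):
            """Histogram of all finite shortest-path distances in G."""
            lengths = [d for _, dd in nx.shortest_path_length(G)
                       for d in dd.values() if d > 0]
            return np.bincount(lengths)

        def topological_dissimilarity(G, H):
            """Jensen-Shannon distance between the distance PDFs of G and H."""
            h0, h1 = distance_hist(G), distance_hist(H)
            n = max(len(h0), len(h1))
            p0 = np.pad(h0, (0, n - len(h0))) / h0.sum()
            p1 = np.pad(h1, (0, n - len(h1))) / h1.sum()
            return jensenshannon(p0, p1)

        G = nx.barabasi_albert_graph(300, 2, seed=0)
        hub = max(dict(G.degree), key=dict(G.degree).get)  # targeted failure
        H = G.copy()
        H.remove_node(hub)
        print(topological_dissimilarity(G, H))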

  14. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  15. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available As one of the most critical issues in target tracking, the α-jerk model is an effective maneuvering-target tracking model. Non-Gaussian noises always exist in the tracking process, and they usually lead to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with weighted least squares based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is also analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
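    A simplified single-step variant of the idea (not the paper's exact fixed-point algorithm): the Gaussian kernel of the whitened innovation inflates the effective measurement covariance, so an outlying measurement is largely ignored:

        import numpy as np

        def mcc_kalman_update(x_pred, P_pred, z, H, R, sigma=2.0):
            """Correntropy-weighted Kalman measurement update: a small kernel
            value (large innovation) inflates R, shrinking the gain."""
            innov = z - H @ x_pred
            S = H @ P_pred @ H.T + R
            m2 = float(innov @ np.linalg.solve(S, innov))    # Mahalanobis^2
            kernel = np.exp(-m2 / (2 * sigma**2))
            R_eff = R / max(kernel, 1e-8)                    # outlier => huge R_eff
            K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R_eff)
            x_new = x_pred + K @ innov
            P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
            return x_new, P_new

        # 1D constant-velocity state, position-only measurement, gross outlier
        x_pred, P_pred = np.array([0.0, 1.0]), np.eye(2)
        H, R = np.array([[1.0, 0.0]]), np.array([[0.5]])
        print(mcc_kalman_update(x_pred, P_pred, np.array([25.0]), H, R)[0])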

  16. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  17. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Pramana – journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  18. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  20. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]

  1. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
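    The authors' formulation couples maximum entropy with maximum likelihood; as a neighbouring worked example that shares the stressed positivity property, the standard Poisson maximum-likelihood (MLEM) unfolder uses multiplicative updates phi <- phi * R^T(d / (R phi)) / R^T 1, which keep the spectrum positive over the whole energy range:

        import numpy as np

        def mlem_unfold(R, counts, n_iter=200):
            """Poisson maximum-likelihood unfolding with multiplicative
            (hence positivity-preserving) updates."""
            phi = np.ones(R.shape[1])
            norm = R.sum(axis=0)
            for _ in range(n_iter):
                expected = R @ phi
                phi *= (R.T @ (counts / np.maximum(expected, 1e-12))) / norm
            return phi

        # Toy 3-bin detector response and Poisson-distributed measured counts
        rng = np.random.default_rng(8)
        R = np.array([[0.8, 0.2, 0.0],
                      [0.2, 0.6, 0.2],
                      [0.0, 0.2, 0.8]])
        true_phi = np.array([100.0, 50.0, 20.0])
        counts = rng.poisson(R @ true_phi).astype(float)
        print(mlem_unfold(R, counts))    # close to true_phi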

  2. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We suggest adapting the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
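    The modified likelihood can be read as mixing the network output with a uniform distribution: with outlier probability epsilon the label carries no class information, which caps the loss any single mislabeled example can contribute. A sketch of that reading (epsilon fixed here, whereas the paper adapts it on a validation set):

        import numpy as np

        def robust_class_probs(logits, outlier_prob):
            """(1 - eps) * softmax + eps / n_classes: outliers are modeled
            as carrying no class information."""
            z = logits - logits.max(axis=1, keepdims=True)
            softmax = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
            return (1 - outlier_prob) * softmax + outlier_prob / logits.shape[1]

        logits = np.array([[8.0, 0.0, 0.0]])      # confident prediction of class 0
        for eps in (0.0, 0.05):
            p = robust_class_probs(logits, eps)
            print(eps, -np.log(p[0, 2]))          # NLL if the label were class 2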

  3. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  4. Passion, Robustness and Perseverance

    DEFF Research Database (Denmark)

    Lim, Miguel Antonio; Lund, Rebecca

    2016-01-01

    Evaluation and merit in the measured university are increasingly based on taken-for-granted assumptions about the “ideal academic”. We suggest that the scholar now needs to show that she is passionate about her work and that she gains pleasure from pursuing her craft. We suggest that passion and pleasure achieve an exalted status as something compulsory. The scholar ought to feel passionate about her work and signal that she takes pleasure also in the difficult moments. Passion has become a signal of robustness and perseverance in a job market characterised by funding shortages, increased pressure...... way to demonstrate their potential and, crucially, their passion for their work. Drawing on the literature on technologies of governance, we reflect on what is captured and what is left out by these two evaluation instruments. We suggest that bibliometric analysis at the individual level is deeply

  5. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    Full Text Available In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
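    The continuous L1 functionals mentioned here are typically realized with a Charbonnier-style penalty, a differentiable approximation to |s| applied to both the constancy and smoothness terms; a tiny sketch:

        import numpy as np

        def charbonnier(s2, eps=1e-3):
            """Robust penalty Psi(s^2) = sqrt(s^2 + eps^2): grows like |s|,
            so large residuals (outliers) are not squared."""
            return np.sqrt(s2 + eps**2)

        residuals2 = np.array([0.0, 0.01, 1.0, 100.0])  # squared violations
        print(charbonnier(residuals2))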

  6. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  7. Robust snapshot interferometric spectropolarimetry.

    Science.gov (United States)

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object, from which an accurate Stokes vector can be calculated in the spectral domain. It is inherently robust to 3D pose variation of the object, since it is designed so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of milliseconds with high accuracy.

  8. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  9. Robust Control Design via Linear Programming

    Science.gov (United States)

    Keel, L. H.; Bhattacharyya, S. P.

    1998-01-01

    This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set the solvability conditions with a fixed order controller are relaxed and a solution is more easily enabled. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.

  10. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for the robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of timber structures and will discuss the consequences of such robustness issues related to the future development of timber structures.

  11. Robust UAV Mission Planning

    NARCIS (Netherlands)

    Evers, L.; Dollevoet, T.; Barros, A.I.; Monsuur, H.

    2014-01-01

    Unmanned Aerial Vehicles (UAVs) can provide significant contributions to information gathering in military missions. UAVs can be used to capture both full motion video and still imagery of specific target locations within the area of interest. In order to improve the effectiveness of a

  12. Robust UAV mission planning

    NARCIS (Netherlands)

    Evers, L.; Dollevoet, T.; Barros, A.I.; Monsuur, H.

    2011-01-01

    Unmanned Aerial Vehicles (UAVs) can provide significant contributions to information gathering in military missions. UAVs can be used to capture both full motion video and still imagery of specific target locations within the area of interest. In order to improve the effectiveness of a reconnaissance

  14. Robust UAV Mission Planning

    NARCIS (Netherlands)

    L. Evers (Lanah); T.A.B. Dollevoet (Twan); A.I. Barros (Ana); H. Monsuur (Herman)

    2011-01-01

    Unmanned Aerial Vehicles (UAVs) can provide significant contributions to information gathering in military missions. UAVs can be used to capture both full motion video and still imagery of specific target locations within the area of interest. In order to improve the effectiveness of a

  15. Contributions to robust methods of creep analysis

    International Nuclear Information System (INIS)

    Penny, B.K.

    1991-01-01

    Robust methods for the predictions of deformations and lifetimes of components operating in the creep range are presented. The ingredients used for this are well-tried numerical techniques combined with the concepts of continuum damage and so-called reference stresses. The methods described are derived in order to obtain the maximum benefit during the early stages of design where broad assessments of the influences of material choice, loadings and geometry need to be made quickly and with economical use of computers. It is also intended that the same methods will be of value during operation if estimates of damage or if exercises in life extension or inspection timing are required. (orig.)

  16. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  17. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determination of the maximum water hammer is one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  18. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
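    The maximum gene-support tree is simply the modal topology across single-gene trees. A toy sketch, assuming topologies have already been reduced to canonical strings (a real pipeline would first canonicalize Newick trees):

        from collections import Counter

        # One (hypothetical) canonical topology string per single-gene tree
        gene_trees = [
            "((A,B),(C,D))", "((A,B),(C,D))", "((A,C),(B,D))",
            "((A,B),(C,D))", "((A,D),(B,C))", "((A,B),(C,D))",
        ]
        support = Counter(gene_trees)
        mgs_tree, votes = support.most_common(1)[0]
        print(f"maximum gene-support tree: {mgs_tree} ({votes}/{len(gene_trees)})")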

  19. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  20. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Analyzing the linearized model, we then elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: (1) constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; (2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discuss the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it
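    A minimal sketch of the three-stage linear cascade: perturbing an upstream deactivation rate changes the output amplitude, but with a slow rate-limiting final module the response duration stays nearly invariant (illustrative parameters only):

        import numpy as np
        from scipy.integrate import odeint

        def cascade(x, t, k, stim_end):
            """Each stage is driven by the one above and deactivated at
            rate k[i] (phosphatase-like); stage 1 sees a square stimulus."""
            s = 1.0 if t < stim_end else 0.0
            return [s - k[0] * x[0],
                    x[0] - k[1] * x[1],
                    x[1] - k[2] * x[2]]

        t = np.linspace(0, 80, 800)
        for k1 in (0.5, 1.0, 2.0):                # perturb an upstream module
            sol = odeint(cascade, [0.0, 0.0, 0.0], t, args=((k1, 0.8, 0.1), 10.0))
            out = sol[:, 2]
            above = t[out > 0.5 * out.max()]      # time above half-maximum
            print(f"k1={k1}: peak {out.max():.2f}, "
                  f"duration ~{above[-1] - above[0]:.1f}")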

  1. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  2. Investigation of a measure of robustness in inductively coupled plasma mass spectrometry

    Science.gov (United States)

    Makonnen, Yoseif; Beauchemin, Diane

    2015-01-01

    In industrial/commercial settings where operators often have minimal expertise in inductively coupled plasma (ICP) mass spectrometry (MS), there is a prevalent need for a response factor indicating robust plasma conditions, analogous to the Mg II/Mg I ratio in ICP optical emission spectrometry (OES), whereby a Mg II/Mg I ratio of 10 constitutes robust conditions. While minimizing the oxide ratio usually corresponds to robust conditions, there is no specific target value that is widely accepted as indicating robust conditions. Furthermore, tuning for low oxide ratios does not necessarily guarantee minimal matrix effects, as oxide ratios really address polyatomic interferences. In experiments conducted in parallel for both MS and OES, element pairs of similar mass but very different ionization potential were exploited for this purpose, the rationale being that, if these elements were ionized to the same extent, then that could be indicative of a robust plasma. The Be II/Li I intensity ratio was directly related to the Mg II/Mg I ratio in OES. Moreover, the 9Be+/7Li+ ratio was inversely related to the CeO+/Ce+ and LaO+/La+ oxide ratios in MS. The effects of different matrices (i.e., 0.01-0.1 M Na) were also investigated and compared to a conventional argon plasma optimized for maximum sensitivity. The suppression effect of these matrices was significantly reduced, if not eliminated in the case of 0.01 M Na, when the 9Be+/7Li+ ratio was around 0.30 on the Varian 820 MS instrument. Moreover, a very similar ratio (0.28) increased robustness to the same extent on a completely different ICP-MS instrument (PerkinElmer NEXION). Much greater robustness was achieved using a mixed-gas plasma with nitrogen in the outer gas and either nitrogen or hydrogen as a sheathing gas, as the 9Be+/7Li+ ratio was then around 1.70. To the best of our knowledge, this is the first report on using a simple analyte intensity ratio, 9Be+/7Li+, to gauge plasma robustness.

  3. Evaluation of the maximum-likelihood adaptive neural system (MLANS) applications to noncooperative IFF

    Science.gov (United States)

    Chernick, Julian A.; Perlovsky, Leonid I.; Tye, David M.

    1994-06-01

    This paper describes applications of the maximum likelihood adaptive neural system (MLANS) to the characterization of clutter in IR images and to the identification of targets. The characterization of image clutter is needed to improve target detection and to enhance the ability to compare the performance of different algorithms using diverse imagery data. Enhanced unambiguous IFF is important for fratricide reduction, while automatic cueing and targeting is becoming an ever increasing part of operations. We utilized MLANS, a parametric neural network that combines optimal statistical techniques with a model-based approach. This paper shows that MLANS outperforms classical classifiers, the quadratic classifier and the nearest neighbor classifier: on the one hand, it is not limited to the usual Gaussian distribution assumption and can adapt in real time to the image clutter distribution; on the other hand, MLANS learns from fewer samples and is more robust than the nearest neighbor classifier. Future research will address uncooperative IFF using fused IR and MMW data.

  4. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    Full Text Available The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick case, a Swedish legal scandal in which a patient at a mental institution was wrongfully convicted of eight murders.

  5. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images.

  6. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
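    As a rough plausibility check on the 90°-100°C figure (a sketch under simplifying assumptions, not the paper's full energy balance: conduction and sensible/latent heat fluxes are neglected and the emissivity value is assumed), one can ask what temperature lets a surface shed the quoted 1000 W m⁻² by longwave emission alone:

```python
# Radiative-limit check of the 90-100 degC estimate. Assumption: the dry
# surface sheds the absorbed shortwave flux purely by longwave emission,
# so that emissivity * sigma * T^4 = Q.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
Q_ABSORBED = 1000.0  # upper value of absorbed shortwave flux, W m^-2
EMISSIVITY = 0.95    # assumed soil emissivity

t_k = (Q_ABSORBED / (EMISSIVITY * SIGMA)) ** 0.25
print(f"radiative-limit surface temperature: {t_k - 273.15:.0f} degC")
# prints ~96 degC, consistent with the 90-100 degC range quoted above
```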

  7. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  8. Robustness of IPTV business models

    NARCIS (Netherlands)

    Bouwman, H.; Zhengjia, M.; Duin, P. van der; Limonard, S.

    2008-01-01

    The final stage in the STOF method is an evaluation of the robustness of the design, for which the method provides some guidelines. For many innovative services, the future holds numerous uncertainties, which makes evaluating the robustness of a business model a difficult task. In this chapter, we

  9. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2009-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure.

  10. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.

  11. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  12. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  13. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  14. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  15. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...

  16. Robust boosting via convex optimization

    Science.gov (United States)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: (1) The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. (2) How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. (3) How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. (4) How to adapt boosting to regression problems

  17. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
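    To make the estimator family concrete, here is a minimal sketch (my own illustration under simplified assumptions, not the authors' code) of a minimum density power divergence estimator for the binary special case of the logistic model; the tuning parameter alpha > 0 trades efficiency for robustness, and alpha → 0 recovers maximum likelihood. Function and variable names are hypothetical.

```python
# Minimum density power divergence estimator (MDPDE) sketch for *binary*
# logistic regression; the paper treats the polytomous case.
import numpy as np
from scipy.optimize import minimize

def mdpde_loss(beta, X, y, alpha):
    p = 1.0 / (1.0 + np.exp(-X @ beta))       # P(Y = 1 | x)
    f_obs = np.where(y == 1, p, 1.0 - p)      # model density at observed label
    # sum over both labels of f^(1+alpha), minus the empirical alpha-term
    mass = p ** (1 + alpha) + (1 - p) ** (1 + alpha)
    return np.mean(mass - (1 + alpha) / alpha * f_obs ** alpha)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.5 * X[:, 1])))).astype(int)
y[:5] = 1 - y[:5]                             # inject a few mislabels
fit = minimize(mdpde_loss, x0=np.zeros(2), args=(X, y, 0.5))
print("robust coefficients:", fit.x)          # resists the mislabeled points
```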

  18. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    The solar energy is used as a power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. With the changing of the sun illumination, due to variation of the angle of incidence of sun radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; the mathematical model is not required, and therefore the implementation of this control method in a real control system is easy. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
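    For contrast with the fuzzy controller, here is a minimal sketch of the perturb-and-observe (hill-climbing) baseline described above; `read_panel` and `set_voltage` stand in for hypothetical hardware-interface functions, and the initial voltage and step size are illustrative.

```python
# Perturb-and-observe MPPT sketch: nudge the reference voltage, observe the
# power, and reverse direction whenever the power drops.
def perturb_and_observe(read_panel, set_voltage, v_init=17.0,
                        step=0.2, iterations=100):
    v_ref = v_init
    v, i = read_panel()           # (voltage, current) from the panel
    p_prev = v * i
    direction = 1.0
    for _ in range(iterations):
        v_ref += direction * step
        set_voltage(v_ref)
        v, i = read_panel()
        p = v * i
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v_ref
```

Its weakness, as noted above, is that under rapidly changing insolation the fixed perturbation step tracks the moving maximum power point slowly, which is what the fuzzy rule base is designed to remedy.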

  19. Enhanced echolocation via robust statistics and super-resolution of sonar images

    Science.gov (United States)

    Kim, Kio

    Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other important features that are known to be crucial for dolphin echolocation. A varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into 2 dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed application of robust statistics to the acoustic images produced by forward looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and its performance is studied. The second use of robust
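    As a concrete illustration of the L-estimator concept (a sketch of the general idea, not the dissertation's exact weight choice), an L-estimator is a linear combination of order statistics; with symmetric trimming weights it reduces to a trimmed mean that resists spiky outliers in an echo feature:

```python
# L-estimator sketch: weighted combination of sorted samples. Uniform
# weights over the central order statistics give a trimmed mean.
import numpy as np

def l_estimate(samples, trim=0.2):
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    k = int(trim * n)                       # samples trimmed at each end
    weights = np.zeros(n)
    weights[k:n - k] = 1.0 / (n - 2 * k)    # uniform weight on the middle
    return float(weights @ x)

rng = np.random.default_rng(1)
echo = np.concatenate([rng.normal(1.0, 0.1, 95),
                       [8.0, 9.0, 7.5, 10.0, 6.0]])   # spiky outliers
print(l_estimate(echo))    # close to 1.0 despite the outliers
```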

  20. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  1. A scoring mechanism for the rank aggregation of network robustness

    Science.gov (United States)

    Yazdani, Alireza; Dueñas-Osorio, Leonardo; Li, Qilin

    2013-10-01

    To date, a number of metrics have been proposed to quantify the inherent robustness of network topology against failures. However, each single metric usually offers only a limited view of network vulnerability to different types of random failures and targeted attacks. When applied to certain network configurations, different metrics rank network topology robustness in different orders, which is rather inconsistent, and no single metric fully characterizes network robustness against different modes of failure. To overcome such inconsistency, this work proposes a multi-metric approach as the basis for evaluating an aggregate ranking of network topology robustness. This is based on the simultaneous utilization of a minimal set of distinct robustness metrics that are standardized so as to allow a direct comparison of vulnerability across networks with different sizes and configurations, leading to an initial scoring of inherent topology robustness. Subsequently, based on the inputs of the initial scoring, a rank aggregation method is employed to allocate an overall ranking of robustness to each network topology. A discussion is presented in support of the presented multi-metric approach and its applications to more realistically assess and rank network topology robustness.
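    A minimal sketch of the multi-metric idea (the metric names, scores, and the Borda-style aggregation rule below are illustrative assumptions, not the paper's exact choices): standardize each robustness metric, rank the networks under each, and sum the rank points:

```python
# Standardize several robustness metrics, then aggregate per-metric ranks
# with a simple Borda count to get one overall robustness ranking.
import numpy as np

metrics = {                      # hypothetical scores for networks A-D
    "algebraic_connectivity": [0.12, 0.45, 0.30, 0.22],
    "avg_efficiency":         [0.55, 0.61, 0.48, 0.70],
    "spectral_gap":           [0.08, 0.33, 0.21, 0.15],
}
names = ["A", "B", "C", "D"]
scores = np.zeros(len(names))
for values in metrics.values():
    z = (np.array(values) - np.mean(values)) / np.std(values)  # standardize
    ranks = np.argsort(np.argsort(z))      # rank position, 0 = worst
    scores += ranks                        # Borda points per metric
ranking = [names[i] for i in np.argsort(-scores)]
print("aggregate robustness ranking:", ranking)
```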

  2. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, AdaBoost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.
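    A minimal numerical illustration of the truncation idea (not the paper's exact transformation): capping the exponential loss bounds the weight any single mislabeled example can receive in a boosting round:

```python
# Truncating the exponential loss caps per-example boosting weights, so a
# single extreme outlier cannot dominate a round.
import numpy as np

def exp_loss(margin):
    return np.exp(-margin)

def truncated_exp_loss(margin, c=3.0):
    return np.minimum(np.exp(-margin), np.exp(c))   # loss capped at e^c

outlier_margin = -10.0   # badly mislabeled example, far on the wrong side
print(exp_loss(outlier_margin))            # ~22026: dominates the round
print(truncated_exp_loss(outlier_margin))  # ~20: bounded influence
```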

  3. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, including bridges and buildings. Typically, modern structural design codes require that 'the consequence of damages to structures should not be disproportional to the causes of the damages'. However, although the importance of robustness for structural design is widely recognized, the code requirements are not specified in detail, which makes practical use difficult. This paper describes a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines.

  4. Robustness of airline route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route network by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all its geographical area of influence. The aim of this study is to analyze the topology and robustness of the network route of airlines following Low Cost Carriers (LCCs) and Full Service Carriers (FSCs) business models. Results show that FSC hubs are more central than LCC bases in their route network. As a result, LCC route networks are more robust than FSC networks.

  5. Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim

    2011-01-01

    A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used

  6. Robustness Recipes for Minimax Robust Optimization in Intensity Modulated Proton Therapy for Oropharyngeal Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Voort, Sebastian van der [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Water, Steven van de [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Perkó, Zoltán [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Heijmen, Ben [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Lathouwers, Danny [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Hoogeman, Mischa, E-mail: m.hoogeman@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands)

    2016-05-01

    Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can
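    As a worked example of the unilateral recipe above (evaluated at the validation scenario's error levels), the setup-robustness setting comes out as follows:

```python
# Evaluate the unilateral robustness recipe at the validation scenario
# Sigma = sigma = 1.5 mm (systematic and random setup error SDs).
sigma_sys, sigma_rand = 1.5, 1.5   # mm
SR = (-0.15 * sigma_sys**2 + 0.27 * sigma_rand**2
      + 1.85 * sigma_sys - 0.06 * sigma_rand + 1.22)
print(f"setup robustness setting: {SR:.2f} mm")   # ~4.18 mm, with RR = 3%
```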

  7. Mutational robustness of gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Aalt D J van Dijk

    Full Text Available Mutational robustness of gene regulatory networks refers to their ability to generate constant biological output upon mutations that change network structure. Such networks contain regulatory interactions (transcription factor-target gene interactions) but often also protein-protein interactions between transcription factors. Using computational modeling, we study factors that influence robustness and we infer several network properties governing it. These include the type of mutation, i.e. whether a regulatory interaction or a protein-protein interaction is mutated, and, in the case of mutation of a regulatory interaction, the sign of the interaction (activating vs. repressive). In addition, we analyze the effect of combinations of mutations and we compare networks containing monomeric with those containing dimeric transcription factors. Our results are consistent with available data on biological networks, for example based on evolutionary conservation of network features. As a novel and remarkable property, we predict that networks are more robust against mutations in monomer than in dimer transcription factors, a prediction for which analysis of conservation of DNA binding residues in monomeric vs. dimeric transcription factors provides indirect evidence.

  8. Engineering Robustness of Microbial Cell Factories.

    Science.gov (United States)

    Gong, Zhiwei; Nielsen, Jens; Zhou, Yongjin J

    2017-10-01

    Metabolic engineering and synthetic biology offer great prospects for developing microbial cell factories capable of converting renewable feedstocks into fuels, chemicals, food ingredients, and pharmaceuticals. However, prohibitively low production rates and mass concentrations remain the major hurdles in industrial processes, even when the biosynthetic pathways are comprehensively optimized. These limitations are caused by a variety of factors hostile to host cell survival, such as harsh industrial conditions, fermentation inhibitors from biomass hydrolysates, and toxic compounds including metabolic intermediates and valuable target products. Therefore, engineered microbes with robust phenotypes are essential for achieving higher yield and productivity. In this review, recent advances in engineering the robustness and tolerance of cell factories are described to cope with these issues, and novel strategies with great potential to enhance the robustness of cell factories are briefly introduced, including metabolic pathway balancing, transporter engineering, and adaptive laboratory evolution. This review also highlights the integration of advanced systems and synthetic biology principles toward engineering the harmony of overall cell function, more than the specific pathways or enzymes. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j, given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharges at noise-impacted airports) on air travel are performed.
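    For orientation, the constrained "standard formulation" mentioned above can be sketched as a doubly constrained maximum-entropy (gravity) trip-distribution model fitted by iterative proportional fitting; the zone counts and cost matrix below are illustrative, not from the paper:

```python
# Doubly constrained maximum-entropy trip distribution: start from the
# entropy-model seed exp(-beta * cost) and balance rows and columns until
# the trip table matches the origin and destination totals.
import numpy as np

origins = np.array([400.0, 600.0])   # trips produced at each origin zone
dests   = np.array([500.0, 500.0])   # trips attracted to each destination
cost    = np.array([[1.0, 2.0],
                    [2.0, 1.0]])     # generalized travel cost matrix
beta = 0.5                           # cost-sensitivity parameter

T = np.exp(-beta * cost)             # maximum-entropy seed
for _ in range(50):                  # iterative proportional fitting
    T *= (origins / T.sum(axis=1))[:, None]   # match row totals
    T *= (dests / T.sum(axis=0))[None, :]     # match column totals
print(np.round(T, 1))                # trip table meeting both margins
```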

  10. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.

  11. Robustness of power systems under a democratic-fiber-bundle-like model.

    Science.gov (United States)

    Yağan, Osman

    2015-06-01

    We consider a power system with N transmission lines whose initial loads (i.e., power flows) L_1, ..., L_N are independent and identically distributed with P_L(x) = P[L ≤ x]. The capacity C_i defines the maximum flow allowed on line i and is assumed to be given by C_i = (1+α)L_i, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p⋆ = 1 − E[L] / max_x ( P[L > x] (αx + E[L | L > x]) ), where E[·] is the expectation operator; (ii) we derive conditions on the distribution P_L(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions - uniform, Pareto, and Weibull - showing that, with the minimum load L_min and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best with the shape parameter selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p⋆ is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
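    A Monte Carlo sketch of the democratic load-redistribution model (my illustration, not the paper's code; the uniform load distribution and α = 0.5 are chosen so the formula above predicts a critical attack size near p ≈ 0.11):

```python
# Attack a fraction p of lines, spread the shed load equally over survivors,
# and iterate failures of lines whose total load exceeds capacity.
import numpy as np

def surviving_fraction(p, n=100_000, alpha=0.5, seed=2):
    rng = np.random.default_rng(seed)
    load = rng.uniform(0.0, 2.0, n)        # L ~ Uniform(0, 2), E[L] = 1
    cap = (1.0 + alpha) * load             # C_i = (1 + alpha) L_i
    alive = np.ones(n, dtype=bool)
    alive[rng.random(n) < p] = False       # initial random attack
    extra = load[~alive].sum()             # total load to redistribute
    while True:
        if not alive.any():
            return 0.0
        per_line = extra / alive.sum()     # democratic share per survivor
        failed = alive & (load + per_line > cap)
        if not failed.any():
            return alive.sum() / n
        extra += load[failed].sum()        # cascade: shed failed lines' load
        alive[failed] = False

for p in (0.05, 0.10, 0.15, 0.20):
    print(p, surviving_fraction(p))
# survives below the critical attack size (~0.11 here) and collapses to
# zero above it: a first-order breakdown, as the analysis predicts.
```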

  12. Robust methods for data reduction

    CERN Document Server

    Farcomeni, Alessio

    2015-01-01

    Robust Methods for Data Reduction gives a non-technical overview of robust data reduction techniques, encouraging the use of these important and useful methods in practical applications. The main areas covered include principal components analysis, sparse principal component analysis, canonical correlation analysis, factor analysis, clustering, double clustering, and discriminant analysis.The first part of the book illustrates how dimension reduction techniques synthesize available information by reducing the dimensionality of the data. The second part focuses on cluster and discriminant analy

  13. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were

  14. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
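    For reference, the Fitch algorithm that the network version generalizes can be sketched in a few lines for a rooted binary tree and a single character (a minimal illustration; the paper additionally reconciles assignments at reticulate vertices and supports general cost matrices via Sankoff-style dynamic programming):

```python
# Fitch small-parsimony sketch on a rooted binary tree: propagate state
# sets upward; an empty intersection costs one substitution.
def fitch(node, states):
    """node: leaf name (str) or (left, right) tuple; returns (set, cost)."""
    if isinstance(node, str):                          # leaf
        return {states[node]}, 0
    (s1, c1), (s2, c2) = fitch(node[0], states), fitch(node[1], states)
    if s1 & s2:                                        # overlap: no new step
        return s1 & s2, c1 + c2
    return s1 | s2, c1 + c2 + 1                        # union: one substitution

tree = (("human", "chimp"), ("mouse", "rat"))
states = {"human": "A", "chimp": "A", "mouse": "G", "rat": "A"}
print(fitch(tree, states))   # ({'A'}, 1): one substitution suffices
```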

  15. Targeting and Persuasive Advertising

    OpenAIRE

    Egli, Alain (Autor/in)

    2015-01-01

    Firms face a prisoner's dilemma when advertising in a competitive environment. In a Hotelling framework with persuasive advertising, firms counteract this prisoner's dilemma with targeting. The firms even solve the prisoner's problem if targeted advertising is effective enough. Advertising turns from wasteful competition into profits. This is in contrast to wasteful competition as an argument for regulation. A further result is maximum advertising differentiation: the firms target their advertisin...

  16. Attack robustness and centrality of complex networks.

    Directory of Open Access Journals (Sweden)

    Swami Iyer

    Full Text Available Many complex systems can be described by networks, in which the constituent components are represented by vertices and the connections between the components are represented by edges between the corresponding vertices. A fundamental issue concerning complex networked systems is the robustness of the overall system to the failure of its constituent parts. Since the degree to which a networked system continues to function, as its component parts are degraded, typically depends on the integrity of the underlying network, the question of system robustness can be addressed by analyzing how the network structure changes as vertices are removed. Previous work has considered how the structure of complex networks changes as vertices are removed uniformly at random, in decreasing order of their degree, or in decreasing order of their betweenness centrality. Here we extend these studies by investigating the effect on network structure of targeting vertices for removal based on a wider range of non-local measures of potential importance than simply degree or betweenness. We consider the effect of such targeted vertex removal on model networks with different degree distributions, clustering coefficients and assortativity coefficients, and for a variety of empirical networks.
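    A minimal sketch of such a targeted-removal experiment (the centrality choice, graph model, and step count are illustrative): repeatedly remove the currently most central vertex, recompute the centrality, and track the relative size of the giant component:

```python
# Targeted attack: delete vertices in decreasing order of a chosen
# centrality and record how the giant component shrinks.
import networkx as nx

def attack_curve(G, centrality=nx.betweenness_centrality, steps=10):
    G = G.copy()
    n0, sizes = G.number_of_nodes(), []
    for _ in range(steps):
        ranking = centrality(G)                   # recompute after each removal
        target = max(ranking, key=ranking.get)    # most central vertex
        G.remove_node(target)
        giant = max(nx.connected_components(G), key=len)
        sizes.append(len(giant) / n0)
    return sizes

G = nx.barabasi_albert_graph(300, 2, seed=7)
print(attack_curve(G))   # fraction of nodes left in the giant component
```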

  17. New robust chaotic system with exponential quadratic term

    International Nuclear Information System (INIS)

    Bao Bocheng; Li Chunbiao; Liu Zhong; Xu Jianping

    2008-01-01

    This paper proposes a new robust chaotic system of three-dimensional quadratic autonomous ordinary differential equations, obtained by introducing an exponential quadratic term. This system can display a double-scroll chaotic attractor with only two equilibria, and is found to be robustly chaotic over a very wide parameter domain, with a positive maximum Lyapunov exponent. Some basic dynamical properties and the chaotic behaviour of the novel attractor are studied. By numerical simulation, this paper verifies that the three-dimensional system can also evolve into periodic and chaotic behaviours under a constant controller. (general)

  18. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and renders pixel intensities murky. In fetal ultrasound images, edges and local fine details are particularly important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised to proficiently suppress speckle noise while simultaneously preserving such features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and using different shapes of quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of various filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. Comparatively, the proposed filter surpasses the conventional filters.
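    A minimal sketch of the underlying Rayleigh maximum likelihood principle with a plain square kernel (the paper's contribution is the statistically tuned quadrilateral kernels, which are not reproduced here): under Rayleigh-distributed speckle, the ML estimate of the scale parameter from a neighborhood is sqrt(mean(x²)/2):

```python
# Rayleigh ML despeckling sketch: replace each pixel with the ML scale
# estimate of its k x k neighborhood, sigma_hat = sqrt(mean(x^2) / 2).
import numpy as np

def rayleigh_ml_filter(img, k=3):
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + k, c:c + k]
            out[r, c] = np.sqrt(np.mean(window ** 2) / 2.0)
    return out

# toy demo: speckled constant region
rng = np.random.default_rng(5)
noisy = rng.rayleigh(scale=10.0, size=(64, 64))
print(rayleigh_ml_filter(noisy).mean())   # close to the true scale, 10
```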

  19. Optics robustness of the ATLAS Tile Calorimeter

    CERN Document Server

    Costa Batalha Pedro, Rute; The ATLAS collaboration

    2018-01-01

    TileCal, the central hadronic calorimeter of the ATLAS detector, is composed of plastic scintillators interleaved with iron plates, read out via wavelength shifting optical fibres. The optical properties of these components are known to suffer from natural ageing and to degrade with exposure to radiation. The calorimeter was designed for 10 years of LHC operation at the design luminosity of $10^{34}$ cm$^{-2}$s$^{-1}$. Irradiation tests of scintillators and fibres showed that their light yield decreases by about 10% at the maximum dose expected after the 10 years of LHC operation. The robustness of the TileCal optics components is evaluated using the calibration systems of the calorimeter: the Cs-137 gamma source, laser light, and integrated photomultiplier signals of particles from collisions. It is observed that the loss of light yield increases with exposure to radiation, as expected. The decrease in the light yield during the years 2015-2017, corresponding to the LHC Run 2, will be reported.

  20. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper results in better performance.
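    A minimal sketch of the third (regularization) approach on illustrative data (not the paper's test problems): a Tikhonov term limits the solution's sensitivity to perturbations in the data matrix:

```python
# Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2,
# which has the closed-form solution (A^T A + lam I)^{-1} A^T b.
import numpy as np

def regularized_lstsq(A, b, lam=0.1):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.05 * rng.normal(size=50)
A_noisy = A + 0.1 * rng.normal(size=A.shape)    # perturbed data matrix
print(regularized_lstsq(A_noisy, b, lam=0.5))   # stays close to x_true
```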

  1. Efficacy of robust optimization plan with partial-arc VMAT for photon volumetric-modulated arc therapy: A phantom study.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi

    2017-09-01

    This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center CTV positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by a 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan consisted of a directly optimized dose to the CTV under a maximum setup uncertainty of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as an original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position up to the maximum setup uncertainty of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ when comparing the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). The robust optimization plan's efficacy using partial-arc VMAT thus depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  2. Maximum Power Point Tracking Using Sliding Mode Control for Photovoltaic Array

    Directory of Open Access Journals (Sweden)

    J. Ghazanfari

    2013-09-01

    Full Text Available In this paper, a robust Maximum Power Point Tracking (MPPT) scheme for a PV array is proposed using sliding mode control, by defining a new formulation for the sliding surface based on the incremental conductance (INC) method. The stability and robustness of the proposed controller are investigated with respect to load variations and environmental changes. Three different types of DC-DC converter are used in the maximum power point (MPP) tracking system, and the results obtained are given. The simulation results confirm the effectiveness of the proposed method in the presence of load variations and environmental changes for the different DC-DC converter topologies.
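    A minimal sketch of the sliding-surface idea (function names and the bang-bang update are illustrative, not the authors' controller): at the MPP, dP/dV = I + V·dI/dV = 0, so that quantity can serve directly as the sliding surface s, with the converter duty cycle switched on the sign of s:

```python
# Incremental-conductance sliding surface: s = I + V * dI/dV = dP/dV,
# driven to zero by bang-bang switching of the converter duty cycle.
def sliding_surface(v, i, v_prev, i_prev):
    dv = v - v_prev
    di = i - i_prev
    if abs(dv) < 1e-9:              # avoid division by zero
        return di                   # at constant V, drive dI to zero
    return i + v * (di / dv)        # s = I + V * dI/dV

def update_duty(duty, s, step=0.005):
    # bang-bang control: push the operating point toward s = 0
    # (the sign convention depends on the converter topology)
    return min(max(duty + (step if s > 0 else -step), 0.0), 1.0)
```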

  3. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    Science.gov (United States)

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  4. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  5. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    Science.gov (United States)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
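    The moment propagation itself is compact enough to sketch (an illustration with a toy function standing in for the CFD code): with independent inputs, a first-order Taylor expansion gives mean ≈ f(μ) and variance ≈ Σᵢ (∂f/∂xᵢ)² σᵢ²:

```python
# Approximate first-order moment method: propagate the mean and variance
# of independent normal inputs through f via its sensitivity derivatives
# (estimated here by finite differences).
import numpy as np

def first_order_moments(f, mu, sigma, h=1e-6):
    mu = np.asarray(mu, dtype=float)
    f0 = f(mu)                               # expected value ~ f(mean)
    var = 0.0
    for i in range(len(mu)):
        x = mu.copy()
        x[i] += h
        dfdx = (f(x) - f0) / h               # sensitivity derivative
        var += (dfdx * sigma[i]) ** 2        # independent inputs add in quadrature
    return f0, np.sqrt(var)

lift = lambda x: x[0] * x[1] ** 2            # toy surrogate for a CFD output
mean, std = first_order_moments(lift, [1.2, 0.8], [0.05, 0.02])
print(mean, std)                             # ~0.768 and ~0.05
```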

  6. Advances in robust fractional control

    CERN Document Server

    Padula, Fabrizio

    2015-01-01

    This monograph presents design methodologies for (robust) fractional control systems. It shows the reader how to take advantage of the superior flexibility of fractional control systems compared with integer-order systems in achieving more challenging control requirements. There is a high degree of current interest in fractional systems and fractional control arising from both academia and industry and readers from both milieux are catered to in the text. Different design approaches having in common a trade-off between robustness and performance of the control system are considered explicitly. The text generalizes methodologies, techniques and theoretical results that have been successfully applied in classical (integer) control to the fractional case. The first part of Advances in Robust Fractional Control is the more industrially-oriented. It focuses on the design of fractional controllers for integer processes. In particular, it considers fractional-order proportional-integral-derivative controllers, becau...

  7. Robustness of digital artist authentication

    DEFF Research Database (Denmark)

    Jacobsen, Robert; Nielsen, Morten

    In many cases it is possible to determine the authenticity of a painting from digital reproductions of the paintings; this has been demonstrated for a variety of artists and with different approaches. Common to all these methods in digital artist authentication is that the potential of the method...... is in focus, while the robustness has not been considered, i.e. the degree to which the data collection process influences the decision of the method. However, in order for an authentication method to be successful in practice, it needs to be robust to plausible error sources from the data collection....... In this paper we investigate the robustness of the newly proposed authenticity method introduced by the authors based on second generation multiresolution analysis. This is done by modelling a number of realistic factors that can occur in the data collection....

  8. Attractive ellipsoids in robust control

    CERN Document Server

    Poznyak, Alexander; Azhmyakov, Vadim

    2014-01-01

    This monograph introduces a newly developed robust-control design technique for a wide class of continuous-time dynamical systems called the “attractive ellipsoid method.” Along with a coherent introduction to the proposed control design and related topics, the monograph studies nonlinear affine control systems in the presence of uncertainty and presents a constructive and easily implementable control strategy that guarantees certain stability properties. The authors discuss linear-style feedback control synthesis in the context of the above-mentioned systems. The development and physical implementation of high-performance robust-feedback controllers that work in the absence of complete information is addressed, with numerous examples to illustrate how to apply the attractive ellipsoid method to mechanical and electromechanical systems. While theorems are proved systematically, the emphasis is on understanding and applying the theory to real-world situations. Attractive Ellipsoids in Robust Control will a...

  9. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

    Full text: If the driving field fluctuates during the quantum evolution, this produces errors in the applied operator. The holonomic (and geometric) quantum gates are believed to be robust against certain kinds of noise. Because of their geometric dependence, the holonomic operators can be robust against this kind of noise; in fact, if the fluctuations are fast enough they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and ideal evolutions is calculated for different noise correlation times. The holonomic quantum gates appear robust not only for fast fluctuating fields but also for slow fluctuating fields. These results can be explained by the geometric character of the holonomic operator: for fast fluctuating fields the fluctuations cancel out, while for slow fluctuating fields the fluctuations do not perturb the loop in parameter space. (author)

  10. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

    This study considers the problem of enhancing railway timetable robustness without adding slack time, hence increasing the travel time. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers...

  11. Robustness of muscle synergies during visuomotor adaptation

    Directory of Open Access Journals (Sweden)

    Reinhard eGentner

    2013-09-01

    Full Text Available During visuomotor adaptation a novel mapping between visual targets and motor commands is gradually acquired. How muscle activation patterns are affected by this process is an open question. We tested whether the structure of muscle synergies is preserved during adaptation to a visuomotor rotation. Eight subjects applied targeted isometric forces on a handle instrumented with a force transducer while electromyographic (EMG) activity was recorded from 13 shoulder and elbow muscles. The recorded forces were mapped into horizontal displacements of a virtual sphere with simulated mass, elasticity, and damping. The task consisted of moving the sphere to a target at one of eight equally spaced directions. Subjects performed three baseline blocks of 32 trials, followed by six blocks with a 45° CW rotation applied to the planar force, and finally three wash-out blocks without the perturbation. The sphere position at 100 ms after movement onset revealed significant directional error at the beginning of the rotation, a gradual learning in subsequent blocks, and aftereffects at the beginning of the wash-out. The change in initial force direction was closely related to the change in directional tuning of the initial EMG activity of most muscles. Throughout the experiment muscle synergies extracted using a non-negative matrix factorization algorithm from the muscle patterns recorded during the baseline blocks could reconstruct the muscle patterns of all other blocks with an accuracy significantly higher than chance, indicating structural robustness. In addition, the synergies extracted from individual blocks remained similar to the baseline synergies throughout the experiment. Thus synergy structure is robust during visuomotor adaptation suggesting that changes in muscle patterns are obtained by rotating the directional tuning of the synergy recruitment.
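
    The structural-robustness test described in the abstract can be sketched with an off-the-shelf non-negative matrix factorization: extract synergies from the baseline trials, then reconstruct the perturbation trials with the synergy matrix held fixed so that only the recruitment coefficients are re-estimated. This is a minimal stand-in for the authors' pipeline; the random data, the choice of four synergies, and the variance-accounted-for metric are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical EMG matrices: rows = trials, columns = 13 muscles.
baseline = rng.random((96, 13))    # e.g. three baseline blocks of 32 trials
rotation = rng.random((192, 13))   # trials recorded during the rotation

# Extract synergies (rows of H) from the baseline blocks.
model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
model.fit(baseline)
H = model.components_              # synergy structure (4 x 13)

# Reconstruct rotation-block patterns with the *fixed* baseline synergies:
# transform() re-estimates only the recruitment coefficients W.
W_rot = model.transform(rotation)
reconstruction = W_rot @ H

# Variance accounted for by the baseline synergies.
vaf = 1.0 - np.sum((rotation - reconstruction) ** 2) / np.sum(rotation ** 2)
print(f"VAF of baseline synergies on rotation trials: {vaf:.3f}")
```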

  12. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Lars, Krogstie; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities. RD is found to be applicable in organisations assigning a high importance to one or more factors that are known to be impacted by RD, while also experiencing a high level of occurrence of this factor. The RDAM supplements existing maturity models and metrics to provide a comprehensive set of data to support management.

  13. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet and present some properties that characterizes words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...

  14. Robust Forecasting for Energy Efficiency of Wireless Multimedia Sensor Networks.

    Science.gov (United States)

    Wang, Xue; Ma, Jun-Jie; Ding, Liang; Bi, Dao-Wei

    2007-11-15

    An important criterion of wireless sensor networks is the energy efficiency in specified applications. In this wireless multimedia sensor network, the observations are derived from acoustic sensors. Focused on the energy problem of target tracking, this paper proposes a robust forecasting method to enhance the energy efficiency of wireless multimedia sensor networks. Target motion information is acquired by acoustic sensor nodes while a distributed network with honeycomb configuration is constructed. Thereby, target localization is performed by multiple sensor nodes collaboratively through acoustic signal processing. A novel method, combining an autoregressive moving average (ARMA) model and radial basis function networks (RBFNs), is exploited to perform robust target position forecasting during target tracking. Then sensor nodes around the target are awakened according to the forecasted target position. With committee decision of sensor nodes, target localization is performed in a distributed manner and the uncertainty of detection is reduced. Moreover, a sensor-to-observer routing approach of the honeycomb mesh network is investigated to solve the data reporting considering the residual energy of sensor nodes. Target localization and forecasting are implemented in experiments. Meanwhile, sensor node awakening and dynamic routing are evaluated. Experimental results verify that the energy efficiency of the wireless multimedia sensor network is enhanced by the proposed target tracking method.
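
    A minimal sketch of the hybrid forecasting idea under simplifying assumptions: a least-squares AR model stands in for the ARMA stage, and a Gaussian radial basis function layer is then fit to the residuals the linear model leaves behind. The 1-D synthetic track and all parameters are hypothetical; a real deployment would run this per coordinate on the localization output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D target track sampled at regular intervals.
n, p = 200, 2
t = np.arange(n)
x = np.sin(0.05 * t) + 0.02 * rng.standard_normal(n)

# --- Linear stage: AR(p) one-step predictor fit by least squares ---
lags = np.column_stack([x[p - k - 1 : -k - 1] for k in range(p)])  # [x_{t-1}, x_{t-2}]
y = x[p:]
a, *_ = np.linalg.lstsq(lags, y, rcond=None)
linear_pred = lags @ a

# --- RBF stage: learn the residual structure of the linear model ---
resid = y - linear_pred
centers = lags[rng.choice(len(lags), size=10, replace=False)]
gamma = 5.0
phi = np.exp(-gamma * ((lags[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
w, *_ = np.linalg.lstsq(phi, resid, rcond=None)

combined_pred = linear_pred + phi @ w
print("linear RMSE:  ", np.sqrt(np.mean(resid ** 2)))
print("combined RMSE:", np.sqrt(np.mean((y - combined_pred) ** 2)))
```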

  15. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
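
    The Levinson recursion mentioned in the abstract solves the Toeplitz normal equations for the prediction-error filter in O(n^2) operations. A minimal sketch, assuming only an autocorrelation sequence estimated from a windowed trace (the trace here is synthetic noise):

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson recursion for the prediction-error filter a, given the
    autocorrelation sequence r[0..order]. Returns (a, error power)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        k = -(r[1 : m + 1][::-1] @ a[:m]) / err   # reflection coefficient
        a[: m + 1] += k * a[: m + 1][::-1]        # order-m filter update
        err *= 1.0 - k ** 2                       # prediction error power
    return a, err

# Autocorrelation of a synthetic 1024-sample trace (stand-in for a seismogram).
rng = np.random.default_rng(2)
trace = rng.standard_normal(1024)
r = np.correlate(trace, trace, mode="full")[1023 : 1023 + 11] / 1024

a, err = levinson_durbin(r, order=10)
print("filter:", np.round(a, 3), " error power:", round(err, 4))
```

    For a valid autocorrelation sequence the reflection coefficient magnitude stays below 1, which is the stability property the abstract refers to.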

  16. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
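
    A small numeric illustration of this maximization, assuming a single-diode panel model with made-up parameters (the paper's measured panel data are not reproduced here): the power P(V) = V*I(V) is maximized where dP/dV = 0, located below with a bounded scalar optimizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical single-diode cell model; all values are illustrative.
I_ph, I_0, n, V_T = 5.0, 1e-9, 1.5, 0.0257   # photocurrent [A], saturation
                                             # current [A], ideality, thermal voltage [V]

def current(v):
    return I_ph - I_0 * (np.exp(v / (n * V_T)) - 1.0)

def neg_power(v):
    return -v * current(v)          # maximizing P = V*I <=> minimizing -P

res = minimize_scalar(neg_power, bounds=(0.0, 0.85), method="bounded")
v_mp = res.x
print(f"V_mp = {v_mp:.3f} V, I_mp = {current(v_mp):.3f} A, "
      f"P_max = {-res.fun:.3f} W")
```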

  17. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  18. Systematic and robust design of photonic crystal waveguides by topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Jensen, Jakob Søndergaard; Sigmund, Ole

    2010-01-01

    A robust topology optimization method is presented to consider manufacturing uncertainties in tailoring dispersion properties of photonic crystal waveguides. The under-, normal and over-etching scenarios in the manufacturing process are represented by dilated, intermediate and eroded designs based on a threshold projection. The objective is formulated to minimize the maximum error between actual group indices and a prescribed group index among these three designs. A novel photonic crystal waveguide facilitating slow light with a group index of n(g) = 40 is achieved by the robust optimization approach. The numerical result illustrates that robust topology optimization provides a systematic and robust design methodology for photonic crystal waveguide design.

  19. Essays on robust asset pricing

    NARCIS (Netherlands)

    Horváth, Ferenc

    2017-01-01

    The central concept of this doctoral dissertation is robustness. I analyze how model and parameter uncertainty affect financial decisions of investors and fund managers, and what their equilibrium consequences are. Chapter 1 gives an overview of the most important concepts and methodologies used in

  20. Robust visual hashing via ICA

    International Nuclear Information System (INIS)

    Fournel, Thierry; Coltuc, Daniela

    2010-01-01

    Designed to maximize information transmission in the presence of noise, independent component analysis (ICA) could appear in certain circumstances as a statistics-based tool for robust visual hashing. Several ICA-based scenarios can attempt to reach this goal. A first one is here considered.

  1. Robustness of raw quantum tomography

    Science.gov (United States)

    Asorey, M.; Facchi, P.; Florio, G.; Man'ko, V. I.; Marmo, G.; Pascazio, S.; Sudarshan, E. C. G.

    2011-01-01

    We scrutinize the effects of non-ideal data acquisition on the tomograms of quantum states. The presence of a weight function, schematizing the effects of a finite window or equivalently noise, only affects the state reconstruction procedure by a normalization constant. The results are extended to a discrete mesh and show that quantum tomography is robust under incomplete and approximate knowledge of tomograms.

  2. Robustness of raw quantum tomography

    Energy Technology Data Exchange (ETDEWEB)

    Asorey, M. [Departamento de Fisica Teorica, Facultad de Ciencias, Universidad de Zaragoza, 50009 Zaragoza (Spain); Facchi, P. [Dipartimento di Matematica, Universita di Bari, I-70125 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Florio, G. [Dipartimento di Fisica, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Man'ko, V.I., E-mail: manko@lebedev.r [P.N. Lebedev Physical Institute, Leninskii Prospect 53, Moscow 119991 (Russian Federation); Marmo, G. [Dipartimento di Scienze Fisiche, Universita di Napoli 'Federico II', I-80126 Napoli (Italy); INFN, Sezione di Napoli, I-80126 Napoli (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Pascazio, S. [Dipartimento di Fisica, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Sudarshan, E.C.G. [Department of Physics, University of Texas, Austin, TX 78712 (United States)

    2011-01-31

    We scrutinize the effects of non-ideal data acquisition on the tomograms of quantum states. The presence of a weight function, schematizing the effects of a finite window or equivalently noise, only affects the state reconstruction procedure by a normalization constant. The results are extended to a discrete mesh and show that quantum tomography is robust under incomplete and approximate knowledge of tomograms.

  3. Aspects of robust linear regression

    NARCIS (Netherlands)

    Davies, P.L.

    1993-01-01

    Section 1 of the paper contains a general discussion of robustness. In Section 2 the influence function of the Hampel-Rousseeuw least median of squares estimator is derived. Linearly invariant weak metrics are constructed in Section 3. It is shown in Section 4 that $S$-estimators satisfy an exact

  4. Robustness Regions for Dichotomous Decisions.

    Science.gov (United States)

    Vijn, Pieter; Molenaar, Ivo W.

    1981-01-01

    In the case of dichotomous decisions, the total set of all assumptions/specifications for which the decision would have been the same is the robustness region. Inspection of this (data-dependent) region is a form of sensitivity analysis which may lead to improved decision making. (Author/BW)

  5. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, althou...

  6. Robust control design with MATLAB

    CERN Document Server

    Gu, Da-Wei; Konstantinov, Mihail M

    2013-01-01

    Robust Control Design with MATLAB® (second edition) helps the student to learn how to use well-developed advanced robust control design methods in practical cases. To this end, several realistic control design examples from teaching-laboratory experiments, such as a two-wheeled, self-balancing robot, to complex systems like a flexible-link manipulator are given detailed presentation. All of these exercises are conducted using MATLAB® Robust Control Toolbox 3, Control System Toolbox and Simulink®. By sharing their experiences in industrial cases with minimum recourse to complicated theories and formulae, the authors convey essential ideas and useful insights into robust industrial control systems design using major H-infinity optimization and related methods allowing readers quickly to move on with their own challenges. The hands-on tutorial style of this text rests on an abundance of examples and features for the second edition: ·        rewritten and simplified presentation of theoretical and meth...

  7. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  8. Facial Symmetry in Robust Anthropometrics

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 57, č. 3 (2012), s. 691-698 ISSN 0022-1198 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : forensic science * anthropology * robust image analysis * correlation analysis * multivariate data * classification Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.244, year: 2012

  9. Sparse and Robust Factor Modelling

    NARCIS (Netherlands)

    C. Croux (Christophe); P. Exterkate (Peter)

    2011-01-01

    textabstractFactor construction methods are widely used to summarize a large panel of variables by means of a relatively small number of representative factors. We propose a novel factor construction procedure that enjoys the properties of robustness to outliers and of sparsity; that is, having

  10. Robust distributed cognitive relay beamforming

    KAUST Repository

    Pandarakkottilil, Ubaidulla; Aissa, Sonia

    2012-01-01

    design takes into account a parameter of the error in the channel state information (CSI) to render the performance of the beamformer robust in the presence of imperfect CSI. Though the original problem is non-convex, we show that the proposed design can

  11. Approximability of Robust Network Design

    NARCIS (Netherlands)

    Olver, N.K.; Shepherd, F.B.

    2014-01-01

    We consider robust (undirected) network design (RND) problems where the set of feasible demands may be given by an arbitrary convex body. This model, introduced by Ben-Ameur and Kerivin [Ben-Ameur W, Kerivin H (2003) New economical virtual private networks. Comm. ACM 46(6):69-73], generalizes the

  12. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the
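
    In heavily simplified form, the scenario-robust planning problem described above can be cast as a minimax linear program: minimize a bound t on heart dose across breathing scenarios while enforcing target coverage in every scenario. All matrices and values below are random stand-ins for dose-influence data, not clinical quantities.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

# Hypothetical dose-influence matrices for a few breathing scenarios.
n_b, n_heart, n_target, n_scen = 20, 5, 8, 3
D_heart = [rng.random((n_heart, n_b)) for _ in range(n_scen)]
D_target = [rng.random((n_target, n_b)) + 1.0 for _ in range(n_scen)]
prescription = 50.0

# Variables: beamlet weights w >= 0 and worst-case heart-dose bound t.
# minimize t  s.t.  D_heart_s @ w <= t,  D_target_s @ w >= prescription.
c = np.zeros(n_b + 1)
c[-1] = 1.0

A_ub, b_ub = [], []
for s in range(n_scen):
    for row in D_heart[s]:                  # heart voxels, scenario s
        A_ub.append(np.append(row, -1.0))   # row @ w - t <= 0
        b_ub.append(0.0)
    for row in D_target[s]:                 # target voxels, scenario s
        A_ub.append(np.append(-row, 0.0))   # -row @ w <= -prescription
        b_ub.append(-prescription)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (n_b + 1))
print("worst-case heart dose bound:", round(res.x[-1], 2))
```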

  13. Robust Short-Lag Spatial Coherence Imaging.

    Science.gov (United States)

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
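
    A toy sketch of the projection-and-weighting idea: each lag image becomes one column of a matrix, the stack is projected onto a low-dimensional subspace, and the denoised lag images are weighted and summed. A truncated SVD stands in here for the RPCA step (RPCA would additionally split off a sparse outlier component); the image stack and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stack of coherence images, one per short-lag value.
n_lags, h, w = 10, 64, 64
stack = rng.random((n_lags, h, w))

# One column per lag image; project onto a rank-3 subspace.
X = stack.reshape(n_lags, -1).T                    # (pixels x lags)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
rank = 3
X_lowrank = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# M-weighting: weight each denoised lag image before summing across lags.
weights = np.linspace(1.0, 0.5, n_lags)            # illustrative weights
r_slsc = (X_lowrank * weights).sum(axis=1).reshape(h, w)
print(r_slsc.shape)                                # final R-SLSC-style image
```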

  14. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.

  15. SU-F-T-205: Effectiveness of Robust Treatment Planning to Account for Inter- Fractional Variation in Intensity Modulated Proton Therapy for Head Neck Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Li, X; Zhang, J; Qin, A; Liang, J; Zhou, J; Yan, D; Chen, P; Krauss, D; Ding, X [Beaumont Health Systeml, Royal Oak, Michigan (United States)

    2016-06-15

    Purpose: To evaluate the potential benefits of robust optimization in intensity modulated proton therapy (IMPT) treatment planning to account for inter-fractional variation in Head and Neck Cancer (HNC). Methods: One patient with bilateral HNC previously treated at our institution was used in this study. Ten daily CBCTs were selected. The CT numbers of the CBCTs were corrected by mapping the CT numbers from the simulation CT via Deformable Image Registration. The planning target volumes (PTVs) were defined by a 3 mm expansion from the clinical target volumes (CTVs). The prescription was 70 Gy and 54 Gy to CTV1 and CTV2 for robust optimized (RO) plans, and to PTV1 and PTV2 for conventionally optimized (CO) plans, respectively. Both techniques were generated in RayStation with the same beam angles: two anterior oblique and two posterior oblique angles. Similar dose constraints were used to ensure that 99% of CTV1 received 100% of the prescription dose while keeping hotspots below 110% of the prescription. To evaluate the dosimetric result through the course of treatment, the contours were deformed from the simulation CT to the daily CBCTs, modified, and approved by a radiation oncologist. The initial plan on the simulation CT was replayed on the daily CBCTs following bony alignment. Target coverage was evaluated using the daily doses and the cumulative dose. Results: Eight of 10 daily deliveries with the RO plan delivered at least 95% of the prescription dose to CTV1 and CTV2 while keeping the maximum hotspot below 112% of the prescription, compared with only one of 10 for the CO plan meeting the same standards. For the cumulative doses, target coverage for the RO and CO plans was quite similar, owing to the compensation of cold and hot spots. Conclusion: Robust optimization can be effectively applied to compensate for target dose deficits caused by inter-fractional target geometric variation in IMPT treatment planning.

  16. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  17. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)

  18. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  19. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR...

  20. SU-F-T-188: A Robust Treatment Planning Technique for Proton Pencil Beam Scanning Cranial Spinal Irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, M; Mehta, M; Badiyan, S; Young, K; Malyapa, R; Regine, W; Langen, K [University of Maryland School of Medicine, Baltimore, MD (United States); Yam, M [University of Florida Proton Therapy Institute, Jacksonville, FL (United States)

    2016-06-15

    Purpose: To propose a proton pencil beam scanning (PBS) cranial spinal irradiation (CSI) treatment planning technique robust against patient roll, isocenter offset and proton range uncertainty. Methods: Proton PBS plans were created (Eclipse V11) for three previously treated CSI patients to 36 Gy (1.8 Gy/fraction). The target volume was separated into three regions: brain, upper spine and lower spine. One posterior-anterior (PA) beam was used for each spine region, and two posterior-oblique beams (15° apart from the PA direction, denoted as 2PO-15) for the brain region. For comparison, another plan using one PA beam for the brain target (denoted as 1PA) was created. Using the same optimization objectives, 98% of the CTV was optimized to receive the prescription dose. To evaluate plan robustness against patient roll, the gantry angle was increased by 3° and dose was recalculated without changing the proton spot weights. On the re-calculated plan, doses were then calculated using 12 scenarios that are combinations of isocenter shift (±3 mm in X, Y, and Z directions) and proton range variation (±3.5%). The worst-case-scenario (WCS) brain CTV dosimetric metrics were compared to the nominal plan. Results: For both beam arrangements, the brain field(s) and upper-spine field overlap in the T2–T5 region depending on patient anatomy. The maximum monitor units per spot were 48.7%, 47.2%, and 40.0% higher for 1PA plans than 2PO-15 plans for the three patients. The 2PO-15 plans have better dose conformity. At the same level of CTV coverage, the 2PO-15 plans have lower maximum dose and higher minimum dose to the CTV. The 2PO-15 plans also showed lower WCS maximum dose to the CTV, while the WCS minimum dose to the CTV was comparable between the two techniques. Conclusion: Our method of using two posterior-oblique beams for the brain target provides improved dose conformity and homogeneity, and plan robustness including patient roll.

  1. Robust Optimization of Fourth Party Logistics Network Design under Disruptions

    Directory of Open Access Journals (Sweden)

    Jia Li

    2015-01-01

    Full Text Available The Fourth Party Logistics (4PL) network faces disruptions of various sorts in a dynamic and complex environment. In order to explore the robustness of the network, 4PL network design under random disruptions is studied. The purpose of the research is to construct a 4PL network that can provide satisfactory service to customers at a lower cost when disruptions strike. Based on the definition of β-robustness, a robust optimization model of 4PL network design under disruptions is established. Given the NP-hard nature of the problem, an artificial fish swarm algorithm (AFSA) and a genetic algorithm (GA) are developed. The effectiveness of the algorithms is tested and compared on simulation examples. Comparing the optimal solutions of the 4PL network at different robustness levels indicates that the robust optimization model can effectively hedge against market risks and minimize cost when applied to 4PL network design.

  2. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC needed in the design of an SFCL can be determined.

  3. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  4. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  5. Robustness of the p53 network and biological hackers.

    Science.gov (United States)

    Dartnell, Lewis; Simeonidis, Evangelos; Hubank, Michael; Tsoka, Sophia; Bogle, I David L; Papageorgiou, Lazaros G

    2005-06-06

    The p53 protein interaction network is crucial in regulating the metazoan cell cycle and apoptosis. Here, the robustness of the p53 network is studied by analyzing its degeneration under two modes of attack. Linear Programming is used to calculate average path lengths among proteins and the network diameter as measures of functionality. The p53 network is found to be robust to random loss of nodes, but vulnerable to a targeted attack against its hubs, as a result of its architecture. The significance of the results is considered with respect to mutational knockouts of proteins and the directed attacks mounted by tumour inducing viruses.
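
    The two failure modes can be reproduced on any graph in a few lines of networkx; the scale-free graph below is an illustrative stand-in for the p53 network, with mean shortest path length over the largest connected component as the functionality measure, loosely following the abstract.

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=0)   # scale-free stand-in network

def mean_path_length(g):
    """Average shortest path length over the largest connected component."""
    core = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(core)

random.seed(1)
random_order = list(G.nodes())
random.shuffle(random_order)                               # random node failures
hub_order = sorted(G.nodes(), key=G.degree, reverse=True)  # targeted hub attack

for label, order in [("random failure", random_order),
                     ("targeted hub attack", hub_order)]:
    g = G.copy()
    g.remove_nodes_from(order[:20])                        # knock out 10% of nodes
    print(f"{label}: mean path length = {mean_path_length(g):.3f}")
```

    On scale-free graphs the hub attack typically degrades path lengths far more than random failure, mirroring the vulnerability reported for the p53 network.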

  6. An Evolutionary Approach for Robust Layout Synthesis of MEMS

    DEFF Research Database (Denmark)

    Fan, Zhun; Wang, Jiachuan; Goodman, Erik

    2005-01-01

    The paper introduces a robust design method for layout synthesis of MEM resonators subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained optimisation problem after certain assumptions, and treated with a multiobjective genetic algorithm (MOGA), a special type of evolutionary computing approach. A case study based on layout synthesis of a comb-driven MEM resonator shows that the approach proposed in this paper can lead to design results that meet the target performance and are less...

  7. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.

  8. Robust Decentralized Formation Flight Control

    Directory of Open Access Journals (Sweden)

    Zhao Weihua

    2011-01-01

    Full Text Available Motivated by the idea of multiplexed model predictive control (MMPC), this paper introduces a new framework for unmanned aerial vehicle (UAV) formation flight and coordination. Formulated using the MMPC approach, the whole centralized formation flight system is considered as a linear periodic system with the control inputs of each UAV subsystem as its periodic inputs. Divided into decentralized subsystems, the whole formation flight system is guaranteed stable if proper terminal costs and terminal constraints are added to each decentralized MPC formulation of the UAV subsystem. The decentralized robust MPC formulation for each UAV subsystem with bounded input disturbances and model uncertainties is also presented. Furthermore, an obstacle avoidance control scheme for obstacles of any shape and size, including ones not known a priori, is integrated under the unified MPC framework. The results from simulations demonstrate that the proposed framework can successfully achieve robust collision-free formation flights.

  9. Inefficient but robust public leadership.

    OpenAIRE

    Matsumura, Toshihiro; Ogawa, Akira

    2014-01-01

    We investigate endogenous timing in a mixed duopoly in a differentiated product market. We find that private leadership is better than public leadership from a social welfare perspective if the private firm is domestic, regardless of the degree of product differentiation. Nevertheless, the public leadership equilibrium is risk-dominant, and it is thus robust if the degree of product differentiation is high. We also find that regardless of the degree of product differentiation, the public lead...

  10. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

  11. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  12. Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact

    Science.gov (United States)

    Cheng, A. F.

    2017-12-01

    The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October, 2022. The kinetic impactor approach is considered to be applicable with warning times of 10 years or more and with hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, leaving possibly large fragments still on collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection from a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target will be disrupted. It turns out this rule of thumb does not work well. A comparison to numerical simulation results shows that a similar rule applies in the gravity limit, for large targets more than 300 m, where the maximum deflection is roughly the escape velocity at momentum enhancement factor β=2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ=1/3), but simulations find a slightly different scaling μ=0.43. In the smaller target size range that kinetic impactors would apply to, the catastrophic disruption limit is strength-controlled. A DART-like impactor won't disrupt any target asteroid down to significantly smaller size than the 50 m below which a hazardous object would not penetrate the atmosphere in any case unless it is unusually strong.
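
    For a feel for the quantities involved, the sketch below compares the velocity change imparted by a DART-like impactor with the escape velocity of a 300-m-diameter target; the density, impactor mass, and impact speed are illustrative assumptions, not values from the abstract.

```python
import numpy as np

G = 6.674e-11                        # gravitational constant [m^3 kg^-1 s^-2]
rho = 2000.0                         # assumed bulk density [kg/m^3]
R = 150.0                            # target radius [m] (300 m diameter)
M = rho * (4.0 / 3.0) * np.pi * R**3 # target mass [kg]

m_i, U = 500.0, 6000.0               # impactor mass [kg] and speed [m/s]
beta = 2.0                           # momentum enhancement factor

dv = beta * m_i * U / M              # imparted velocity change
v_esc = np.sqrt(2 * G * M / R)       # surface escape velocity

print(f"delta-v = {dv * 100:.4f} cm/s, v_esc = {v_esc * 100:.2f} cm/s, "
      f"ratio = {dv / v_esc:.1e}")
```

    With these illustrative numbers, a single DART-scale impact imparts a delta-v far below the escape-velocity scale of a 300 m target.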

  13. Efficient and robust gradient enhanced Kriging emulators.

    Energy Technology Data Exchange (ETDEWEB)

    Dalbey, Keith R.

    2013-08-01

    “Naive” or straight-forward Kriging implementations can often perform poorly in practice. The relevant features of the robustly accurate and efficient Kriging and Gradient Enhanced Kriging (GEK) implementations in the DAKOTA software package are detailed herein. The principal contribution is a novel, effective, and efficient approach to handle ill-conditioning of GEK's “correlation” matrix, RÑ, based on a pivoted Cholesky factorization of Kriging's (not GEK's) correlation matrix, R, which is a small sub-matrix within GEK's RÑ matrix. The approach discards sample points/equations that contribute the least “new” information to RÑ. Since these points contain the least new information, they are the ones which when discarded are both the easiest to predict and provide maximum improvement of RÑ's conditioning. Prior to this work, handling ill-conditioned correlation matrices was a major, perhaps the principal, unsolved challenge necessary for robust and efficient GEK emulators. Numerical results demonstrate that GEK predictions can be significantly more accurate when GEK is allowed to discard points by the presented method. Numerical results also indicate that GEK can be used to break the curse of dimensionality by exploiting inexpensive derivatives (such as those provided by automatic differentiation or adjoint techniques), smoothness in the response being modeled, and adaptive sampling. Development of a suitable adaptive sampling algorithm was beyond the scope of this work; instead adaptive sampling was approximated by omitting the cost of samples discarded by the presented pivoted Cholesky approach.
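
    A greedy pivoted Cholesky in the spirit described above takes only a few lines of numpy: at each step the largest remaining Schur-complement diagonal is pivoted to the front, and factorization stops once the remaining diagonal entries (the "new" information of the leftover points) fall below a tolerance. This is a generic sketch, not the DAKOTA implementation.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    """Greedy pivoted Cholesky of a symmetric PSD matrix. Stops early,
    discarding the points whose remaining diagonal falls below tol."""
    A = A.copy()
    n = A.shape[0]
    piv = np.arange(n)
    L = np.zeros_like(A)
    for k in range(n):
        j = k + int(np.argmax(np.diag(A)[k:]))  # most informative remaining point
        A[[k, j], :] = A[[j, k], :]             # symmetric row/column pivot
        A[:, [k, j]] = A[:, [j, k]]
        L[[k, j], :] = L[[j, k], :]
        piv[[k, j]] = piv[[j, k]]
        if A[k, k] < tol:
            return L[:, :k], piv[:k]            # drop ill-conditioned remainder
        L[k, k] = np.sqrt(A[k, k])
        L[k + 1 :, k] = A[k + 1 :, k] / L[k, k]
        A[k + 1 :, k + 1 :] -= np.outer(L[k + 1 :, k], L[k + 1 :, k])
    return L, piv

# Two nearly coincident sample sites make a Gaussian correlation matrix
# numerically singular; the pivoted factorization drops one of them.
x = np.array([0.0, 0.2, 0.4, 0.4 + 1e-7, 0.8])
R = np.exp(-30.0 * (x[:, None] - x[None, :]) ** 2)
L, kept = pivoted_cholesky(R)
print("points kept (pivot order):", kept)
```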

  14. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
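
    The EM inlier/outlier estimation at the heart of this formulation can be sketched in isolation (omitting the vector-field interpolation in the reproducing kernel Hilbert space): residuals of inlier matches follow an isotropic Gaussian, outliers a uniform density, and the Gaussian variance is initialized to a large value as in the paper. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical residual vectors of putative matches: inliers are tightly
# clustered around zero, outliers are spread uniformly.
inliers = 0.05 * rng.standard_normal((40, 2))
outliers = rng.uniform(-1.0, 1.0, (60, 2))
r = np.vstack([inliers, outliers])

area = 4.0                   # area of the outlier domain, here [-1, 1]^2
sigma2, gamma = 1.0, 0.5     # large initial variance, 50/50 inlier prior
for _ in range(50):
    # E-step: posterior probability that each match is an inlier.
    p_in = gamma * np.exp(-np.sum(r**2, 1) / (2 * sigma2)) / (2 * np.pi * sigma2)
    p_out = (1.0 - gamma) / area
    post = p_in / (p_in + p_out)
    # M-step: update noise variance and inlier proportion.
    sigma2 = np.sum(post * np.sum(r**2, 1)) / (2.0 * np.sum(post))
    gamma = post.mean()

print(f"estimated inlier fraction: {gamma:.2f}")
print("matches classified as inliers:", int((post > 0.5).sum()))
```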

  15. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  16. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  17. Robust control design verification using the modular modeling system

    International Nuclear Information System (INIS)

    Edwards, R.M.; Ben-Abdennour, A.; Lee, K.Y.

    1991-01-01

    The Modular Modeling System (B&W MMS) is being used as a design tool to verify robust controller designs for improving power plant performance while also providing fault-accommodating capabilities. These controllers are designed based on optimal control theory and are thus model-based controllers which are targeted for implementation in a computer-based digital control environment. The MMS is being successfully used to verify that the controllers are tolerant of uncertainties between the plant model employed in the controller and the actual plant; i.e., that they are robust. The two areas in which the MMS is being used for this purpose are the design of (1) a reactor power controller with improved reactor temperature response, and (2) a multiple input multiple output (MIMO) robust fault-accommodating controller for a deaerator level and pressure control problem

  18. A Probabilistic Approach for Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    of Structures and a probabilistic modelling of the timber material proposed in the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS). Due to the framework in the Danish Code the timber structure has to be evaluated with respect to the following criteria where at least one shall...... to criteria a) and b) the timber frame structure has one column with a reliability index a bit lower than an assumed target level. By removing three columns one by one, no significant extensive failure of the entire structure or significant parts of it is obtained. Therefore the structure can be considered......A probabilistic based robustness analysis has been performed for a glulam frame structure supporting the roof over the main court in a Norwegian sports centre. The robustness analysis is based on the framework for robustness analysis introduced in the Danish Code of Practice for the Safety...

  19. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as this is closely related to Working Group 3 (Robustness of systems) of the COST E55 project. Finally, an example of a robustness...... evaluation of a widespan timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45 m. Reliability analysis of the main members and the system is conducted, and based on this a robustness analysis is performed....

  20. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

    The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  1. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations on banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  2. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
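
    The down-weighting of variable regions can be illustrated with an iteratively reweighted Kabsch superposition, where per-atom weights are the inverse of the current variance estimates. This is a sketch of the general variance-weighted idea only; THESEUS's actual ML estimator also models correlations among atoms.

        import numpy as np

        def weighted_superpose(X, Y, n_iter=10):
            """Superpose Y onto X (both (N, 3)) with variance-based weights."""
            w = np.ones(len(X))
            for _ in range(n_iter):
                wn = w / w.sum()
                xc, yc = wn @ X, wn @ Y                    # weighted centroids
                A, B = X - xc, Y - yc
                U, _, Vt = np.linalg.svd((A * wn[:, None]).T @ B)
                S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
                R = U @ S @ Vt                             # proper rotation
                Yfit = (Y - yc) @ R.T + xc
                var = np.sum((X - Yfit) ** 2, axis=1) / 3 + 1e-6
                w = 1.0 / var                              # down-weight variable atoms
            return R, xc - yc @ R.T, w                     # apply as Y @ R.T + t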

  3. Soliton robustness in optical fibers

    International Nuclear Information System (INIS)

    Menyuk, C.R.

    1993-01-01

    Simulations and experiments indicate that solitons in optical fibers are robust in the presence of Hamiltonian deformations such as higher-order dispersion and birefringence but are destroyed in the presence of non-Hamiltonian deformations such as attenuation and the Raman effect. Two hypotheses are introduced that generalize these observations and give a recipe for when deformations will be Hamiltonian. Concepts from nonlinear dynamics are used to make these two hypotheses plausible. Soliton stabilization with frequency filtering is also briefly discussed from this point of view

  4. Robust and Sparse Factor Modelling

    DEFF Research Database (Denmark)

    Croux, Christophe; Exterkate, Peter

    Factor construction methods are widely used to summarize a large panel of variables by means of a relatively small number of representative factors. We propose a novel factor construction procedure that enjoys the properties of robustness to outliers and of sparsity; that is, having relatively few...... nonzero factor loadings. Compared to the traditional factor construction method, we find that this procedure leads to a favorable forecasting performance in the presence of outliers and to better interpretable factors. We investigate the performance of the method in a Monte Carlo experiment...

  5. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; čizmar, D.

    2010-01-01

    The present paper outlines results from working group 3 (WG3) in the EU COST Action E55 – ‘Modelling of the performance of timber structures’. The objectives of the project are related to the three main research activities: the identification and modelling of relevant load and environmental...... exposure scenarios, the improvement of knowledge concerning the behaviour of timber structural elements and the development of a generic framework for the assessment of the life-cycle vulnerability and robustness of timber structures....

  6. Sustainable Resilient, Robust & Resplendent Enterprises

    DEFF Research Database (Denmark)

    Edgeman, Rick

    to their impact. Resplendent enterprises are introduced with resplendence referring not to some sort of public or private façade, but instead refers to organizations marked by dual brilliance and nobility of strategy, governance and comportment that yields superior and sustainable triple bottom line performance....... Herein resilience, robustness, and resplendence (R3) are integrated with sustainable enterprise excellence (Edgeman and Eskildsen, 2013) or SEE and social-ecological innovation (Eskildsen and Edgeman, 2012) to aid progress of a firm toward producing continuously relevant performance that proceed from...

  7. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
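
    The generic recipe, maximizing a likelihood function of unknown parameters given noisy measurements, fits in a few lines. The exponential-decay model, noise level, and data below are hypothetical illustrations, not the MXLKID/LRLTRAN code itself.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def neg_log_likelihood(a, t, y, x0=1.0, sigma=0.1):
            """Negative log-likelihood of decay rate `a` for x' = -a x,
            observed with i.i.d. Gaussian noise of standard deviation sigma."""
            pred = x0 * np.exp(-a * t)               # model response
            return 0.5 * np.sum((y - pred) ** 2) / sigma ** 2

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 50)
        y = np.exp(-0.7 * t) + 0.1 * rng.standard_normal(t.size)   # true a = 0.7
        res = minimize_scalar(lambda a: neg_log_likelihood(a, t, y),
                              bounds=(0.0, 5.0), method='bounded')
        print(res.x)   # maximum likelihood estimate, close to 0.7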

  8. UAV Robust Strategy Control Based on MAS

    Directory of Open Access Journals (Sweden)

    Jian Han

    2014-01-01

    Full Text Available A novel multiagent system (MAS) has been proposed to integrate individual UAVs (unmanned aerial vehicles) to form a UAV team which can accomplish complex missions with better efficiency and effect. The MAS-based UAV team control is better able to cope with dynamic situations and enhances the performance of any single UAV. In this paper, the MAS proposed and established combines reacting and thinking abilities into an initiative, autonomous hybrid system which can solve missions involving coordinated flight and cooperative operation. The MAS uses the BDI model to support its logical perception and to classify the different missions; the missions are then allocated by an auction mechanism after analyzing dynamic parameters. A Prim potential algorithm, a particle swarm algorithm, and a reallocation mechanism are proposed to realize rational decomposition and optimal allocation in order to reach the maximum profit. In simulation, the MAS has been shown to promote the success ratio and raise robustness, while realizing the feasibility of coordinated flight and the optimality of cooperative missions.

  9. Robust Instrumentation[Water treatment for power plant]; Robust Instrumentering

    Energy Technology Data Exchange (ETDEWEB)

    Wik, Anders [Vattenfall Utveckling AB, Stockholm (Sweden)

    2003-08-01

    Cementa Slite Power Station is a heat recovery steam generator (HRSG) with moderate steam data: 3.0 MPa and 420 deg C. The heat is recovered from Cementa, a cement industry, without any use of auxiliary fuel. The power station commenced operation in 2001. The layout of the plant is unusual; there are none similar in Sweden and very few world-wide, so the operational experiences are limited. In connection with the commissioning of the power plant, an R&D project was identified with the objective to minimise the manpower needed for chemistry management of the plant. The lean chemistry management is based on robust instrumentation and a chemical-free water treatment plant. The concept of robust instrumentation consists of the following components: choice of on-line instrumentation with a minimum of O&M, and a chemical-free water treatment. The parameters are specific conductivity, cation conductivity, oxygen and pH. In addition to that, two fairly new on-line instruments were included: corrosion monitors and differential pH calculated from specific and cation conductivity. The chemical-free water treatment plant consists of softening, reverse osmosis and electro-deionisation. The operational experience shows that the cycle chemistry is not within the guidelines due to major problems with the operation of the power plant. These problems have made it impossible to reach steady state and thereby not viable to fully verify and validate the concept of robust instrumentation. From readings on the panel of the online analysers some conclusions may be drawn, e.g. that the differential pH measurements have fulfilled the expectations. The other on-line analysers have been working satisfactorily apart from contamination with turbine oil, which has been noticed at least twice. The corrosion monitors seem to be working, but the lack of trend curves from the mainframe computer system makes it hard to draw any clear conclusions. The chemical-free water treatment has met all

  10. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    International Nuclear Information System (INIS)

    Ding, Xuanfeng; Li, Xiaoqiang; Zhang, J. Michele; Kabolizadeh, Peyman; Stevens, Craig; Yan, Di

    2016-01-01

    Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc_multi-field) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, it was evident that the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc_multi-field plans. Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be

  11. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele; Kabolizadeh, Peyman; Stevens, Craig; Yan, Di

    2016-12-01

    Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc{sub multi-field}) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, it was evident that the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc{sub multi-field} plans. Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be implemented

  12. Perspective: Evolution and detection of genetic robustness

    NARCIS (Netherlands)

    Visser, de J.A.G.M.; Hermisson, J.; Wagner, G.P.; Ancel Meyers, L.; Bagheri-Chaichian, H.; Blanchard, J.L.; Chao, L.; Cheverud, J.M.; Elena, S.F.; Fontana, W.; Gibson, G.; Hansen, T.F.; Krakauer, D.; Lewontin, R.C.; Ofria, C.; Rice, S.H.; Dassow, von G.; Wagner, A.; Whitlock, M.C.

    2003-01-01

    Robustness is the invariance of phenotypes in the face of perturbation. The robustness of phenotypes appears at various levels of biological organization, including gene expression, protein folding, metabolic flux, physiological homeostasis, development, and even organismal fitness. The mechanisms

  13. Robust lyapunov controller for uncertain systems

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Elmetennani, Shahrazed

    2017-01-01

    Various examples of systems and methods are provided for Lyapunov control for uncertain systems. In one example, a system includes a process plant and a robust Lyapunov controller configured to control an input of the process plant. The robust

  14. Robust distributed cognitive relay beamforming

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2012-05-01

    In this paper, we present a distributed relay beamformer design for a cognitive radio network in which a cognitive (or secondary) transmit node communicates with a secondary receive node assisted by a set of cognitive non-regenerative relays. The secondary nodes share the spectrum with a licensed primary user (PU) node, and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. The proposed robust cognitive relay beamformer design seeks to minimize the total relay transmit power while ensuring that the transceiver signal-to-interference- plus-noise ratio and PU interference constraints are satisfied. The proposed design takes into account a parameter of the error in the channel state information (CSI) to render the performance of the beamformer robust in the presence of imperfect CSI. Though the original problem is non-convex, we show that the proposed design can be reformulated as a tractable convex optimization problem that can be solved efficiently. Numerical results are provided and illustrate the performance of the proposed designs for different network operating conditions and parameters. © 2012 IEEE.

  15. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e., by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate for the application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  16. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  17. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  18. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  19. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

    Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...

  20. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current, aging dependencies of maximum available energy. • Study on the various factors dependencies of relationships between SOE and SOC. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of moving-window energy-integral. • The robustness and feasibility of the proposed approaches are systematically evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn2O4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
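
    The energy bookkeeping behind such estimators can be sketched as follows: SOE is the ratio of remaining to maximum available energy, and the discharged energy is the time integral of instantaneous power. The function below assumes sampled voltage/current records and a reference maximum available energy; it illustrates the energy-integral idea, not the paper's moving-window estimator or its SOE-SOC relationship.

        import numpy as np

        def soe_profile(t, v, i, e_max_wh):
            """SOE(t) = 1 - (energy discharged so far) / (maximum available energy).

            t in seconds, v in volts, i in amps (positive on discharge);
            e_max_wh is the maximum available energy in Wh, e.g. from a full
            reference discharge at the same temperature and current rate.
            """
            p = v * i                                       # instantaneous power (W)
            steps = np.diff(t) * (p[1:] + p[:-1]) / 2.0     # trapezoidal increments (J)
            e_wh = np.concatenate(([0.0], np.cumsum(steps))) / 3600.0
            return 1.0 - e_wh / e_max_wh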

  1. Robust adaptive synchronization of general dynamical networks ...

    Indian Academy of Sciences (India)

    A robust adaptive synchronization scheme for these general complex networks with multiple delays and uncertainties is established and raised by employing the robust adaptive control principle and the Lyapunov stability theory. We choose ...

  2. Robust portfolio selection under norm uncertainty

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2016-06-01

    Full Text Available In this paper, we consider the robust portfolio selection problem which has a data uncertainty described by the (p, w)-norm in the objective function. We show that the robust formulation of this problem is equivalent to a linear optimization problem. Moreover, we present some numerical results concerning our robust portfolio selection problem.
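
    To see why such robust counterparts can collapse to linear programs, consider the simpler case of interval (box) uncertainty on the mean returns, an illustrative assumption rather than the paper's (p, w)-norm set: with long-only weights the worst case is attained at the lower bounds, so maximizing the worst-case return is an ordinary LP.

        import numpy as np
        from scipy.optimize import linprog

        r_lo = np.array([0.02, 0.05, 0.01])        # hypothetical lower return bounds
        # maximize r_lo @ x  subject to  sum(x) = 1,  0 <= x <= 1
        res = linprog(c=-r_lo, A_eq=np.ones((1, 3)), b_eq=[1.0],
                      bounds=[(0.0, 1.0)] * 3)
        print(res.x)                                # robust long-only weights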

  3. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse

  4. Methodology in robust and nonparametric statistics

    CERN Document Server

    Jurecková, Jana; Picek, Jan

    2012-01-01

    Introduction and Synopsis; Introduction; Synopsis; Preliminaries; Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems; Robust Estimation of Location and Regression; Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems; Asymptotic Representations for L-Estimators

  5. Robust and efficient walking with spring-like legs

    Energy Technology Data Exchange (ETDEWEB)

    Rummel, J; Blum, Y; Seyfarth, A, E-mail: juergen.rummel@uni-jena.de, E-mail: andre.seyfarth@uni-jena.de [Lauflabor Locomotion Laboratory, University of Jena, Dornburger Strasse 23, 07743 Jena (Germany)

    2010-12-15

    The development of bipedal walking robots is inspired by human walking. One way of implementing walking is to mimic human leg dynamics. A fundamental model representing human leg dynamics during walking and running is the bipedal spring-mass model, which is the basis for this paper. The aim of this study is the identification of leg parameters leading to a compromise between robustness and energy efficiency in walking. It is found that, compared to asymmetric walking, symmetric walking with flatter angles of attack reveals such a compromise. With increasing leg stiffness, energy efficiency increases continuously. However, robustness is maximal at moderate leg stiffness and decreases slightly with increasing stiffness. Hence, an adjustable leg compliance would be preferred, which is adaptable to the environment. If the ground is even, a high leg stiffness leads to energy-efficient walking. However, if external perturbations are expected, e.g. when the robot walks on uneven terrain, the leg should be softer and the angle of attack flatter. In the case of underactuated robots with constant physical springs, the leg stiffness should be larger than k̃ = 14 in order to use the most robust gait. Soft legs, however, lack both robustness and efficiency.

  6. Robust and efficient walking with spring-like legs

    International Nuclear Information System (INIS)

    Rummel, J; Blum, Y; Seyfarth, A

    2010-01-01

    The development of bipedal walking robots is inspired by human walking. One way of implementing walking is to mimic human leg dynamics. A fundamental model representing human leg dynamics during walking and running is the bipedal spring-mass model, which is the basis for this paper. The aim of this study is the identification of leg parameters leading to a compromise between robustness and energy efficiency in walking. It is found that, compared to asymmetric walking, symmetric walking with flatter angles of attack reveals such a compromise. With increasing leg stiffness, energy efficiency increases continuously. However, robustness is maximal at moderate leg stiffness and decreases slightly with increasing stiffness. Hence, an adjustable leg compliance would be preferred, which is adaptable to the environment. If the ground is even, a high leg stiffness leads to energy-efficient walking. However, if external perturbations are expected, e.g. when the robot walks on uneven terrain, the leg should be softer and the angle of attack flatter. In the case of underactuated robots with constant physical springs, the leg stiffness should be larger than k̃ = 14 in order to use the most robust gait. Soft legs, however, lack both robustness and efficiency.

  7. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

    Full Text Available A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving nonlinear state estimation problems. However, the UKF usually performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noises, especially when the measurements are disturbed by heavy-tailed impulsive noises. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noises. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
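
    The effect of the MCC cost can be illustrated on a linear measurement model: residuals are reweighted by a Gaussian kernel, so impulsive errors get exponentially small influence. The sketch below shows only this fixed-point reweighting, not the full MCUKF with its unscented transforms.

        import numpy as np

        def mcc_weighted_ls(H, z, sigma_kernel, n_iter=20):
            """Maximum-correntropy estimate for z = H x + noise via
            iteratively reweighted least squares with a Gaussian kernel."""
            x = np.linalg.lstsq(H, z, rcond=None)[0]
            for _ in range(n_iter):
                r = z - H @ x
                w = np.exp(-r ** 2 / (2 * sigma_kernel ** 2))  # correntropy weights
                x = np.linalg.solve(H.T @ (w[:, None] * H), H.T @ (w * z))
            return x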

  8. Container Materials, Fabrication And Robustness

    International Nuclear Information System (INIS)

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-01-01

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  9. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  10. Robustness of climate metrics under climate policy ambiguity

    International Nuclear Information System (INIS)

    Ekholm, Tommi; Lindroos, Tomi J.; Savolainen, Ilkka

    2013-01-01

    Highlights: • We assess the economic impacts of using different climate metrics. • The setting is cost-efficient scenarios for three interpretations of the 2 °C target. • With each target setting, the optimal metric is different. • Therefore policy ambiguity prevents the selection of an optimal metric. • Robust metric values that perform well with multiple policy targets however exist. -- Abstract: A wide array of alternatives has been proposed as the common metrics with which to compare the climate impacts of different emission types. Different physical and economic metrics and their parameterizations give diverse weights between e.g. CH4 and CO2, and fixing the metric from one perspective makes it sub-optimal from another. As the aims of global climate policy involve some degree of ambiguity, it is not possible to determine a metric that would be optimal and consistent with all policy aims. This paper evaluates the cost implications of using predetermined metrics in cost-efficient mitigation scenarios. Three formulations of the 2 °C target, including both deterministic and stochastic approaches, shared a wide range of metric values for CH4 with which the mitigation costs are only slightly above the cost-optimal levels. Therefore, although ambiguity in current policy might prevent us from selecting an optimal metric, it can be possible to select robust metric values that perform well with multiple policy targets

  11. Robustness Assessment of Spatial Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2012-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures many modern building...... codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect...... to robustness of spatial timber structures and will discuss the consequences of such robustness issues related to the future development of timber structures.

  12. Using the Nova target chamber for high-yield targets

    International Nuclear Information System (INIS)

    Pitts, J.H.

    1987-01-01

    The existing 2.2-m-radius Nova aluminum target chamber, coated and lined with boron-seeded carbon shields, is proposed for use with 1000-MJ-yield targets in the next laser facility. The laser beam and diagnostic holes in the target chamber are left open and the desired 10^-2 Torr vacuum is maintained both inside and outside the target chamber; a larger target chamber room is the vacuum barrier to the atmosphere. The hole area available is three times that necessary to maintain a maximum fluence below 12 J/cm^2 on optics placed at a radius of 10 m. Maximum stress in the target chamber wall is 73 MPa, which complies with the intent of the ASME Pressure Vessel Code. However, shock waves passing through the inner carbon shield could cause it to comminute. We propose tests and analyses to ensure that the inner carbon shield survives the environment. 13 refs

  13. Scheduling with target start times

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Velde, van de S.L.; Klein Haneveld, W.K.; Vrieze, O.J.; Kallenberg, L.C.M.

    1997-01-01

    We address the single-machine problem of scheduling n independent jobs subject to target start times. Target start times are essentially release times that may be violated at a certain cost. The goal is to minimize an objective function that is composed of total completion time and maximum

  14. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    Science.gov (United States)

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
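
    The M-estimation branch can be sketched as iteratively reweighted least squares with Huber-type weights and a MAD scale estimate; this illustrates the weighting scheme on a single-level regression, not the authors' two-level model or their standard-error corrections.

        import numpy as np

        def huber_irls(X, y, c=1.345, n_iter=50):
            """M-estimator with Huber-type weights via IRLS."""
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_iter):
                r = y - X @ beta
                s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust (MAD) scale
                u = np.abs(r / s)
                w = np.where(u <= c, 1.0, c / u)            # Huber weights
                beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            return beta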

  15. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s^-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s^-1 for carbon stars (the neutronization limit) and to 893 km s^-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  16. CERN: Fixed target targets

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1993-03-15

    Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. Delicate neutrino oscillation effects could become visible for the first

  17. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the maximum-likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic unce...... uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....

  18. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
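
    For reference, the two off-line algorithms named above differ from plain First-Fit only in the order in which items are presented; a minimal sketch:

        def first_fit(items, capacity):
            """Place each item in the first bin with room, else open a new bin."""
            bins = []
            for x in items:
                for b in bins:
                    if sum(b) + x <= capacity:
                        b.append(x)
                        break
                else:
                    bins.append([x])
            return bins

        def first_fit_increasing(items, capacity):
            return first_fit(sorted(items), capacity)

        def first_fit_decreasing(items, capacity):
            return first_fit(sorted(items, reverse=True), capacity)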

  19. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap electromagnetic (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs

  20. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  1. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  2. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
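
    A minimal sketch of fitting a robust (modified) Poisson model for a binary outcome with statsmodels: a Poisson GLM combined with a heteroscedasticity-robust (sandwich) covariance, so that exponentiated coefficients estimate relative risks. The data are simulated for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        x = rng.standard_normal(500)
        prob = 1.0 / (1.0 + np.exp(1.0 - 0.5 * x))          # hypothetical risk model
        y = (rng.random(500) < prob).astype(int)            # binary outcome

        X = sm.add_constant(x)
        # Poisson GLM + sandwich (HC0) covariance = "robust Poisson" for relative risks
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type='HC0')
        print(np.exp(fit.params))                           # relative-risk estimates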

  3. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived naturally from this principle. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, for deriving power laws.

  4. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  5. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  6. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  7. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    In these correlations, one quantity represents the maximum dry density, another signifies the plastic limit, and a third is the liquid limit. Researchers [6, 7] estimate compaction parameters from such relationships. Aside from the correlations existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known.

  8. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
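
    As an illustrative check (assuming y_e = sqrt(2) m_e / v_h ≈ 2.9 × 10^-6, T_BBN ≈ 1 MeV, and M_pl ≈ 1.2 × 10^19 GeV), the quoted scaling indeed lands at the right order of magnitude:

        v_h \sim \frac{T_{\mathrm{BBN}}^2}{M_{\mathrm{pl}}\, y_e^5}
            \approx \frac{(10^{-3}\,\mathrm{GeV})^2}{(1.2\times 10^{19}\,\mathrm{GeV})\,(2.9\times 10^{-6})^5}
            \approx 4\times 10^{2}\ \mathrm{GeV},

    consistent with v_h = O(300 GeV).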

  9. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59 (2003), s. 459-469. ISSN 0108-7673. Grant - others: DFG(DE). Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method * aperiodic crystals * electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003

  10. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  11. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example, we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.

  12. CERN: Fixed target targets

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    Full text: While the immediate priority of CERN's research programme is to exploit to the full the world's largest accelerator, the LEP electron-positron collider and its concomitant LEP200 energy upgrade (January, page 1), CERN is also mindful of its long tradition of diversified research. Away from LEP and preparations for the LHC proton-proton collider to be built above LEP in the same 27-kilometre tunnel, CERN is also preparing for a new generation of heavy ion experiments using a new source, providing heavier ions (April 1992, page 8), with first physics expected next year. CERN's smallest accelerator, the LEAR Low Energy Antiproton Ring continues to cover a wide range of research topics, and saw a record number of hours of operation in 1992. The new ISOLDE on-line isotope separator was inaugurated last year (July, page 5) and physics is already underway. The remaining effort concentrates around fixed target experiments at the SPS synchrotron, which formed the main thrust of CERN's research during the late 1970s. With the SPS and LEAR now approaching middle age, their research future was extensively studied last year. Broadly, a vigorous SPS programme looks assured until at least the end of 1995. Decisions for the longer term future of the West Experimental Area of the SPS will have to take into account the heavy demand for test beams from work towards experiments at big colliders, both at CERN and elsewhere. The North Experimental Area is the scene of larger experiments with longer lead times. Several more years of LEAR exploitation are already in the pipeline, but for the longer term, the ambitious Superlear project for a superconducting ring (January 1992, page 7) did not catch on. Neutrino physics has a long tradition at CERN, and this continues with the preparations for two major projects, the Chorus and Nomad experiments (November 1991, page 7), to start next year in the West Area. Delicate neutrino oscillation effects could become

  13. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.

    Science.gov (United States)

    Qi, Jun; Liu, Guo-Ping

    2017-11-06

    This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot, when the UIPS works on the line-of-sight (LOS) signal.
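
    A minimal sketch of a time-domain TOF estimate in this spirit: rectify and smooth the received signal to obtain an envelope, find the first threshold crossing, and refine it with a least-squares line fitted through samples on both sides. The window lengths and threshold are hypothetical, and the moving-average smoother stands in for the paper's envelope detection filter.

        import numpy as np

        def tof_estimate(sig, fs, threshold, win=32, fit_pts=8):
            """Estimate the time-of-flight (s) of the first arrival in `sig`
            sampled at `fs` Hz; assumes the envelope crosses `threshold`."""
            env = np.convolve(np.abs(sig), np.ones(win) / win, mode='same')
            k = int(np.argmax(env >= threshold))        # first crossing index
            lo, hi = max(k - fit_pts, 0), min(k + fit_pts, len(env))
            tt = np.arange(lo, hi) / fs
            a, b = np.polyfit(tt, env[lo:hi], 1)        # local line: env ~ a*t + b
            return (threshold - b) / a                  # refined crossing time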

  14. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jun Qi

    2017-11-01

    Full Text Available This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server; the other is a radio frequency (RF) module, used only for time synchronization between different nodes, with accuracy up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot, when the UIPS works on the line-of-sight (LOS) signal.

  15. Robust holographic storage system design.

    Science.gov (United States)

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capability. Holographic storage systems with no rotation mechanism are particularly in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading of all contents of the holographic memory: the turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes results of an experimental demonstration. © 2011 Optical Society of America

  16. Efficient robust conditional random fields.

    Science.gov (United States)

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages in popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly when solving the training procedure of CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the O(1/k) rate of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
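
    The OGM described above combines the current gradient with historical gradients and uses the Lipschitz constant to set the step size. As a generic illustration of that family of accelerated first-order methods, here is a FISTA-style sketch for an l1-regularised smooth loss in Python; it is not the authors' algorithm, and all names are placeholders.

    import numpy as np

    def soft_threshold(u, tau):
        """Proximal operator of the l1 norm (keeps sparsity in the model)."""
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def accelerated_l1(grad, w0, lam, L, iters=200):
        """Accelerated proximal gradient for min_w f(w) + lam * ||w||_1.

        grad: gradient of the smooth loss f (e.g. a CRF negative
        log-likelihood); L: Lipschitz constant of grad, giving step 1/L.
        The momentum term mixes the new iterate with the previous one,
        which is what yields the optimal O(1/k^2) rate.
        """
        w, z, t = w0.copy(), w0.copy(), 1.0
        for _ in range(iters):
            w_next = soft_threshold(z - grad(z) / L, lam / L)
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = w_next + ((t - 1.0) / t_next) * (w_next - w)
            w, t = w_next, t_next
        return w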

  17. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise......-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  18. Internal targets for LEAR

    International Nuclear Information System (INIS)

    Kilian, K.; Gspann, J.; Mohl, D.; Poth, H.

    1984-01-01

    This chapter considers the use of thin internal targets in conjunction with phase-space cooling at the Low-Energy Antiproton Ring (LEAR). Topics considered include the merits of internal target operation; the most efficient use of antiprotons and of proton synchrotron (PS) protons; highest center-of-mass (c.m.) energy resolution; highest angular resolution and access to extreme angles; the transparent environment for all reaction products; a windowless source and pure targets; highest luminosity and count rates; access to lowest energies with increasing resolution; internal target thickness and vacuum requirements; required cooling performance; and modes of operation. It is demonstrated that an internal target in conjunction with phase-space cooling has the potential of better performance in terms of the economic use of antiprotons and consequently of PS protons; energy resolution; angular resolution; maximum reaction rate capability (statistical precision); efficient parasitic operation; transparency of the target for reaction products; access to low energies; and the ease of polarized target experiments. It is concluded that all p̄ experiments which need high statistics and high p̄ flux, such as studies of rare channels or broad, weak resonance structures, would profit from internal targets

  19. Revisiting the case for intensity targets: Better incentives and less uncertainty for developing countries

    International Nuclear Information System (INIS)

    Marschinski, Robert; Edenhofer, Ottmar

    2010-01-01

    In the debate on post-Kyoto global climate policy, intensity targets, which set a maximum amount of emissions per GDP, figure as a prominent alternative to Kyoto-style absolute emission targets, especially for developing countries. This paper re-examines the case for intensity targets by critically assessing several of their properties, namely (i) reduction of cost-uncertainty, (ii) reduction of 'hot air', (iii) compatibility with international emissions trading, (iv) incentive to decouple carbon emissions and economic output (decarbonization), and (v) use as a substitute for banking/borrowing. Relying on simple analytical models, it is shown that the effect on cost-uncertainty is ambiguous and depends on parameter values, and that the same holds for the risk of 'hot air'; that the intensity target distorts international emissions trading; that despite potential asymmetries in the choice of abatement technology between absolute and intensity targets, the incentive for a lasting transformation of the energy system is not necessarily stronger under the latter; and, finally, that only a well-working intensity target could substitute for banking/borrowing to some extent, but also vice versa. Overall, the results suggest that due to the increased complexity and the potentially only modest benefits of an intensity target, absolute targets remain a robust choice for a cautious policy maker.
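
    To make the mechanism concrete: an intensity target fixes emissions per unit of GDP, so the implied absolute cap moves with realised output. A toy Python illustration (all numbers invented):

    # Compare a fixed absolute cap with an intensity cap under GDP uncertainty.
    intensity_cap = 0.5      # allowed emissions per unit of GDP
    absolute_cap = 50.0      # allowed emissions, fixed ex ante

    for gdp in (90.0, 100.0, 110.0):         # recession / baseline / boom
        print(f"GDP={gdp:5.1f}  absolute cap={absolute_cap:5.1f}  "
              f"intensity cap={intensity_cap * gdp:5.1f}")

    The permitted emissions under the intensity cap rise and fall with GDP (45, 50, 55 here), which is the property behind both the hoped-for reduction of cost-uncertainty and the risk of 'hot air' assessed above.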

  20. Robust data reconciliation with TEMPO

    International Nuclear Information System (INIS)

    Sunde, Svein; Banati, Jozsef

    2004-03-01

    The Halden Project's TEMPO system is devised to meet the increasing challenges facing utilities in performance monitoring and optimisation, due among other things to deregulation and market liberalisation. The data reconciliation mode is an important one in the TEMPO system. Data reconciliation is a method to provide reliable estimates of process states, parameters, the state of equipment and instruments, and efficiency. Data reconciliation follows maximum likelihood principles, and is usually based on a Gaussian error model for the data. Data reconciliation thus estimates the most likely state of the process, given the heat and mass balance and the current instrument readings. If the assumption of normally (Gaussian) distributed errors in the measurements breaks down, as it will in the case of gross errors (outliers) in the data set, this needs to be detected by the monitoring system. TEMPO does this by computing an overall test statistic subjected to a hypothesis test, resulting in the so-called goodness-of-fit. If the goodness-of-fit takes too low a value, a fault is probably present in the process. Identifying the location of the fault is more difficult, however. With the Gaussian error model this is still possible by employing a so-called serial elimination method. The serial elimination method is, however, cumbersome to use and leads to quite lengthy calculations. The present report describes the application of alternative distributions in the maximum likelihood estimation in an attempt to directly identify the location of faulty sensors. For direct identification of sensor faults, significant improvements over the method based on a Gaussian error model have been achieved. However, serial elimination still leads to the highest success rate for fault identification. All results were based on Monte Carlo simulations. (Author)
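
    For the linear Gaussian case the reconciliation step has a closed form. The sketch below shows the textbook estimator together with the chi-square statistic behind a goodness-of-fit test of the kind described above; it is a generic Python illustration, not the TEMPO implementation.

    import numpy as np

    def reconcile(y, sigma, A):
        """Gaussian maximum-likelihood data reconciliation.

        Minimises sum(((x - y) / sigma)**2) subject to the linear
        balance A x = 0; also returns the chi-square statistic whose
        hypothesis test yields the goodness-of-fit.
        """
        S = np.diag(sigma ** 2)              # measurement covariance
        r = A @ y                            # balance residuals
        K = A @ S @ A.T
        x_hat = y - S @ A.T @ np.linalg.solve(K, r)
        chi2 = r @ np.linalg.solve(K, r)     # large value: gross error likely
        return x_hat, chi2

    # Toy splitter: stream 1 = stream 2 + stream 3, hence A = [1, -1, -1].
    y = np.array([10.3, 6.1, 4.4])
    sigma = np.array([0.20, 0.15, 0.15])
    A = np.array([[1.0, -1.0, -1.0]])
    print(reconcile(y, sigma, A))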

  1. Robust AIC with High Breakdown Scale Estimate

    Directory of Open Access Journals (Sweden)

    Shokrya Saleh

    2014-01-01

    Full Text Available The Akaike Information Criterion (AIC) based on least squares (LS) regression minimizes the sum of the squared residuals; LS is sensitive to outlier observations. Alternative criteria that are less sensitive to outlying observations have been proposed; examples are the robust AIC (RAIC), robust Mallows Cp (RCp), and robust Bayesian information criterion (RBIC). In this paper, we propose a robust AIC by replacing the scale estimate with a high breakdown point estimate of scale. The robustness of the proposed method is studied through its influence function. We show that the proposed robust AIC is effective in selecting accurate models in the presence of outliers and high leverage points, through simulated and real data examples.
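
    As a rough Python sketch of the scale-swapping idea: the LS-based AIC uses the residual variance, which a single outlier can inflate, whereas a high-breakdown scale such as the normal-consistent MAD resists contamination. The exact penalised form used in the paper may differ.

    import numpy as np

    def robust_aic(residuals, n_params):
        """AIC-type criterion with a high-breakdown scale estimate.

        The usual LS variance is replaced by the MAD of the residuals
        (scaled by 1.4826 for consistency under normality).
        """
        n = len(residuals)
        mad = np.median(np.abs(residuals - np.median(residuals)))
        scale = 1.4826 * mad
        return n * np.log(scale ** 2) + 2 * n_params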

  2. Adaptive robust Kalman filtering for precise point positioning

    International Nuclear Information System (INIS)

    Guo, Fei; Zhang, Xiaohong

    2014-01-01

    The optimality of a precise point positioning (PPP) solution using a Kalman filter is closely connected to the quality of the a priori information about the process noise and the updated measurement noise, which are sometimes difficult to obtain. Also, the estimation environment in the case of dynamic or kinematic applications is not always fixed but is subject to change. To overcome these problems, an adaptive robust Kalman filtering algorithm, whose main features are an equivalent covariance matrix to resist unexpected outliers and an adaptive factor to balance the contribution of observational information and predicted information from the system dynamic model, is applied for PPP processing. The basic models of PPP, including the observation model, dynamic model and stochastic model, are provided first. Then an adaptive robust Kalman filter is developed for PPP. Compared with the conventional robust estimator, only the observation with the largest standardized residual is operated on by the IGG III function in each iteration, to avoid reducing the contribution of the normal observations or even filter divergence. Finally, tests carried out in both static and kinematic modes have confirmed that the adaptive robust Kalman filter outperforms the classic Kalman filter by tuning either the equivalent variance matrix or the adaptive factor or both of them. This becomes evident when analyzing the positioning errors in flight tests at the turns due to the target maneuvering and unknown process/measurement noises. (paper)
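
    The IGG III scheme referred to above assigns each standardized residual a weight in three zones: keep, down-weight, reject. A Python sketch of one common form follows; the breakpoints are typical textbook values, not necessarily those used in the paper.

    def igg3_weight(v, k0=1.5, k1=3.0):
        """IGG III-style weight for a standardized residual v.

        |v| <= k0: keep (weight 1); k0 < |v| <= k1: taper smoothly;
        |v| > k1: reject (weight 0). Breakpoints k0, k1 are assumptions.
        """
        a = abs(v)
        if a <= k0:
            return 1.0
        if a <= k1:
            return (k0 / a) * ((k1 - a) / (k1 - k0)) ** 2
        return 0.0

    In the filter, the variance of the down-weighted observation is inflated by the reciprocal of this weight, which is what produces the equivalent covariance matrix mentioned above.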

  3. Multimodel Robust Control for Hydraulic Turbine

    OpenAIRE

    Osuský, Jakub; Števo, Stanislav

    2014-01-01

    The paper deals with multimodel and robust control system design and their combination based on the M-Δ structure. Controller design is done in the frequency domain with nominal performance specified by phase margin. The hydraulic turbine model is analyzed as a system with unstructured uncertainty, and a robust stability condition is included in the controller design. Multimodel and robust control approaches are presented in detail on the hydraulic turbine model. Control design approaches are compared a...

  4. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

    The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. Robust linear autoregressive (RAR) and robust neural network (RNN) models are estimated to study the predictabil...

  5. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguished. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program into a semi-definite programming relaxation, and finally into a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  6. Robust visual tracking via multiscale deep sparse networks

    Science.gov (United States)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has achieved significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process, and exploits robust and powerful features effectively through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker selects the matched tracking network adaptively in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  7. Robustness: confronting lessons from physics and biology.

    Science.gov (United States)

    Lesne, Annick

    2008-11-01

    The term robustness is encountered in very different scientific fields, from engineering and control theory to dynamical systems to biology. The main question addressed herein is whether the notion of robustness and its correlates (stability, resilience, self-organisation) developed in physics are relevant to biology, or whether specific extensions and novel frameworks are required to account for the robustness properties of living systems. To clarify this issue, the different meanings covered by this unique term are discussed; it is argued that they crucially depend on the kind of perturbations that a robust system should by definition withstand. Possible mechanisms underlying robust behaviours are examined, either encountered in all natural systems (symmetries, conservation laws, dynamic stability) or specific to biological systems (feedbacks and regulatory networks). Special attention is devoted to the (sometimes counterintuitive) interrelations between robustness and noise. A distinction between dynamic selection and natural selection in the establishment of a robust behaviour is underlined. It is finally argued that nested notions of robustness, relevant to different time scales and different levels of organisation, allow one to reconcile the seemingly contradictory requirements for robustness and adaptability in living systems.

  8. Robustness of Long Span Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Balfroid, Nathalie; Kirkegaard, Poul Henning

    2011-01-01

    engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper discusses such robustness issues related to the future development of reciprocal timber structures. The paper concludes that these kinds of structures can have...... a potential as long span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.......Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. The interest has also been facilitated by recent severe structural failures

  9. Impact of marine reserve on maximum sustainable yield in a traditional prey-predator system

    Science.gov (United States)

    Paul, Prosenjit; Kar, T. K.; Ghorai, Abhijit

    2018-01-01

    Multispecies fisheries management requires managers to consider the impact of fishing activities on several species, as fishing affects both targeted and non-targeted species, directly or indirectly, in several ways. The intended goal of traditional fisheries management is to achieve maximum sustainable yield (MSY) from the targeted species, which on many occasions affects the targeted species as well as the entire ecosystem. Marine reserves are often acclaimed as a marine ecosystem management tool. Few attempts have been made to generalize the ecological effects of marine reserves on MSY policy. We examine here how MSY and population levels in a prey-predator system are affected by low, medium and high reserve sizes under different possible scenarios. Our simulation work shows that for a low reserve area, the value of MSY for prey exploitation is maximum when both prey and predator species have fast movement rates. For a medium reserve size, our analysis revealed that the maximum value of MSY for prey exploitation is obtained when the prey population has a fast movement rate and the predator population has a slow movement rate. For a high reserve area, the maximum value of MSY for prey exploitation is very low compared to its value in the case of low and medium reserves. On the other hand, for low and medium reserve areas, MSY for predator exploitation is maximum when both species have fast movement rates.

  10. Target laboratory

    International Nuclear Information System (INIS)

    Ephraim, D.C.; Pednekar, A.R.

    1993-01-01

    A target laboratory to make stripper foils for the accelerator and various targets for use in experiments has been set up in the pelletron accelerator facility. The facilities available in the laboratory are: (1) a D.C. glow discharge setup, (2) a carbon arc setup, and (3) a vacuum evaporation setup (resistance heating), an electron beam source, and a rolling mill - all for target preparation. They are described. A centrifugal deposition technique is used for target preparation. (author). 3 figs

  11. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT)

  12. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  13. Ice targets

    International Nuclear Information System (INIS)

    Pacheco, C.; Stark, C.; Tanaka, N.; Hodgkins, D.; Barnhart, J.; Kosty, J.

    1979-12-01

    This report presents a description of ice targets that were constructed for research work at the High Resolution Spectrometer (HRS) and at the Energetic Pion Channel and Spectrometer (EPICS). Reasons for using these ice targets and the instructions for their construction are given. Results of research using ice targets will be published at a later date

  14. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
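
    The single constraint named above already forces the power-law form; a compact Lagrange-multiplier sketch in standard notation (not the paper's):

    \max_{p}\; -\sum_{x} p(x)\ln p(x)
    \quad\text{s.t.}\quad \sum_{x} p(x) = 1, \qquad \sum_{x} p(x)\ln x = \mu .

    Setting the derivative of the Lagrangian to zero gives \ln p(x) = -1 - \lambda_0 - \lambda_1 \ln x, i.e. p(x) \propto x^{-\lambda_1}: a pure power law, with the exponent fixed by the constraint value \mu.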

  15. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
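
    For reference, the Ornstein-Uhlenbeck member of this class can be written as a Langevin equation in standard notation:

    dx_t = -\theta\,(x_t - \mu)\,dt + \sigma\,dW_t ,
    \qquad \operatorname{Var}(x_\infty) = \frac{\sigma^2}{2\theta} ,

    where the fixed balance between the damping rate \theta and the noise intensity \sigma^2 in the stationary variance illustrates the kind of fluctuation-dissipation constraint referred to above.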

  16. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  17. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  18. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  19. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
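
    Fitch's bottom-up pass, whose reconstruction accuracy the paper studies, is short enough to state in full. A minimal Python sketch (the tree encoding and names are illustrative):

    def fitch_root_set(tree, leaf_states, root="root"):
        """Bottom-up Fitch sets for a binary tree.

        tree: dict mapping internal node -> (left child, right child);
        leaf_states: dict mapping leaf -> observed character state.
        Returns the candidate set at the root: the most-parsimonious
        ancestral state(s).
        """
        def sets(node):
            if node in leaf_states:                   # leaf
                return {leaf_states[node]}
            left, right = (sets(c) for c in tree[node])
            inter = left & right
            return inter if inter else left | right   # Fitch's rule
        return sets(root)

    # Two cherries joined at the root, one binary character:
    tree = {"root": ("a", "b"), "a": ("t1", "t2"), "b": ("t3", "t4")}
    leaf_states = {"t1": "0", "t2": "0", "t3": "0", "t4": "1"}
    print(fitch_root_set(tree, leaf_states))   # {'0'}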

  20. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  1. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  2. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  3. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.

  4. Robust estimation of the correlation matrix of longitudinal data

    KAUST Repository

    Maadooliat, Mehdi

    2011-09-23

    We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ = DLL⊤D, where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L, D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form LD²L⊤ using simulations and a real dataset. © 2011 Springer Science+Business Media, LLC.
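
    For intuition, one concrete way to realise the decomposition for a given Σ is to rescale the rows of the ordinary Cholesky factor; note that the paper instead models the nonredundant entries of (L, D) through covariates rather than computing them from a full Σ. A Python sketch:

    import numpy as np

    def dlltd(Sigma):
        """Realise Sigma = D L L^T D with L unit lower-triangular.

        C = Cholesky(Sigma); putting the diagonal of C into D and
        dividing each row of C by its diagonal entry gives a unit
        lower-triangular L, so that D L = C and Sigma = D L L^T D.
        """
        C = np.linalg.cholesky(Sigma)
        D = np.diag(np.diag(C))
        L = C / np.diag(C)[:, None]
        return D, L

    Sigma = np.array([[4.0, 1.2, 0.6],
                      [1.2, 2.0, 0.5],
                      [0.6, 0.5, 1.5]])
    D, L = dlltd(Sigma)
    assert np.allclose(D @ L @ L.T @ D, Sigma)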

  5. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with a COV up to 0.1.

  6. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Liao, L [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Department of Industrial Engineering, University of Houston, Houston, TX (United States); Yu, J; Zhu, X; Li, H; Zhang, X [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Li, Y [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Varian Medical Systems, Houston, TX (United States); Lim, G [Department of Industrial Engineering, University of Houston, Houston, TX (United States)

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with conventional non-robust plans. The resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and the ΔD95%, ΔD98%, ΔD5%, ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to the IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases, while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.
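
    In generic notation (not necessarily the authors'), the worst-case robust formulation used here reads

    \min_{w \ge 0} \; \max_{s \in \mathcal{S}} \; F\big(d(w; s)\big) ,

    where w are the pencil-beam intensities, \mathcal{S} is the set of 4DCT phases (playing the role usually taken by setup and range error scenarios), d(w; s) is the dose recomputed on phase s, and F is the planning objective evaluated on the worst-case dose distribution, assembled voxel-wise over the scenarios in the standard worst-case approach.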

  7. Food supply chain network robustness : a literature review and research agenda

    NARCIS (Netherlands)

    Vlajic, J.V.; Hendrix, E.M.T.; Vorst, van der J.G.A.J.

    2008-01-01

    Today’s business environment is characterized by strong global competition, in which companies strive for leanness and maximum responsiveness. However, lean supply chain networks (SCNs) become more vulnerable to all kinds of disruptions. Food SCNs have to become robust, i.e. they

  8. Robust Tracking Control for Rendezvous in Near-Circular Orbits

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2013-01-01

    Full Text Available This paper investigates a robust guaranteed cost tracking control problem for thrust-limited spacecraft rendezvous in near-circular orbits. A relative motion model is established based on the two-body problem, with the noncircularity of the target orbit described as a parameter uncertainty. A guaranteed cost tracking controller with input saturation is designed via a linear matrix inequality (LMI) method, and sufficient conditions for the existence of the robust tracking controller are derived, which are more concise and less conservative than in previous works. Numerical examples are provided for both time-invariant and time-variant reference signals to illustrate the effectiveness of the proposed control scheme when applied to terminal rendezvous and other astronautic missions with scheduled state signals.

  9. Compatibility of detached divertor operation with robust edge pedestal performance

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, A.W., E-mail: leonard@fusion.gat.com [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Makowski, M.A.; McLean, A.G. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Osborne, T.H.; Snyder, P.B. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States)

    2015-08-15

    The compatibility of detached radiative divertor operation with a robust H-mode pedestal is examined in DIII-D. A density scan produced low temperature plasmas at the divertor target, T_e ⩽ 2 eV, with high radiation leading to a factor of ⩾4 drop in peak divertor heat flux. The cold radiative plasma was confined to the divertor and did not extend across the separatrix in the X-point region. A robust H-mode pedestal was maintained with a small degradation in pedestal pressure at the highest densities. The response of the pedestal pressure to increasing density is reproduced by the EPED pedestal model. However, agreement of the EPED model with experiment at high density requires an assumption of reduced diamagnetic stabilization of edge Peeling–Ballooning modes.

  10. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
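
    The one-step-late MAP update follows Green's OSL-EM form; in generic notation, with a_{ij} the system matrix, y_i the measured counts, U the penalty and \beta its strength (symbols are ours, not necessarily the authors'):

    \lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}
    {\sum_i a_{ij} \;+\; \beta \left. \dfrac{\partial U}{\partial \lambda_j} \right|_{\lambda^{(n)}}}
    \;\sum_i a_{ij} \, \frac{y_i}{\sum_k a_{ik} \lambda_k^{(n)}} .

    For the SLR penalty described above, U is a sum of squared voxel-wise differences between the paired longitudinal images, so each reconstruction is pulled gently toward its counterpart at the other time point.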

  11. Effects of methodology and analysis strategy on robustness of pestivirus phylogeny.

    Science.gov (United States)

    Liu, Lihong; Xia, Hongyan; Baule, Claudia; Belák, Sándor; Wahlberg, Niklas

    2010-01-01

    Phylogenetic analysis of pestiviruses is a useful tool for classifying novel pestiviruses and for revealing their phylogenetic relationships. In this study, robustness of pestivirus phylogenies has been compared by analyses of the 5'UTR, and complete N(pro) and E2 gene regions separately and combined, performed by four methods: neighbour-joining (NJ), maximum parsimony (MP), maximum likelihood (ML), and Bayesian inference (BI). The strategy of analysing the combined sequence dataset by BI, ML, and MP methods resulted in a single, well-supported tree topology, indicating a reliable and robust pestivirus phylogeny. By contrast, the single-gene analysis strategy resulted in 12 trees of different topologies, revealing different relationships among pestiviruses. These results indicate that the strategies and methodologies are two vital aspects affecting the robustness of the pestivirus phylogeny. The strategy and methodologies outlined in this paper may have a broader application in inferring phylogeny of other RNA viruses.

  12. 76 FR 34953 - Funding Opportunity Title: Risk Management Education in Targeted States (Targeted States Program...

    Science.gov (United States)

    2011-06-15

    ... Availability C. Location and Target Audience D. Maximum Award E. Project Period F. Description of Agreement..., 2011. C. Location and Target Audience The RMA Regional Offices that service the Targeted States are... marketing systems to pursue new markets. D. Purpose The purpose of the Targeted States Program is to provide...

  13. Implicitly Weighted Methods in Robust Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Vol. 44, No. 3 (2012), pp. 449-462 ISSN 0924-9907 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords: robustness * high breakdown point * outlier detection * robust correlation analysis * template matching * face recognition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.767, year: 2012

  14. What is it to be sturdy (robust)?

    DEFF Research Database (Denmark)

    Nielsen, Niss Skov; Zwisler, Lars Pagter; Bojsen, Ann Kristina Mikkelsen

    Purpose: This paper intends to give a first insight into the concept of being "sturdy/robust"; to develop and test a Danish model of how to measure sturdiness/robustness; and to test the scale's ability to identify people in emergency situations who have a high risk of developing psychological illness....

  15. Structural Robustness Evaluation of Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Giuliani, Luisa; Bontempi, Franco

    2010-01-01

    in the framework of a safe design: it depends on different factors, like exposure, vulnerability and robustness. Particularly, the requirements of structural vulnerability and robustness are discussed in this paper, and a numerical application is presented in order to evaluate the effects of a ship collision...

  16. In Silico Design of Robust Bolalipid Membranes

    NARCIS (Netherlands)

    Bulacu, Monica; Periole, Xavier; Marrink, Siewert J.; Périole, Xavier

    The robustness of microorganisms used in industrial fermentations is essential for the efficiency and yield of the production process. A viable way to increase robustness is through engineering of the cell membrane, especially by incorporating lipids from species that survive under harsh

  17. Assessment of Process Robustness for Mass Customization

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev

    2013-01-01

    robustness and their capability to develop it. Through a literature study and an analysis of robust process design characteristics, a number of metrics are described which can be used for assessment. The metrics are evaluated and analyzed to be applied as KPIs to help MC companies prioritize efforts in business...

  18. Applying Robust Design in an Industrial Context

    DEFF Research Database (Denmark)

    Christensen, Martin Ebro

    mechanical architectures. Furthermore, a set of 15 robust design principles for reducing the variation in functional performance is compiled in a format directly supporting the work of the design engineer. With these foundational methods in place, the existing tools, methods and KPIs of Robust Design...

  19. The importance of robust design methodology

    DEFF Research Database (Denmark)

    Eifler, Tobias; Howard, Thomas J.

    2018-01-01

    infamous recalls in automotive history, that of the GM ignition switch, from the perspective of Robust Design. It is investigated whether available Robust Design methods such as sensitivity analysis, tolerance stack-ups, design clarity, etc. would have been suitable to account for the performance variation

  20. Robust Control Charts for Time Series Data

    NARCIS (Netherlands)

    Croux, C.; Gelper, S.; Mahieu, K.

    2010-01-01

    This article presents a control chart for time series data, based on the one-step-ahead forecast errors of the Holt-Winters forecasting method. We use robust techniques to prevent outliers from affecting the estimation of the control limits of the chart. Moreover, robustness is important to maintain

  1. Efficient reanalysis techniques for robust topology optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Sigmund, Ole; Lazarov, Boyan Stefanov

    2012-01-01

    efficient robust topology optimization procedures based on reanalysis techniques. The approach is demonstrated on two compliant mechanism design problems where robust design is achieved by employing either a worst case formulation or a stochastic formulation. It is shown that the time spent on finite...

  2. Extending the Scope of Robust Quadratic Optimization

    NARCIS (Netherlands)

    Marandi, Ahmadreza; Ben-Tal, A.; den Hertog, Dick; Melenberg, Bertrand

    In this paper, we derive tractable reformulations of the robust counterparts of convex quadratic and conic quadratic constraints with concave uncertainties for a broad range of uncertainty sets. For quadratic constraints with convex uncertainty, it is well-known that the robust counterpart is, in

  3. Security and robustness for collaborative monitors

    NARCIS (Netherlands)

    Testerink, Bas; Bulling, Nils; Dastani, Mehdi

    2016-01-01

    Decentralized monitors can be subject to robustness and security risks. Robustness risks include attacks on the monitor’s infrastructure in order to disable parts of its functionality. Security risks include attacks that try to extract information from the monitor and thereby possibly leak sensitive

  4. How Robust is Your System Resilience?

    Science.gov (United States)

    Homayounfar, M.; Muneepeerakul, R.

    2017-12-01

    Robustness and resilience are concepts in systems thinking that have grown in importance and popularity. For many complex social-ecological systems, however, robustness and resilience are difficult to quantify, and the connections and trade-offs between them difficult to study. Most studies have either focused on qualitative approaches to discuss their connections or considered only one of them under particular classes of disturbances. In this study, we present an analytical framework to address the linkage between robustness and resilience more systematically. Our analysis is based on a stylized dynamical model that operationalizes a widely used conceptual framework for social-ecological systems. The model enables us to rigorously define robustness and resilience and consequently investigate their connections. The results reveal the tradeoffs among performance, robustness, and resilience. They also show how the nature of such tradeoffs varies with the choice of certain policies (e.g., taxation and investment in public infrastructure), internal stresses and external disturbances.

  5. A Survey on Robustness in Railway Planning

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Bull, Simon Henry

    2018-01-01

    Planning problems in passenger railway range from long-term strategic decision making to the detailed planning of operations. Operations research methods have played an increasing role in this planning process. However, recently more attention has been given to considerations of robustness...... in the quality of solutions to individual planning problems, and of operations in general. Robustness in general is the capacity for some system to absorb or resist changes. In the context of railways, robustness is often taken to be the capacity for operations to continue at some level when faced...... with a disruption such as delay or failure. This has resulted in more attention given to the inclusion of robustness measures and objectives in individual planning problems, and to the provision of tools to ensure operations continue under disrupted situations. In this paper we survey the literature on robustness...

  6. International Conference on Robust Statistics 2015

    CERN Document Server

    Basu, Ayanendranath; Filzmoser, Peter; Mukherjee, Diganta

    2016-01-01

    This book offers a collection of recent contributions and emerging ideas in the areas of robust statistics, presented at the International Conference on Robust Statistics 2015 (ICORS 2015) held in Kolkata during 12–16 January 2015. The book explores the applicability of robust methods in other, non-traditional areas, which includes the use of new techniques such as skew and mixtures of skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of the statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statis...

  7. Comparison of Extremum-Seeking Control Techniques for Maximum Power Point Tracking in Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chen-Han Wu

    2011-12-01

    Full Text Available Due to Japan’s recent nuclear crisis and petroleum price hikes, the search for renewable energy sources has become an issue of immediate concern. A promising candidate attracting much global attention is solar energy, as it is green and inexhaustible. A maximum power point tracking (MPPT) controller is employed in such a way that the output power provided by a photovoltaic (PV) system is boosted to its maximum level. However, in the context of abrupt changes in irradiance, conventional MPPT controller approaches suffer from insufficient robustness against ambient variation, inferior transient response, and a loss of output power as a consequence of the long duration required by tracking procedures. Accordingly, in this work maximum power point tracking is carried out successfully using a sliding mode extremum-seeking control (SMESC) method, and the tracking performances of three controllers are compared by simulations: an extremum-seeking controller, a sinusoidal extremum-seeking controller, and a sliding mode extremum-seeking controller. Being able to track the maximum power point promptly in the case of an abrupt change in irradiance, the SMESC approach is proven by simulations to be superior in terms of system dynamic and steady-state responses, and excellent robustness along with system stability is demonstrated as well.
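
    To illustrate the extremum-seeking principle common to the three controllers compared here, the Python sketch below implements the classical sinusoidal variant on a toy P-V curve; all gains, dither parameters and the quadratic curve are invented for the example, and the sliding-mode variant studied in the paper replaces the demodulate-and-integrate law with a switching law.

    import math

    def esc_mppt(power_at, v0=20.0, a=0.5, omega=10.0, k=2.0,
                 omega_l=1.0, dt=0.01, steps=5000):
        """Sinusoidal extremum-seeking sketch for MPPT.

        power_at: PV output power as a function of operating voltage
        (a stand-in for the real converter). A dither a*sin(omega*t)
        probes the P-V curve; a washout (low-pass) filter removes the
        DC component of the measured power, demodulation by the same
        sinusoid estimates the local slope, and an integrator drives
        that slope to zero, i.e. to the maximum power point.
        """
        v = v0
        p_lp = power_at(v0)                      # washout filter state
        for n in range(steps):
            t = n * dt
            p = power_at(v + a * math.sin(omega * t))
            p_lp += omega_l * (p - p_lp) * dt    # track the slow mean of p
            grad_est = (p - p_lp) * math.sin(omega * t)   # demodulation
            v += k * grad_est * dt               # integrator (gradient ascent)
        return v

    # Toy unimodal P-V curve with its maximum power point at 30 V.
    print(esc_mppt(lambda v: 100.0 - 0.1 * (v - 30.0) ** 2))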

  8. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    Science.gov (United States)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
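    As a rough sketch of the moisture-maximization step at the core of such methodologies, the snippet below scales an observed snowstorm by the ratio of maximum to observed precipitable water. The function name, the cap on the ratio and its value are assumptions for illustration, not the paper's calibration.

```python
def maximize_snowfall(event_swe_mm, event_pw_mm, monthly_max_pw_mm, ratio_cap=2.0):
    """Scale an observed snowstorm by the moisture-maximization ratio.

    The ratio compares the monthly maximum precipitable water (PW) with the
    PW that accompanied the event; a cap guards against unphysical scaling.
    """
    ratio = min(monthly_max_pw_mm / event_pw_mm, ratio_cap)
    return event_swe_mm * ratio

# A 45 mm SWE storm that occurred with 12 mm of PW, in a month whose maximum
# PW is 18 mm, is maximized to 45 * 1.5 = 67.5 mm.
print(maximize_snowfall(45.0, 12.0, 18.0))
```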

  9. Robust estimation of seismic coda shape

    Science.gov (United States)

    Nikkilä, Mikko; Polishchuk, Valentin; Krasnoshchekov, Dmitry

    2014-04-01

    We present a new method for estimation of seismic coda shape. It falls into the same class of methods as non-parametric shape reconstruction with the use of neural network techniques where data are split into a training and validation data sets. We particularly pursue the well-known problem of image reconstruction formulated in this case as shape isolation in the presence of a broadly defined noise. This combined approach is enabled by the intrinsic feature of seismogram which can be divided objectively into a pre-signal seismic noise with lack of the target shape, and the remainder that contains scattered waveforms compounding the coda shape. In short, we separately apply shape restoration procedure to pre-signal seismic noise and the event record, which provides successful delineation of the coda shape in the form of a smooth almost non-oscillating function of time. The new algorithm uses a recently developed generalization of classical computational-geometry tool of α-shape. The generalization essentially yields robust shape estimation by ignoring locally a number of points treated as extreme values, noise or non-relevant data. Our algorithm is conceptually simple and enables the desired or pre-determined level of shape detail, constrainable by an arbitrary data fit criteria. The proposed tool for coda shape delineation provides an alternative to moving averaging and/or other smoothing techniques frequently used for this purpose. The new algorithm is illustrated with an application to the problem of estimating the coda duration after a local event. The obtained relation coefficient between coda duration and epicentral distance is consistent with the earlier findings in the region of interest.

  10. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
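    A generic way to "move along a simulated gradient", in the spirit described above, is simultaneous-perturbation stochastic approximation (SPSA), sketched below on a toy simulator. The surrogate objective (a squared distance between observed and simulated summaries, standing in for the paper's likelihood estimate), the simulator and all tuning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summaries(theta, n=500):
    # Stand-in simulator: draws data under theta, returns summary statistics.
    x = rng.normal(theta[0], np.exp(theta[1]), size=n)
    return np.array([x.mean(), np.log(x.std())])

def objective(theta, s_obs):
    # Surrogate for the log-likelihood of the observed summaries.
    return -np.sum((simulate_summaries(theta) - s_obs) ** 2)

def spsa_ascent(s_obs, theta0, iters=2000, a=0.05, c=0.1):
    # Simultaneous perturbation: two simulations per step estimate the gradient.
    theta = np.asarray(theta0, float)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101        # standard SPSA gain decay
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        g = (objective(theta + ck * delta, s_obs)
             - objective(theta - ck * delta, s_obs)) / (2 * ck) * delta
        theta += ak * g
    return theta

s_obs = np.array([1.0, 0.0])                  # pretend these came from real data
print(spsa_ascent(s_obs, theta0=[0.0, 0.5]))  # should approach roughly [1, 0]
```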

  11. On the Five-Moment Hamburger Maximum Entropy Reconstruction

    Science.gov (United States)

    Summy, D. P.; Pullin, D. I.

    2018-05-01

    We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ _3 , μ _4 ) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ _3 , μ _4 ) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.
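    For reference, the five-moment MER density described above is the exponential of a quartic polynomial whose coefficients act as Lagrange multipliers for the moment constraints; in the usual notation (assumed here):

```latex
f(x) = \exp\!\Big(\sum_{k=0}^{4} \lambda_k x^k\Big),
\qquad
\int_{-\infty}^{\infty} x^j f(x)\, dx = \mu_j, \quad j = 0, \dots, 4,
```

    with \lambda_4 < 0 required for integrability; the possible bimodal structure mentioned above arises when the quartic exponent has two local maxima.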

  12. Is countershading camouflage robust to lighting change due to weather?

    Science.gov (United States)

    Penacchio, Olivier; Lovell, P George; Harris, Julie M

    2018-02-01

    Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by a particular pattern delivering 'optimal' camouflage only for very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a 'generic' predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target 'prey'. We set these items in two light environments: strongly directional 'sunny' and more diffuse 'cloudy'. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in light environment, such as caused by differences in weather, do change the effectiveness of countershading camouflage.

  13. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  14. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of a simple iterative algorithm originally suggested by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
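    The abstract does not spell out Collins' algorithm; as a generic stand-in, the sketch below uses a multiplicative (exponentiated-gradient) update that keeps the reconstructed density positive while driving its projections toward the measured profiles. The matrix A, the step size and the iteration count are illustrative assumptions.

```python
import numpy as np

def mem_reconstruct(A, y, iters=500, step=0.05):
    """Iterative MaxEnt-style fit: find rho >= 0 with A @ rho close to y.

    A maps the (flattened) momentum density rho to the measured projections
    y (here, Compton profiles); the multiplicative update preserves
    positivity. A small step size is assumed; damping may be needed in
    practice if A is badly scaled.
    """
    rho = np.full(A.shape[1], y.mean() / max(A.sum(axis=0).mean(), 1e-12))
    for _ in range(iters):
        residual = y - A @ rho
        rho *= np.exp(step * (A.T @ residual))   # entropy-friendly update
    return rho
```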

  15. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.
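    Since drawdowns are simple to state precisely, a short reference implementation may help; the fractional peak-to-trough definition below is one common convention, not necessarily the exact one used in the paper.

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    prices = np.asarray(prices, float)
    running_peak = np.maximum.accumulate(prices)
    drawdowns = 1.0 - prices / running_peak
    return drawdowns.max()

print(max_drawdown([100, 120, 90, 95, 130, 70]))  # 0.4615...  (130 -> 70)
```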

  16. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
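    A bare-bones version of this kind of estimator, simplified to white noise and a grid search over candidate fundamental frequencies, might look as follows; the harmonic count, grid and names are assumptions, and the paper's treatment of unequal noise statistics is not reproduced.

```python
import numpy as np

def multichannel_pitch(x, fs, f0_grid, n_harm=5):
    """Grid-search pitch estimate over channels sharing one fundamental.

    x: array of shape (channels, samples). For each candidate f0, fit a
    harmonic model per channel (amplitudes/phases free to differ) and sum
    the energy captured; under white noise this matches the ML criterion.
    """
    t = np.arange(x.shape[1]) / fs
    best_f0, best_cost = None, -np.inf
    for f0 in f0_grid:
        # Harmonic basis shared across channels (frequencies only).
        Z = np.column_stack([f(2 * np.pi * f0 * h * t)
                             for h in range(1, n_harm + 1)
                             for f in (np.cos, np.sin)])
        coef = np.linalg.lstsq(Z, x.T, rcond=None)[0]   # per-channel LS fit
        cost = np.sum((Z @ coef) * x.T)                 # captured energy, summed
        if cost > best_cost:
            best_f0, best_cost = f0, cost
    return best_f0
```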

  17. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  18. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  19. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  20. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
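    In the notation commonly used for PDF projection (assumed here), the constructed distribution combines a reference distribution p_0 on x with the known feature distribution:

```latex
p(\mathbf{x}) \;=\; \frac{p_z\big(T(\mathbf{x})\big)}{p_{0,z}\big(T(\mathbf{x})\big)}\, p_0(\mathbf{x}),
```

    where p_{0,z} is the distribution that p_0 induces on z = T(x); the MaxEnt step selects the reference so that p(x) has the highest possible entropy among all distributions consistent with p(z).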

  1. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  2. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  3. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits modeling biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  4. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
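    For a fixed tree, Maximum Parsimony ancestral state inference for a single character is classically done with Fitch's algorithm; a minimal sketch follows (the tree encoding and names are illustrative, and the abstract does not commit to this particular variant).

```python
def fitch(tree, leaf_states, root):
    """Fitch small parsimony on a rooted binary tree, one character.

    tree: dict mapping an internal node to its list of children.
    leaf_states: dict mapping each leaf to its observed state.
    Returns (candidate state sets per node, parsimony score).
    """
    sets, score = {}, 0

    def up(node):
        nonlocal score
        children = tree.get(node, [])
        if not children:                                  # leaf
            sets[node] = {leaf_states[node]}
            return
        for child in children:
            up(child)
        inter = set.intersection(*(sets[c] for c in children))
        if inter:
            sets[node] = inter
        else:                                             # union step: one change
            sets[node] = set.union(*(sets[c] for c in children))
            score += 1

    up(root)
    return sets, score

tree = {"root": ["a", "b"], "a": ["s1", "s2"], "b": ["s3", "s4"]}
states = {"s1": "A", "s2": "C", "s3": "C", "s4": "C"}
print(fitch(tree, states, "root"))  # root set {'C'}, parsimony score 1
```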

  5. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    Science.gov (United States)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
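    The min-max problem referred to above is, in its generic form (the weights Q, R and the polytopic uncertainty set Ω are assumptions of this sketch, not taken from the paper):

```latex
\min_{u}\;\max_{[A\,|\,B]\,\in\,\Omega}\;\sum_{k=0}^{\infty}\big(x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k\big),
\qquad x_{k+1} = A x_k + B u_k,
```

    which, as the abstract reports, is then recast as a minimisation problem subject to linear matrix inequality constraints.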

  6. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be the probability function, among all those calibrated to the evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  7. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  8. Interference-robust Air Interface for 5G Small Cells

    DEFF Research Database (Denmark)

    Tavares, Fernando Menezes Leitão

    the existing wireless network infrastructure to the limit. Mobile network operators must invest in network expansion to deal with this problem, but the predicted network requirements show that a new Radio Access Technology (RAT) standard will be fundamental to reach the future target performance. This new 5th ... to the fundamental role of inter-cell interference in this type of network, the inter-cell interference problem must be addressed from the beginning of the design of the new standard. This Ph.D. thesis deals with the design of an interference-robust air interface for 5G small cell networks. The interference...

  9. Maximum neutron yields in experimental fusion devices

    International Nuclear Information System (INIS)

    Jassby, D.L.

    1979-02-01

    The optimal performances of 12 types of fusion devices are compared with regard to neutron production rate, neutrons per pulse, and fusion energy multiplication, Q_p (converted to the equivalent value in D-T operation). The record values in all categories are held by the beam-injected tokamak plasma, followed by other beam-target systems. The achieved values of Q_p for nearly all laboratory plasma fusion devices (magnetically or inertially confined) are found to roughly satisfy a common empirical scaling, Q_p ≈ 10^-6 E_in^(3/2), where E_in is the energy (in kilojoules) injected into the plasma during one or two energy confinement times, or the total energy delivered to the target for inertially confined systems. Fusion energy break-even (Q_p = 1) in any system apparently requires E_in ≈ 10,000 kJ

  10. Ensemble Modeling for Robustness Analysis in engineering non-native metabolic pathways.

    Science.gov (United States)

    Lee, Yun; Lafontaine Rivera, Jimmy G; Liao, James C

    2014-09-01

    Metabolic pathways in cells must be sufficiently robust to tolerate fluctuations in expression levels and changes in environmental conditions. Perturbations in expression levels may lead to system failure due to the disappearance of a stable steady state. Increasing evidence has suggested that biological networks have evolved such that they are intrinsically robust in their network structure. In this article, we presented Ensemble Modeling for Robustness Analysis (EMRA), which combines a continuation method with the Ensemble Modeling approach, for investigating the robustness issue of non-native pathways. EMRA investigates a large ensemble of reference models with different parameters, and determines the effects of parameter drifting until a bifurcation point, beyond which a stable steady state disappears and system failure occurs. A pathway is considered to have high bifurcational robustness if the probability of system failure is low in the ensemble. To demonstrate the utility of EMRA, we investigate the bifurcational robustness of two synthetic central metabolic pathways that achieve carbon conservation: non-oxidative glycolysis and reverse glyoxylate cycle. With EMRA, we determined the probability of system failure of each design and demonstrated that alternative designs of these pathways indeed display varying degrees of bifurcational robustness. Furthermore, we demonstrated that target selection for flux improvement should consider the trade-offs between robustness and performance. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
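    EMRA couples Ensemble Modeling with a continuation method; the toy sketch below only illustrates the ensemble/bifurcation counting idea on a one-dimensional model whose saddle-node point is available in closed form. Every name and number here is a stand-in, not the EMRA implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy kinetics: dx/dt = k - gamma*x + x**2 (basal production k, first-order
# removal gamma, autocatalysis). A stable steady state exists iff
# k <= gamma**2 / 4; beyond that saddle-node point the state runs away.
def bifurcation_point(gamma):
    return gamma ** 2 / 4.0

# Ensemble of kinetically plausible parameter sets; ask how often a drift
# in k pushes a member past its bifurcation point (system failure).
gammas = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)
for k in (0.10, 0.25, 0.50):
    p_fail = np.mean(bifurcation_point(gammas) < k)
    print(f"k = {k:.2f}: P(no stable steady state) = {p_fail:.3f}")
```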

  11. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

    This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together, and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip...

  12. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    The advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures
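    One common recipe in this spirit (not necessarily the exact procedure of the study) replaces the classical covariance with a high-breakdown estimate before extracting principal components; below is a sketch using scikit-learn's minimum covariance determinant, with a support fraction mirroring the 5% trimming mentioned above.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X[:10] += 15.0                       # a few gross outliers

# Outlier-resistant covariance (MCD), then an eigendecomposition gives the
# robust principal axes; classical PCA would be pulled toward the outliers.
mcd = MinCovDet(support_fraction=0.95, random_state=0).fit(X)
eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]       # columns = robust principal components
print(eigvals[order])
```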

  13. Design Robust Controller for Rotary Kiln

    Directory of Open Access Journals (Sweden)

    Omar D. Hernández-Arboleda

    2013-11-01

    Full Text Available This paper presents the design of a robust controller for a rotary kiln. The designed controller combines a fractional PID and a linear quadratic regulator (LQR), a combination not previously used to control such kilns. In addition, robustness criteria (gain margin, phase margin, strength gain, high-frequency noise rejection and sensitivity) are evaluated for the entire controller-plant model, obtaining good results over a frequency range of 0.020 to 90 rad/s, which contributes to the robustness of the system.

  14. Towards distortion-free robust image authentication

    International Nuclear Information System (INIS)

    Coltuc, D

    2007-01-01

    This paper investigates a general framework for distortion-free robust image authentication by multiple marking. First, a subsampled version of the image edges is embedded by robust watermarking. Then, the information needed to recover the original image is inserted by reversible watermarking. The hiding capacity of the reversible watermarking is the essential requirement for this approach. Thus, in the case of no attacks, not only is the image authenticated but the original is also exactly recovered. In case of attacks, reversibility is lost, but the image can still be authenticated. Preliminary results providing very good robustness against JPEG compression are presented

  15. An Overview of the Adaptive Robust DFT

    Directory of Open Access Journals (Sweden)

    Djurović Igor

    2010-01-01

    Full Text Available This paper overviews basic principles and applications of the robust DFT (RDFT) approach, which is used for robust processing of frequency-modulated (FM) signals embedded in non-Gaussian heavy-tailed noise. In particular, we concentrate on the spectral analysis and filtering of signals corrupted by impulsive distortions using adaptive and nonadaptive robust estimators. Several adaptive estimators of location parameter are considered, and it is shown that their application is preferable with respect to non-adaptive counterparts. This fact is demonstrated by an efficiency comparison of adaptive and nonadaptive RDFT methods for different noise environments.
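    The basic nonadaptive member of this family replaces the average in the DFT sum with a sample median taken separately over real and imaginary parts; a one-bin sketch follows (the scaling convention is assumed). The adaptive estimators surveyed in the paper generalize this location estimate.

```python
import numpy as np

def robust_dft_bin(x, k):
    """Median-based robust DFT at bin k (vs. the mean-based standard DFT).

    Impulsive noise samples can shift a mean arbitrarily far but barely
    move the marginal medians of the demodulated samples.
    """
    n = np.arange(len(x))
    z = x * np.exp(-2j * np.pi * k * n / len(x))    # demodulated samples
    return len(x) * (np.median(z.real) + 1j * np.median(z.imag))
```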

  16. A robust interpretation of duration calculus

    DEFF Research Database (Denmark)

    Franzle, M.; Hansen, Michael Reichhardt

    2005-01-01

    We transfer the concept of robust interpretation from arithmetic first-order theories to metric-time temporal logics. The idea is that the interpretation of a formula is robust iff its truth value does not change under small variation of the constants in the formula. Exemplifying this on Duration Calculus (DC), our findings are that the robust interpretation of DC is equivalent to a multi-valued interpretation that uses the real numbers as semantic domain and assigns Lipschitz-continuous interpretations to all operators of DC. Furthermore, this continuity permits approximation between discrete...

  17. REINA at CLEF 2007 Robust Task

    OpenAIRE

    Zazo Rodríguez, Ángel Francisco; Figuerola, Carlos G.; Alonso Berrocal, José Luis

    2007-01-01

    This paper describes our work at the CLEF 2007 Robust Task. We have participated in the monolingual (English, French and Portuguese) and the bilingual (English to French) subtasks. At CLEF 2006 our research group obtained very good results applying local query expansion using windows of terms in the robust task. This year we have used the same expansion technique, but taking into account some criteria of robustness: MAP, GMAP, MMR, GS@10, P@10, number of failed topics, number of topics below 0.1 ...

  18. Danish Requirements for Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Christensen, H. H.

    2006-01-01

    This paper describes the background of the revised robustness requirements implemented in the Danish Code of Practice for Safety of Structures in 2003 [1, 2, 3]. According to the Danish design rules, robustness shall be documented for all structures where the consequences of failure are serious. This paper ... describes the background of the design procedure in the Danish codes, which shall be followed in order to document sufficient robustness in the following steps: Step 1: review of loads and possible failure modes/scenarios and determination of acceptable collapse extent. Step 2: review of the structural...

  19. Robustness-related issues in speaker recognition

    CERN Document Server

    Zheng, Thomas Fang

    2017-01-01

    This book presents an overview of speaker recognition technologies with an emphasis on dealing with robustness issues. Firstly, the book gives an overview of speaker recognition, such as the basic system framework, categories under different criteria, performance evaluation and its development history. Secondly, with regard to robustness issues, the book presents three categories, including environment-related issues, speaker-related issues and application-oriented issues. For each category, the book describes the current hot topics, existing technologies, and potential research focuses in the future. The book is a useful reference book and self-learning guide for early researchers working in the field of robust speech recognition.

  1. Robust Structured Control Design via LMI Optimization

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2011-01-01

    This paper presents a new procedure for discrete-time robust structured control design. Parameter-dependent nonconvex conditions for stabilizable and induced L2-norm performance controllers are solved by an iterative linear matrix inequality (LMI) optimization. A wide class of controller structures, including decentralized controllers of any order, fixed-order dynamic output feedback, and static output feedback, can be designed robust to polytopic uncertainties. Stability is proven by a parameter-dependent Lyapunov function. Numerical examples on robust stability margins show that the proposed procedure can...

  2. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water

  3. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  4. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  5. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking control (MPPT) technology for a solar heating system to minimize the pumping power consumption at an optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. The feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to be able to minimize the pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.

  6. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  7. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  8. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  9. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)] exp[−(1/λ + 1/θ)t] for t > 0. Also, the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
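    Because the exponential MLEs of the mean parameters are just the sample means, the plug-in estimator of A(t) is short enough to sketch; the data below are hypothetical.

```python
import numpy as np

def availability_mle(x, y, t):
    """Plug-in ML estimate of A(t) from n failure-repair cycles.

    x: observed times to failure, y: observed times to repair. With the
    exponential models parameterized by their means, the MLEs are the
    sample means, and invariance of the MLE carries them into A(t).
    """
    lam, theta = np.mean(x), np.mean(y)
    steady = lam / (lam + theta)
    return steady + (theta / (lam + theta)) * np.exp(-(1.0/lam + 1.0/theta) * t)

x = [120.0, 95.0, 160.0]    # hypothetical hours to failure
y = [4.0, 6.0, 5.0]         # hypothetical hours to repair
print(availability_mle(x, y, t=10.0))   # instantaneous availability
print(availability_mle(x, y, t=1e6))    # approaches the steady state
```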

  10. Holistic metrology qualification extension and its application to characterize overlay targets with asymmetric effects

    Science.gov (United States)

    Dos Santos Ferreira, Olavio; Sadat Gousheh, Reza; Visser, Bart; Lie, Kenrick; Teuwen, Rachel; Izikson, Pavel; Grzela, Grzegorz; Mokaberi, Babak; Zhou, Steve; Smith, Justin; Husain, Danish; Mandoy, Ram S.; Olvera, Raul

    2018-03-01

    Ever increasing need for tighter on-product overlay (OPO), as well as enhanced accuracy in overlay metrology and methodology, is driving semiconductor industry's technologists to innovate new approaches to OPO measurements. In case of High Volume Manufacturing (HVM) fabs, it is often critical to strive for both accuracy and robustness. Robustness, in particular, can be challenging in metrology since overlay targets can be impacted by proximity of other structures next to the overlay target (asymmetric effects), as well as symmetric stack changes such as photoresist height variations. Both symmetric and asymmetric contributors have impact on robustness. Furthermore, tweaking or optimizing wafer processing parameters for maximum yield may have an adverse effect on physical target integrity. As a result, measuring and monitoring physical changes or process abnormalities/artefacts in terms of new Key Performance Indicators (KPIs) is crucial for the end goal of minimizing true in-die overlay of the integrated circuits (ICs). IC manufacturing fabs often relied on CD-SEM in the past to capture true in-die overlay. Due to destructive and intrusive nature of CD-SEMs on certain materials, it's desirable to characterize asymmetry effects for overlay targets via inline KPIs utilizing YieldStar (YS) metrology tools. These KPIs can also be integrated as part of (μDBO) target evaluation and selection for final recipe flow. In this publication, the Holistic Metrology Qualification (HMQ) flow was extended to account for process induced (asymmetric) effects such as Grating Imbalance (GI) and Bottom Grating Asymmetry (BGA). Local GI typically contributes to the intrafield OPO whereas BGA typically impacts the interfield OPO, predominantly at the wafer edge. Stack height variations highly impact overlay metrology accuracy, in particular in case of multi-layer LithoEtch Litho-Etch (LELE) overlay control scheme. Introducing a GI impact on overlay (in nm) KPI check quantifies the

  11. Robust synthesis for real-time systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Legay, Axel; Traonouez, Luois-Marie

    2014-01-01

    Specification theories for real-time systems allow reasoning about interfaces and their implementation models, using a set of operators that includes satisfaction, refinement, logical and parallel composition. To make such theories applicable throughout the entire design process from an abstract specification to an implementation, we need to reason about the possibility to effectively implement the theoretical specifications on physical systems, despite their limited precision. In the literature, this implementation problem has been linked to the robustness problem that analyzes the consequences of introducing small perturbations into formal models. We address this problem of robust implementations in timed specification theories. We first consider a fixed perturbation and study the robustness of timed specifications with respect to the operators of the theory. To this end we synthesize robust...

  12. Robust adaptive synchronization of general dynamical networks ...

    Indian Academy of Sciences (India)

    Robust adaptive synchronization; dynamical network; multiple delays; multiple uncertainties. ... Networks such as neural networks, communication transmission networks, social relationship networks etc. ... a very good effect.

  13. Technical Challenges Hindering Development of Robust Wireless ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    Dec 1, 2015 ... challenges remain to be resolved, in designing robust wireless networks that can deliver the performance ... demonstrated the first radio transmission from the Isle of ... distances with better quality, less power, and smaller ...

  14. Multifidelity Robust Aeroelastic Design, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Nielsen Engineering & Research (NEAR) proposes a new method to generate mathematical models of wind-tunnel models and flight vehicles for robust aeroelastic...

  15. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available The paper describes a model of a multiprocessor computing system based on transputers, which permits the evaluation of structural robustness (viability, survivability).

  16. Design principles for robust oscillatory behavior.

    Science.gov (United States)

    Castillo-Hair, Sebastian M; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Oscillatory responses are ubiquitous in regulatory networks of living organisms, a fact that has led to extensive efforts to study and replicate the circuits involved. However, to date, design principles that underlie the robustness of natural oscillators are not completely known. Here we study a three-component enzymatic network model in order to determine the topological requirements for robust oscillation. First, by simulating every possible topological arrangement and varying their parameter values, we demonstrate that robust oscillators can be obtained by augmenting the number of both negative feedback loops and positive autoregulations while maintaining an appropriate balance of positive and negative interactions. We then identify network motifs, whose presence in more complex topologies is a necessary condition for obtaining oscillatory responses. Finally, we pinpoint a series of simple architectural patterns that progressively render more robust oscillators. Together, these findings can help in the design of more reliable synthetic biomolecular networks and may also have implications in the understanding of other oscillatory systems.

  17. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient

  18. Framework for Robustness Assessment of Timber Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for the design and analysis of robustness of timber structures. This is actualized by a more frequent use of advanced types of timber structures with limited redundancy and serious consequences in the case of failure. Combined with increased requirements to efficiency in design and execution, followed by increased risk of human errors, this has made requirements on the robustness of new structures essential. Further, the collapse of the Ballerup Super Arena, the Bad Reichenhall Ice-Arena and a number of other structural systems during the last 10 years has increased the interest in robustness. Typically, modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although the importance of robustness for structural design is widely recognized, the code requirements...

  19. Robust Analysis and Design of Multivariable Systems

    National Research Council Canada - National Science Library

    Tannenbaum, Allen

    1998-01-01

    In this Final Report, we will describe the work we have performed in robust control theory and nonlinear control, and the utilization of techniques in image processing and computer vision for problems in visual tracking...

  20. Robust Tracking Control for a Piezoelectric Actuator

    National Research Council Canada - National Science Library

    Salah, M; McIntyre, M; Dawson, D; Wagner, J

    2006-01-01

    In this paper, a hysteresis model-based nonlinear robust controller is developed for a piezoelectric actuator, utilizing a Lyapunov-based stability analysis, which ensures that a desired displacement...

  1. Robustness studies on coal gasification process variables

    African Journals Online (AJOL)

    coal before feeding to the gasification process [1]. .... to-control variables will make up the terms in the response surface model for the ... Montgomery (1999) explained that all the Taguchi engineering objectives for a robust ..... software [3].

  2. Antiproton Target

    CERN Multimedia

    1980-01-01

    Antiproton target used for the AA (antiproton accumulator). The first type of antiproton production target used from 1980 to 1982 comprised a rod of copper 3mm diameter and 120mm long embedded in a graphite cylinder that was itself pressed into a finned aluminium container. This assembly was air-cooled and it was used in conjunction with the Van der Meer magnetic horn. In 1983 Fermilab provided us with lithium lenses to replace the horn with a view to increasing the antiproton yield by about 30%. These lenses needed a much shorter target made of heavy metal - iridium was chosen for this purpose. The 50 mm iridium rod was housed in an extension to the original finned target container so that it could be brought very close to the entrance to the lithium lens. Picture 1 shows this target assembly and Picture 2 shows it mounted together with the lithium lens. These target containers had a short lifetime due to a combination of beam heating and radiation damage. This led to the design of the water-cooled target in...

  3. Antecedents and Dimensions of Supply Chain Robustness

    OpenAIRE

    Durach, Christian F.; Wieland, Andreas; Machuca, Jose A.D.

    2015-01-01

    Purpose – The purpose of this paper is to provide groundwork for an emerging theory of supply chain robustness – which has been conceptualized as a dimension of supply chain resilience – through reviewing and synthesizing related yet disconnected studies. The paper develops a formal definition of supply chain robustness to build a framework that captures the dimensions, antecedents and moderators of the construct as discussed in the literature. Design/methodology/approach – The...

  4. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through the corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  5. Optimal robust control strategy of a solid oxide fuel cell system

    Science.gov (United States)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed robust optimal control method can maintain safe SOFC system operation at maximum efficiency under load and uncertainty variations.

  6. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on maximum current searching, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the MPP of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the experimental results. (Author)
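
    A minimal sketch of the control idea (not the authors' implementation): a perturb-and-observe loop with a PI update on the converter duty factor, climbing toward the maximum SPE-side current. The plant model measure_spe_current() is a hypothetical stand-in for the real converter/electrolyser measurement, and the gains are illustrative only.

```python
def measure_spe_current(duty):
    # Toy PV/converter curve with a single interior maximum near duty = 0.6.
    return max(0.0, 10.0 - 40.0 * (duty - 0.6) ** 2)

def pi_mppt(steps=200, kp=0.002, ki=0.0005, d0=0.3, probe=0.01):
    duty, integral = d0, 0.0
    for _ in range(steps):
        # Probe the duty cycle; the finite-difference slope of the current
        # acts as the error signal driving the PI update.
        i_here = measure_spe_current(duty)
        i_probe = measure_spe_current(duty + probe)
        error = (i_probe - i_here) / probe
        integral += error
        duty = min(0.95, max(0.05, duty + kp * error + ki * integral))
    return duty, measure_spe_current(duty)

d, i = pi_mppt()
print(f"duty = {d:.3f}, SPE-side current = {i:.2f} A")
```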

  7. Adaptive Critic Nonlinear Robust Control: A Survey.

    Science.gov (United States)

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    Adaptive dynamic programming (ADP) and reinforcement learning are quite relevant to each other when performing intelligent optimization. They are both regarded as promising methods involving important components of evaluation and improvement, against the background of information technologies such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies under uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design of continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H ∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey is beneficial to promoting the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.

  8. Robustness analysis of interdependent networks under multiple-attacking strategies

    Science.gov (United States)

    Gao, Yan-Li; Chen, Shi-Ming; Nie, Sen; Ma, Fei; Guan, Jun-Jie

    2018-04-01

    The robustness of complex networks under attacks largely depends on the structure of a network and the nature of the attacks. Previous research on interdependent networks has focused on two types of initial attack: random attack and degree-based targeted attack. In this paper, a deliberate attack function is proposed, from which six deliberate attacking strategies can be derived by adjusting the tunable parameters. Moreover, the robustness of four types of interdependent networks (BA-BA, ER-ER, BA-ER and ER-BA) with different coupling modes (random, positive and negative correlation) is evaluated under the different attacking strategies. It is found that the positive coupling mode makes the vulnerability of the interdependent network depend entirely on the most vulnerable sub-network under deliberate attacks, whereas the random and negative coupling modes make its vulnerability depend mainly on the attacked sub-network. The robustness of the interdependent network is enhanced as the degree-degree correlation coefficient varies from positive to negative. The negative coupling mode is therefore relatively more optimal than the others, and can substantially improve the robustness of the ER-ER and ER-BA networks. In terms of attacking strategies on interdependent networks, the degree information of a node is more valuable than its betweenness. In addition, we found a more efficient attacking strategy for each coupled interdependent network and proposed the corresponding protection strategy for suppressing cascading failure. Our results can be very useful for the safety design and protection of interdependent networks.
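
    A toy single-network illustration of the kind of robustness comparison described above (the paper's coupled-network machinery and tunable attack function are omitted): degree-targeted removal versus random failure on ER and BA graphs, tracked by the giant-component fraction. Sizes and parameters are arbitrary.

```python
import random
import networkx as nx

def giant_fraction_after_attack(g, fraction, targeted, seed=0):
    """Remove a fraction of nodes and return the giant component size
    relative to the original network size."""
    h = g.copy()
    n0 = h.number_of_nodes()
    k = int(fraction * n0)
    if targeted:  # deliberate attack: remove highest-degree nodes first
        victims = sorted(h.degree, key=lambda kv: kv[1], reverse=True)[:k]
        h.remove_nodes_from([v for v, _ in victims])
    else:         # random failure
        rng = random.Random(seed)
        h.remove_nodes_from(rng.sample(list(h.nodes), k))
    return max(len(c) for c in nx.connected_components(h)) / n0

er = nx.erdos_renyi_graph(2000, 4 / 2000, seed=1)   # mean degree ~4
ba = nx.barabasi_albert_graph(2000, 2, seed=1)      # mean degree ~4
for name, g in (("ER", er), ("BA", ba)):
    print(name,
          "random:",   round(giant_fraction_after_attack(g, 0.3, False), 3),
          "targeted:", round(giant_fraction_after_attack(g, 0.3, True), 3))
```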

  9. Robust object tracking combining color and scale invariant features

    Science.gov (United States)

    Zhang, Shengping; Yao, Hongxun; Gao, Peipei

    2010-07-01

    Object tracking plays a very important role in many computer vision applications. However, its performance deteriorates significantly under the challenges of complex scenes, such as pose and illumination changes and cluttered backgrounds. In this paper, we propose a robust object tracking algorithm which exploits both global color and local scale-invariant (SIFT) features in a particle filter framework. Due to the expensive computational cost of SIFT features, the proposed tracker adopts a sped-up variant of SIFT, SURF, to extract local features. Specifically, the proposed method first finds matching points between the target model and the target candidate; then the weight of the corresponding particle based on scale-invariant features is computed as the proportion of that particle's matching points to the matching points of all particles; finally the weight of the particle is obtained by combining the color and SURF feature weights in a probabilistic way. The experimental results on a variety of challenging videos verify that the proposed method is robust to pose and illumination changes and is significantly superior to the standard particle filter tracker and the mean shift tracker.
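
    A sketch of the weight-fusion step only, under one plausible reading of the abstract (function and variable names are ours): each particle's SURF weight is its share of feature matches, the color weight is a normalized likelihood, and the two are fused multiplicatively and renormalized.

```python
import numpy as np

def fuse_particle_weights(color_likelihood, surf_matches, alpha=0.5):
    """Log-linear fusion of color and SURF cues; alpha balances the two."""
    color_w = color_likelihood / color_likelihood.sum()
    surf_w = surf_matches / max(surf_matches.sum(), 1e-12)
    w = color_w ** alpha * surf_w ** (1.0 - alpha)
    return w / w.sum()

color_likelihood = np.array([0.9, 0.4, 0.1, 0.6])   # e.g. histogram similarity
surf_matches = np.array([12.0, 3.0, 0.0, 7.0])      # matches per particle
print(fuse_particle_weights(color_likelihood, surf_matches))
```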

  10. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of 200 target tomato samples were recognized. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.

  11. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion.

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-29

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of 200 target tomato samples were recognized. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.
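
    A compressed sketch of the pipeline with OpenCV (not the authors' code): the paper fuses the a* and I feature images with a wavelet transform, for which a plain pixel average is substituted here to keep the example short; "tomato.jpg" is a placeholder input path.

```python
import cv2
import numpy as np

img = cv2.imread("tomato.jpg")                        # BGR, uint8
a_chan = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)[:, :, 1].astype(np.float32)

b, g, r = (img[:, :, i].astype(np.float32) for i in range(3))
i_chan = 0.596 * r - 0.274 * g - 0.322 * b            # I component of YIQ

def to_u8(x):
    return cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Pixel-level fusion (simple average standing in for the wavelet fusion).
fused = to_u8(0.5 * a_chan + 0.5 * to_u8(i_chan).astype(np.float32))

# Adaptive (Otsu) threshold, then a morphological opening to remove noise.
_, mask = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("tomato_mask.png", mask)
```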

  12. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities of O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by
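
    Not the MMGDX algorithm itself, but a generic illustration of the underlying idea: back-propagating a maximum-margin (hinge) objective through both layers of a small MLP, here in PyTorch with labels in {-1, +1}. All names and numbers are ours.

```python
import torch

torch.manual_seed(0)
X = torch.randn(400, 2)
y = torch.sign(X[:, 0] * X[:, 1])                  # XOR-like binary problem

model = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for _ in range(300):
    margins = y * model(X).squeeze(1)              # functional margins y*f(x)
    loss = torch.clamp(1.0 - margins, min=0.0).mean()  # hinge: push past 1
    opt.zero_grad()
    loss.backward()
    opt.step()

accuracy = ((model(X).squeeze(1) * y) > 0).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```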

  13. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  14. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.

  15. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.
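
    A toy 1-D analogue of MEM deconvolution under stated simplifications (ours, not the paper's processing chain): gradient ascent on a Gaussian log-likelihood plus an entropy regularizer with a positivity clip; alpha trades resolution against noise amplification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
truth = np.zeros(n)
truth[40] = 1.0                  # a sharp feature on a low plateau
truth[50:80] = 0.3
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()                 # measured point-spread function (toy)

def blur(f):
    return np.convolve(f, psf, mode="same")

noise = 0.01
data = blur(truth) + noise * rng.standard_normal(n)

f, alpha, step = np.full(n, 0.1), 0.05, 1e-4
for _ in range(3000):
    resid = blur(f) - data
    grad_like = -np.convolve(resid, psf[::-1], mode="same") / noise**2
    grad_ent = -np.log(f) - 1.0          # gradient of S(f) = -sum f log f
    f = np.clip(f + step * (grad_like + alpha * grad_ent), 1e-8, None)

print("sharpest recovered peak at index", int(np.argmax(f)))
```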

  16. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of the available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are limited to two-class classification and cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm performs acceptably for hyperspectral data clustering.
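
    A schematic of the alternating idea only, a rough surrogate rather than the article's solver: fix the labels and fit a linear SVM, then relabel points by thresholding the decision values at their median, a crude stand-in for the class-balance constraint that prevents the trivial single-cluster solution.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, _ = make_blobs(n_samples=200, centers=2, random_state=0)
labels = (X[:, 0] > np.median(X[:, 0])).astype(int)   # rough initial split

for _ in range(10):
    svm = LinearSVC(C=1.0, max_iter=10000).fit(X, labels)
    scores = svm.decision_function(X)
    new_labels = (scores > np.median(scores)).astype(int)
    if np.array_equal(new_labels, labels):             # labeling has converged
        break
    labels = new_labels

print("cluster sizes:", np.bincount(labels))
```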

  17. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  18. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3
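
    The normalization quoted in the abstract is a plain inverse-square scaling; a worked example with illustrative numbers:

```python
def normalize_flux(flux, r, r0=25.0):
    """Refer a flux measured at r (Earth radii) to r0, assuming an
    r**-2 fall-off, as in the abstract."""
    return flux * (r / r0) ** 2

# A flux observed near lunar distance (~60 R_E) referred back to 25 R_E:
print(normalize_flux(5e-14, r=60.0))   # 2.88e-13 W m^-2 Hz^-1
```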

  19. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the arrival time difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. Experiments have shown that this method can provide much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
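
    The arrival-time-difference computation that the ML window refines, sketched with synthetic signals (parameter values assumed; the ML windowing itself is omitted): cross-correlate the two sensor signals, convert the peak lag to a delay, and place the leak between the sensors.

```python
import numpy as np

fs = 10_000.0        # sampling rate, Hz
v = 1200.0           # elastic wave speed in the pipe, m/s (assumed)
L = 300.0            # sensor spacing, m

rng = np.random.default_rng(1)
leak = rng.standard_normal(4096)
shift = int(0.05 * fs)                        # true delay: 50 ms
s1 = np.concatenate([leak, np.zeros(shift)])  # sensor 1 hears the leak first
s2 = np.concatenate([np.zeros(shift), leak])  # sensor 2 hears it 50 ms later

xcorr = np.correlate(s2, s1, mode="full")
lag = np.argmax(xcorr) - (len(s1) - 1)        # samples by which s2 trails s1
tau = lag / fs                                # t2 - t1 = (L - 2x) / v
x = (L - v * tau) / 2                         # leak position from sensor 1
print(f"tau = {tau * 1e3:.1f} ms, leak at {x:.1f} m from sensor 1")
```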

  20. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
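
    For concreteness, the bottom-up pass of Fitch's algorithm, the textbook way to compute MP ancestral state sets on a fully bifurcating tree at a single site (an illustration, not the authors' code):

```python
def fitch(node, leaf_states):
    """node: a leaf name or a (left, right) tuple; returns (state set, cost)."""
    if not isinstance(node, tuple):
        return leaf_states[node], 0
    sl, cl = fitch(node[0], leaf_states)
    sr, cr = fitch(node[1], leaf_states)
    if sl & sr:                       # intersection: no substitution needed
        return sl & sr, cl + cr
    return sl | sr, cl + cr + 1       # union: one substitution on this branch

tree = ((("t1", "t2"), "t3"), ("t4", "t5"))
states = {"t1": {"a"}, "t2": {"a"}, "t3": {"c"}, "t4": {"a"}, "t5": {"g"}}
root_set, changes = fitch(tree, states)
print(root_set, changes)   # {'a'}, 2: 'a' is the unambiguous MP root estimate
```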

  1. Mechanical Design for Robustness of the LHC Collimators

    CERN Document Server

    Bertarelli, Alessandro; Assmann, R W; Calatroni, Sergio; Dallocchio, Alessandro; Kurtyka, Tadeusz; Mayer, Manfred; Perret, Roger; Redaelli, Stefano; Robert-Demolaize, Guillaume

    2005-01-01

    The functional specification of the LHC Collimators requires, for the start-up of the machine and the initial luminosity runs (Phase 1), a collimation system with maximum robustness against abnormal beam operating conditions. The most severe cases to be considered in the mechanical design are the asynchronous beam dump at 7 TeV and the 450 GeV injection error. To ensure that the collimator jaws survive such accident scenarios, low-Z materials were chosen, driving the design towards Graphite or Carbon/Carbon composites. Furthermore, in-depth thermo-mechanical simulations, both static and dynamic, were necessary.This paper presents the results of the numerical analyses performed for the 450 GeV accident case, along with the experimental results of the tests conducted on a collimator prototype in Cern TT40 transfer line, impacted by a 450 GeV beam of 3.1·1013

  2. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
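
    The basic combiners discussed above, applied to a matrix of classifier outputs (rows are classifiers, columns are class scores); the names and data are illustrative:

```python
import numpy as np

def os_combine(outputs, kind="median", trim=1):
    srt = np.sort(outputs, axis=0)          # order statistics per class
    if kind == "max":
        return srt[-1]
    if kind == "median":
        return np.median(srt, axis=0)
    if kind == "trim":                      # drop extremes, average the rest
        return srt[trim:-trim].mean(axis=0)
    raise ValueError(kind)

outputs = np.array([[0.7, 0.3],     # three classifiers, two classes;
                    [0.6, 0.4],     # the third classifier is badly wrong
                    [0.1, 0.9]])
for kind in ("max", "median", "trim"):
    scores = os_combine(outputs, kind)
    print(f"{kind:6s} {scores} -> class {int(np.argmax(scores))}")
```

    Here the median and trimmed combiners ignore the one badly wrong classifier while the maximum follows it, the effect that the error analysis above quantifies.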

  3. A Robust Response of the Hadley Circulation to Global Warming

    Science.gov (United States)

    Lau, William K M.; Kim, Kyu-Myong

    2014-01-01

    Tropical rainfall is expected to increase in a warmer climate. Yet, recent studies have inferred that the Hadley Circulation (HC), which is primarily driven by latent heating from tropical rainfall, is weakened under global warming. Here, we show evidence of a robust intensification of the HC from analyses of 33 CMIP5 model projections under a scenario of a 1% per year CO2 emission increase. The intensification is manifested in a deep-tropics squeeze, characterized by a pronounced increase in the zonal mean ascending motion in the mid and upper troposphere, a deepening and narrowing of the convective zone and enhanced rainfall in the deep tropics. These changes occur in conjunction with a rise in the region of maximum outflow of the HC, with accelerated meridional mass outflow in the uppermost branch of the HC away from the equator, coupled to a weakened inflow in the return branches of the HC in the lower troposphere.

  4. Measure of robustness for complex networks

    Science.gov (United States)

    Youssef, Mina Nabil

    Critical infrastructures are repeatedly attacked by external triggers causing tremendous amounts of damage. Any infrastructure can be studied using the powerful theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade the network performance. These attacks and failures are considered as disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and the cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damage and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the Viral Conductance (VC_SIS) to assess the robustness of networks with respect to the spread of epidemics that are modeled through the susceptible/infected/susceptible (SIS) epidemic approach. In contrast to assessing the robustness of networks based on a classical metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state for all possible effective infection strengths. Through examples, VC_SIS provides more insights about the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabasi-Albert preferential attachment networks and the effect of the topology on the steady state infection are studied, to show the importance of quantifying the robustness of networks. Second, a new metric VC_SIR is introduced to assess the robustness of networks with respect

  5. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  6. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  7. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  8. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and correlation in the subpixel range... The interactive software is also part of a computer-assisted learning program on digital photogrammetry.

  9. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power (Pmax) using the optimal duty ratio (D) for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of the power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, unlike traditional MPPT algorithms, which are based more on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, are proved.
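
    A worked load-matching example for the boost case, using the standard continuous-conduction relations rather than the paper's derivation: a boost stage presents R_in = R_load(1 - D)² to the panel, so matching R_in to the panel's maximum-power resistance fixes the optimal duty ratio (panel values assumed).

```python
import math

def boost_duty_for_mpp(r_mpp, r_load):
    """Optimal duty ratio so the boost input resistance equals r_mpp.
    Valid in continuous conduction when r_load > r_mpp."""
    if r_load <= r_mpp:
        raise ValueError("a boost stage cannot match: need r_load > r_mpp")
    return 1.0 - math.sqrt(r_mpp / r_load)

v_mpp, i_mpp = 17.0, 3.5          # example panel maximum power point
r_mpp = v_mpp / i_mpp             # ~4.86 ohm
print(f"optimal D = {boost_duty_for_mpp(r_mpp, r_load=20.0):.3f}")
```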

  10. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
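
    The pairwise criterion underlying the clique construction, for binary characters (a sketch of the standard test, not the article's algorithm): two sites are compatible iff at most three of the four gametes 00, 01, 10, 11 occur, with ambiguous entries skipped.

```python
def compatible(site_a, site_b):
    gametes = {(a, b) for a, b in zip(site_a, site_b)
               if a != "?" and b != "?"}      # tolerate ambiguity marks
    return len(gametes) <= 3

print(compatible("00110", "01?10"))   # True: only 3 gametes observed
print(compatible("00110", "01100"))   # False: all 4 gametes occur
```

    In the maximum compatibility setting, sites become vertices of a graph with edges between compatible pairs; a maximum clique then yields a largest set of mutually compatible characters.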

  11. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by these different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to a refinery production planning and a batch process scheduling problem are presented. PMID:21935263
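
    For the simplest of these sets, the interval (box) set, the robust counterpart of a single constraint a·x ≤ b with aᵢ ∈ [āᵢ − âᵢ, āᵢ + âᵢ] and x ≥ 0 reduces to (ā + â)·x ≤ b; a toy comparison with scipy (numbers arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])     # maximize 3*x1 + 2*x2 via min of -(3*x1 + 2*x2)
abar = np.array([2.0, 1.0])    # nominal constraint row
ahat = np.array([0.4, 0.2])    # interval half-widths

nominal = linprog(c, A_ub=[abar], b_ub=[10.0], bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=[abar + ahat], b_ub=[10.0], bounds=[(0, None)] * 2)
print("nominal objective:", -nominal.fun)   # optimistic value
print("robust objective: ", -robust.fun)    # the price of protection
```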

  12. Progress on LMJ targets for ignition

    Energy Technology Data Exchange (ETDEWEB)

    Cherfils-Clerouin, C; Boniface, C; Bonnefille, M; Dattolo, E; Galmiche, D; Gauthier, P; Giorla, J; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Masson-Laborde, P E; Monteil, M C; Poggi, F; Seytor, P; Wagon, F; Willien, J L, E-mail: catherine.cherfils@cea.f [CEA, DAM, DIF, F-91297 Arpajon (France)

    2009-12-15

    Targets designed to produce ignition on the Laser Megajoule (LMJ) are being simulated in order to set specifications for target fabrication. The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets needing reduced laser energy, with only a small decrease in robustness, have therefore been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-ball-shaped cocktail hohlraum; with these improvements, a target based on the 240-beam A1040 capsule can fit within the 160-beam laser energy-power space. Robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning experiments.

  13. Progress on LMJ targets for ignition

    International Nuclear Information System (INIS)

    Cherfils-Clerouin, C; Boniface, C; Bonnefille, M; Dattolo, E; Galmiche, D; Gauthier, P; Giorla, J; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Masson-Laborde, P E; Monteil, M C; Poggi, F; Seytor, P; Wagon, F; Willien, J L

    2009-01-01

    Targets designed to produce ignition on the Laser Megajoule (LMJ) are being simulated in order to set specifications for target fabrication. The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets needing reduced laser energy, with only a small decrease in robustness, have therefore been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-ball-shaped cocktail hohlraum; with these improvements, a target based on the 240-beam A1040 capsule can fit within the 160-beam laser energy-power space. Robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning experiments.

  14. Intelligent and robust optimization frameworks for smart grids

    Science.gov (United States)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits, circumventing nonlinear model complexities, and handling uncertainties for superior real-time operations. The proposed intelligent system framework optimizes the smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic

  15. Does a crouched leg posture enhance running stability and robustness?

    Science.gov (United States)

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height

  16. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination using the least number of pumping and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experimental design and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. The results highlight the impacts of scedasticity in future observation data, as well as of the uncertainty sources, on potential pumping and observation locations.

  17. Project Robust Scheduling Based on the Scattered Buffer Technology

    Directory of Open Access Journals (Sweden)

    Nansheng Pang

    2018-04-01

    Full Text Available The research object in this paper is the sub-network formed by the predecessors' effect on the solution activity. This paper studies three types of influencing factors from the predecessors that delay the starting time of the solution activity on the longest path, and analyzes the degree to which each type of factor delays the solution activity's starting time. On this basis, through a comprehensive analysis of the various factors that influence the solution activity, this paper proposes a metric for evaluating the solution robustness of the project schedule, and this metric is taken as the optimization goal. This paper also adopts an iterative process to design a scattered-buffer heuristic algorithm based on robust scheduling of the time buffer. The resource flow network is introduced in this algorithm, using a tabu search algorithm to solve the baseline schedule. To generate the resource flow network in the baseline schedule, the algorithm includes a resource allocation procedure that makes maximum use of the precedence relations. Finally, the algorithm proposed in this paper and other algorithms from the previous literature are compared in a simulation experiment; the comparative analysis shows that the algorithm proposed in this paper is reasonable and feasible.

  18. Tracking Target and Spiral Waves

    DEFF Research Database (Denmark)

    Jensen, Flemming G.; Sporring, Jon; Nielsen, Mads

    2002-01-01

    A new algorithm for analyzing the evolution of patterns of spiral and target waves in large aspect ratio chemical systems is introduced. The algorithm does not depend on finding the spiral tip but locates the center of the pattern by a new concept, called the spiral focus, which is defined by the evolutes of the actual spiral or target wave. With the use of Gaussian smoothing, a robust method is developed that permits the identification of target and spiral foci independently of the wave profile. Examples of an analysis of long image sequences from experiments with the Belousov–Zhabotinsky reaction catalyzed by ruthenium-tris-bipyridyl are presented. Moving target and spiral foci are found, and the speed and direction of movement of single as well as double spiral foci are investigated. For the experiments analyzed in this paper it is found that the movement of a focus correlates with foci

  19. Optimal Constellation Design for Maximum Continuous Coverage of Targets Against a Space Background

    Science.gov (United States)

    2012-05-31

    [Figure residue from the source thesis: Case B2B (R sin 2γ > h, h > 0); Figures 50 and 51 sketch the cutting-plane geometry for the sub-cases |φ| + γ < π/2 and |φ| + γ > π/2.]

  20. Robustness analysis of chiller sequencing control

    International Nuclear Information System (INIS)

    Liao, Yundan; Sun, Yongjun; Huang, Gongsheng

    2015-01-01

    Highlights: • Uncertainties in chiller sequencing control were systematically quantified. • The robustness of chiller sequencing control was systematically analyzed. • Different sequencing control strategies are sensitive to different uncertainties. • A numerical method was developed for easy selection of chiller sequencing control. - Abstract: A multiple-chiller plant is commonly employed in heating, ventilating and air-conditioning systems to increase operational feasibility and energy efficiency under part-load conditions. In a multiple-chiller plant, chiller sequencing control plays a key role in achieving overall energy efficiency without sacrificing the cooling sufficiency needed for indoor thermal comfort. Various sequencing control strategies have been developed and implemented in practice. Based on the observations that (i) uncertainty, which cannot be avoided in chiller sequencing control, has a significant impact on the control performance and may cause the control to fail to achieve the expected control and/or energy performance; and (ii) few studies in the current literature have systematically addressed this issue, this paper presents a study on the robustness of chiller sequencing control in order to understand the robustness of various chiller sequencing control strategies under different types of uncertainty. Based on the robustness analysis, a simple and applicable method is developed to select the most robust control strategy for a given chiller plant in the presence of uncertainties, which is verified using case studies.

  1. On the robustness of Herlihy's hierarchy

    Science.gov (United States)

    Jayanti, Prasad

    1993-01-01

    A wait-free hierarchy maps object types to levels in Z⁺ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  2. Replication and robustness in developmental research.

    Science.gov (United States)

    Duncan, Greg J; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J

    2014-11-01

    Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key results are robust across estimation methods, data sets, and demographic subgroups. This article makes the case for prioritizing both explicit replications and, especially, within-study robustness checks in developmental psychology. It provides evidence on variation in effect sizes in developmental studies and documents strikingly different replication and robustness-checking practices in a sample of journals in developmental psychology and a sister behavioral science, applied economics. Our goal is not to show that any one behavioral science has a monopoly on best practices, but rather to show how journals from a related discipline address vital concerns of replication and generalizability shared by all social and behavioral sciences. We provide recommendations for promoting graduate training in replication and robustness-checking methods and for editorial policies that encourage these practices. Although some of our recommendations may shift the form and substance of developmental research articles, we argue that they would generate considerable scientific benefits for the field. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. Emergence of robustness in networks of networks

    Science.gov (United States)

    Roth, Kevin; Morone, Flaviano; Min, Byungjoon; Makse, Hernán A.

    2017-06-01

    A model of interdependent networks of networks (NONs) was introduced recently [Proc. Natl. Acad. Sci. (USA) 114, 3849 (2017), 10.1073/pnas.1620808114] in the context of brain activation to identify the neural collective influencers in the brain NON. Here we investigate the emergence of robustness in such a model, and we develop an approach to derive an exact expression for the random percolation transition in Erdös-Rényi NONs of this kind. Analytical calculations are in agreement with numerical simulations, and highlight the robustness of the NON against random node failures, which thus presents a new robust universality class of NONs. The key aspect of this robust NON model is that a node can be activated even if it does not belong to the giant mutually connected component, thus allowing the NON to be built from below the percolation threshold, which is not possible in previous models of interdependent networks. Interestingly, the phase diagram of the model unveils particular patterns of interconnectivity for which the NON is most vulnerable, thereby marking the boundary above which the robustness of the system improves with increasing dependency connections.

  4. H∞ Robust Control of a Large-Piston MEMS Micromirror for Compact Fourier Transform Spectrometer Systems

    Directory of Open Access Journals (Sweden)

    Huipeng Chen

    2018-02-01

    Full Text Available Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm⁻¹ under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.

  5. H∞ Robust Control of a Large-Piston MEMS Micromirror for Compact Fourier Transform Spectrometer Systems.

    Science.gov (United States)

    Chen, Huipeng; Li, Mengyuan; Zhang, Yi; Xie, Huikai; Chen, Chang; Peng, Zhangming; Su, Shaohui

    2018-02-08

    Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm⁻¹ under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.

  6. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two
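
    The abstract above is only an introduction, but the robust-counterpart mechanics that adjustable robust optimization extends can be shown in a few lines: under box uncertainty, a worst-case constraint is enforced at the interval endpoints. Below is a minimal static (non-adjustable) sketch with scipy on made-up data; a genuinely adjustable model would additionally let some decisions depend on the uncertainty revealed so far.

        from scipy.optimize import linprog

        # Robust counterpart of  max c'x  s.t.  a'x <= b, x >= 0,
        # where each a_i lies in [a0_i - da_i, a0_i + da_i].
        # For x >= 0 the worst case is simply a_i = a0_i + da_i.
        c = [-3.0, -2.0]                  # negate profits: linprog minimizes
        a0 = [1.0, 1.0]
        da = [0.2, 0.5]                   # uncertainty half-widths
        b = 10.0

        worst_a = [a + d for a, d in zip(a0, da)]
        res = linprog(c, A_ub=[worst_a], b_ub=[b], bounds=[(0, None)] * 2)
        print("robust solution:", res.x, "worst-case objective:", -res.fun)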

  7. 75 FR 8902 - Funding Opportunity Title: Crop Insurance Education in Targeted States (Targeted States Program)

    Science.gov (United States)

    2010-02-26

    ... and Target Audience D. Maximum Award E. Project Period F. Description of Agreement Award--Awardee.... Location and Target Audience Targeted States serviced by RMA Regional Offices are listed below. Staff from... established farmers or ranchers who are converting production and marketing systems to pursue new markets. D...

  8. Targeted Learning

    CERN Document Server

    van der Laan, Mark J

    2011-01-01

    The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements for a single subject. The field is ready to move towards clear objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the target ...
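
    Point (1) of the abstract, cross-validation as an objective estimator selection tool, is easy to illustrate. A minimal sketch with scikit-learn on simulated data follows; the candidate learners and settings are arbitrary, and the full targeted learning workflow adds a targeting step not shown here.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LassoCV, LinearRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_regression(n_samples=300, n_features=50, noise=5.0,
                               random_state=0)

        # Let cross-validation, not the analyst, pick among candidate learners.
        candidates = {
            "ols": LinearRegression(),
            "lasso": LassoCV(cv=5),
            "forest": RandomForestRegressor(n_estimators=200, random_state=0),
        }
        scores = {name: cross_val_score(est, X, y, cv=5).mean()
                  for name, est in candidates.items()}
        print("selected learner:", max(scores, key=scores.get), scores)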

  9. Target preparation

    International Nuclear Information System (INIS)

    Hinn, G.M.

    1984-01-01

    A few of the more interesting of the 210 targets prepared in the Laboratory last year are listed. In addition the author continues to use powdered silver mixed with ⁹,¹⁰BeO to produce sources for accelerator radio dating of Alaskan and South Polar snow. Currently, he is trying to increase production by multiple sample processing. Also the author routinely makes 3 μg/cm² cracked slacked carbon stripper foils and is continuing research, with some degree of success, in making enriched ²⁸Si targets starting with the oxide.

  10. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice are harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML ...
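
    The concentrated ML cost for this model is a nonlinear least-squares fit over the fundamental frequency and chirp rate, with the harmonic amplitudes solved linearly. The sketch below evaluates that cost by brute force on a 2-D grid, exactly the expense a fast algorithm is designed to avoid; the signal, grid and parameters are illustrative, not taken from the paper.

        import numpy as np

        def chirp_fit_energy(x, t, f0, k, L):
            # Project x onto L harmonics with base frequency f0 (Hz) and chirp
            # rate k (Hz/s); the captured energy is the concentrated ML cost.
            ph = 2 * np.pi * np.outer(t * f0 + 0.5 * k * t**2, np.arange(1, L + 1))
            Z = np.concatenate([np.cos(ph), np.sin(ph)], axis=1)
            amp, *_ = np.linalg.lstsq(Z, x, rcond=None)
            return np.linalg.norm(Z @ amp) ** 2

        fs, n, L = 8000.0, 400, 2
        t = np.arange(n) / fs
        phase = 440.0 * t + 0.5 * 300.0 * t**2      # f0 = 440 Hz, k = 300 Hz/s
        x = (np.cos(2 * np.pi * phase) + 0.5 * np.cos(4 * np.pi * phase)
             + 0.1 * np.random.default_rng(0).standard_normal(n))

        # Brute-force grid search over (f0, k)
        best = max(((chirp_fit_energy(x, t, f0, k, L), f0, k)
                    for f0 in np.linspace(400, 480, 81)
                    for k in np.linspace(0, 600, 61)))
        print("estimated (f0, chirp rate):", best[1:])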

  11. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate weights to each fragment, this method first defines discrimination and uniqueness for each local fragment and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid-fragment template combining the displacement constraint and the similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
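
    The record gives no code, but the fragment/colour-histogram ingredients can be sketched with OpenCV. This is an illustration only: the Harris-SIFT validity filter, displacement constraint and template update from the paper are omitted, and the grid size and bin counts are arbitrary choices.

        import cv2
        import numpy as np

        def fragment_histograms(patch, grid=(4, 4)):
            # Split the object patch into a grid of fragments and compute a
            # normalized HSV colour histogram per fragment.
            hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
            h, w = hsv.shape[:2]
            hists = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    frag = hsv[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
                    hist = cv2.calcHist([frag], [0, 1], None, [16, 8],
                                        [0, 180, 0, 256])
                    hists.append(cv2.normalize(hist, None).flatten())
            return hists

        def fragment_distances(h1, h2):
            # Per-fragment Bhattacharyya distance: small values suggest a
            # fragment still matches; large ones hint at occlusion/deformation.
            return [cv2.compareHist(a, b, cv2.HISTCMP_BHATTACHARYYA)
                    for a, b in zip(h1, h2)]

        patch = np.random.default_rng(0).integers(0, 255, (64, 64, 3), np.uint8)
        print(fragment_distances(fragment_histograms(patch),
                                 fragment_histograms(patch))[:4])   # all ~0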

  12. Occupant behaviour and robustness of building design

    DEFF Research Database (Denmark)

    Buso, Tiziana; Fabi, Valentina; Andersen, Rune Korsholm

    2015-01-01

    Occupant behaviour can cause major discrepancies between the designed and the real total energy use in buildings. A possible solution to reduce the differences between predictions and actual performances is designing robust buildings, i.e. buildings whose performances show little variations with alternating occupant behaviour patterns. The aim of this work was to investigate how alternating occupant behaviour patterns impact the performance of different envelope design solutions in terms of building robustness. Probabilistic models of occupants' window opening and use of shading were implemented in a dynamic building energy simulation tool (IDA ICE). The analysis was carried out by simulating 15 building envelope designs in different thermal zones of an Office Reference Building in 3 climates: Stockholm, Frankfurt and Athens. In general, robustness towards changes in occupants' behaviour increased ...
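
    The probabilistic behaviour models mentioned above are typically simple response functions sampled each timestep. A minimal sketch of that idea follows: a logistic window-opening probability driven by indoor temperature, with Bernoulli draws per timestep. The coefficients are placeholders, not fitted values from the study.

        import numpy as np

        def p_open(t_in, b0=-12.0, b1=0.45):
            # Illustrative logistic model: probability that an occupant opens
            # a window in a given timestep at indoor temperature t_in (deg C).
            return 1.0 / (1.0 + np.exp(-(b0 + b1 * t_in)))

        rng = np.random.default_rng(0)
        for t_in in (20.0, 24.0, 28.0):
            opens = rng.random(10_000) < p_open(t_in)   # Bernoulli draws
            print(t_in, round(float(p_open(t_in)), 3), opens.mean())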

  13. Robust Mediation Analysis Based on Median Regression

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
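
    In the classical product-of-coefficients approach, mediation is quantified by fitting two regressions and multiplying the x-to-m and m-to-y slopes; the proposal above replaces the OLS fits with median (q = 0.5) regressions. A minimal sketch with statsmodels on simulated heavy-tailed data, with all coefficients illustrative:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)                        # treatment
        m = 0.5 * x + rng.standard_t(df=2, size=n)    # mediator, heavy-tailed noise
        y = 0.4 * m + 0.2 * x + rng.standard_t(df=2, size=n)

        # Median regressions replace the usual OLS fits
        a = sm.QuantReg(m, sm.add_constant(x)).fit(q=0.5).params[1]
        b = sm.QuantReg(y, sm.add_constant(np.column_stack([x, m]))).fit(q=0.5).params[2]
        print("indirect (mediated) effect a*b:", a * b)   # true value 0.5 * 0.4 = 0.2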

  14. Robustness of Distance-to-Default

    DEFF Research Database (Denmark)

    Jessen, Cathrine; Lando, David

    2013-01-01

    Distance-to-default is a remarkably robust measure for ranking firms according to their risk of default. The ranking seems to work despite the fact that the Merton model from which the measure is derived produces default probabilities that are far too small when applied to real data. We use simulations to investigate the robustness of the distance-to-default measure to different model specifications. Overall we find distance-to-default to be robust to a number of deviations from the simple Merton model that involve different asset value dynamics and different default triggering mechanisms. A notable exception is a model with stochastic volatility of assets. In this case both the ranking of firms and the estimated default probabilities using distance-to-default perform significantly worse. We therefore propose a volatility adjustment of the distance-to-default measure that significantly ...
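
    Distance-to-default itself is a one-line formula in the Merton model: the number of asset-volatility standard deviations by which the log asset value exceeds the default point at horizon T. A minimal sketch with illustrative numbers follows; in practice the asset value and volatility are unobserved and must be backed out from equity data.

        import numpy as np

        def distance_to_default(V, D, mu, sigma, T=1.0):
            # Merton-model distance-to-default for asset value V, default
            # barrier D, asset drift mu and asset volatility sigma.
            return (np.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))

        print(distance_to_default(V=120.0, D=100.0, mu=0.08, sigma=0.25))   # ~0.92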

  15. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    Full Text Available In this paper, a new robust model of the multi-period portfolio problem is developed. One of the key concerns in any asset allocation problem is how to cope with uncertainty about future returns. There are some approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size to the point that the resulting model is intractable. In this paper, a novel approach is proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization techniques are then used to solve the problem. In order to evaluate the performance of the proposed model, a numerical example is presented using simulated data.
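
    The record's model is multi-period and built on the single-index factor structure, but the core effect of robust optimization, optimizing against worst-case rather than nominal returns, shows up already in a one-period toy problem. A sketch with scipy and made-up numbers:

        from scipy.optimize import linprog

        # Long-only, fully invested, box uncertainty on expected returns:
        # each mu_i may fall by up to delta_i, so the worst case is mu - delta.
        mu = [0.10, 0.12, 0.07]
        delta = [0.04, 0.08, 0.02]
        worst = [-(m - d) for m, d in zip(mu, delta)]   # negate: linprog minimizes

        res = linprog(worst, A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],
                      bounds=[(0.0, 1.0)] * 3)
        # All weight goes to asset 1 (worst case 0.06), even though asset 2
        # has the best nominal return (0.12) but the largest uncertainty.
        print("robust weights:", res.x)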

  16. Robustness of quantum correlations against linear noise

    International Nuclear Information System (INIS)

    Guo, Zhihua; Cao, Huaixin; Qu, Shixian

    2016-01-01

    Relative robustness of quantum correlations (RRoQC) of a bipartite state is first introduced relative to a classically correlated state. Robustness of quantum correlations (RoQC) of a bipartite state is then defined as the minimum of RRoQC of the state relative to all classically correlated ones. It is proved that, as a function on quantum states, RoQC is nonnegative, lower semi-continuous and neither convex nor concave; in particular, it is zero if and only if the state is classically correlated. Thus, RoQC not only quantifies the endurance of quantum correlations of a state against linear noise, but can also be used to distinguish between quantum and classically correlated states. Furthermore, the effects of local quantum channels on the robustness are explored and characterized. (paper)
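
    The abstract does not reproduce the formulas, but the standard robustness construction (as used for robustness of entanglement) suggests the following shape for the two quantities. This is a hedged reconstruction for orientation only, not the paper's verbatim definitions. With CC the set of classically correlated states,

        % hedged reconstruction of the definitions (LaTeX)
        R(\rho \,\|\, \sigma) = \min\bigl\{ s \ge 0 : \tfrac{1}{1+s}(\rho + s\,\sigma) \in \mathcal{CC} \bigr\},
        \qquad
        \mathrm{RoQC}(\rho) = \min_{\sigma \in \mathcal{CC}} R(\rho \,\|\, \sigma),

    so that RoQC(ρ) = 0 exactly when ρ is itself classically correlated, matching the characterization stated above.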

  17. Parametric uncertainty modeling for robust control

    DEFF Research Database (Denmark)

    Rasmussen, K.H.; Jørgensen, Sten Bay

    1999-01-01

    The dynamic behaviour of a non-linear process can often be approximated with a time-varying linear model. In the presented methodology the dynamics is modeled non-conservatively as parametric uncertainty in linear time invariant models. The obtained uncertainty description makes it possible to perform robustness analysis on a control system using the structured singular value. The idea behind the proposed method is to fit a rational function to the parameter variation. The parameter variation can then be expressed as a linear fractional transformation (LFT). It is discussed how the proposed ... point changes. It is shown that a diagonal PI control structure provides robust performance towards variations in feed flow rate or feed concentrations. However, when both liquid and vapor flow delays are included, robust performance specifications cannot be satisfied with this simple diagonal control structure ...
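
    The normalization step behind the LFT description is standard: an interval parameter is split into a nominal value plus a unit-bounded perturbation, which can then be pulled out of the plant. A sketch of the general construction (not the paper's specific process model), in LaTeX:

        \theta(\delta) = \theta_0 + w\,\delta, \qquad |\delta| \le 1, \qquad
        \theta_0 = \tfrac{1}{2}(\theta_{\max} + \theta_{\min}), \quad
        w = \tfrac{1}{2}(\theta_{\max} - \theta_{\min}),

    after which the uncertain plant is written as an upper LFT, G(s, \theta) = F_u(M(s), \delta), and robust performance is checked via the structured singular value, \sup_\omega \mu_\Delta(M(j\omega)) < 1.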

  18. Incentive-Compatible Robust Line Planning

    Science.gov (United States)

    Bessas, Apostolos; Kontogiannis, Spyros; Zaroliagis, Christos

    The problem of robust line planning asks for a set of origin-destination paths (lines), along with their frequencies, in an underlying railway network infrastructure, that are robust to fluctuations of real-time parameters of the solution. In this work, we investigate a variant of robust line planning stemming from recent regulations in the railway sector that introduce competition and free railway markets, and set up a new application scenario: there is a (potentially large) number of line operators that have their lines fixed and operate as competing entities issuing frequency requests, while the management of the infrastructure itself remains the responsibility of a single entity, the network operator. The line operators are typically unwilling to reveal their true incentives, while the network operator strives to ensure a fair (or socially optimal) usage of the infrastructure, e.g., by maximizing the (unknown to him) aggregate incentives of the line operators.

  19. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  20. Tail Risk Constraints and Maximum Entropy

    Directory of Open Access Journals (Sweden)

    Donald Geman

    2015-06-01

    Full Text Available Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
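
    The mechanics of "maximize entropy subject to a left-tail budget" can be illustrated on a discretized return grid. A minimal sketch with scipy follows; the grid, mean target and tail budget are all illustrative, whereas the paper works with continuous densities and Value-at-Risk constraints.

        import numpy as np
        from scipy.optimize import minimize

        r = np.linspace(-0.5, 0.5, 101)        # discretized return grid
        tail = r <= -0.10                      # left-tail event

        def neg_entropy(p):
            p = np.clip(p, 1e-12, None)
            return np.sum(p * np.log(p))       # minimize => maximize entropy

        cons = [
            {"type": "eq",   "fun": lambda p: p.sum() - 1.0},          # probabilities
            {"type": "eq",   "fun": lambda p: p @ r - 0.05},           # mean target
            {"type": "ineq", "fun": lambda p: 0.01 - p[tail].sum()},   # tail budget
        ]
        sol = minimize(neg_entropy, np.full(r.size, 1.0 / r.size),
                       constraints=cons, bounds=[(0.0, 1.0)] * r.size,
                       method="SLSQP")
        print("tail mass:", sol.x[tail].sum(), "mean:", sol.x @ r)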