WorldWideScience

Sample records for robust targeted maximum

  1. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
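The worst-case objective described in this record can be illustrated with a toy sketch (this is not the authors' continuous maximum entropy algorithm): for a long-only portfolio whose asset returns are known only to lie in intervals, the worst case is attained at the interval lower endpoints, and adding an entropy regularizer to the worst-case return yields a closed-form softmax allocation. The return intervals below are hypothetical.

```python
import numpy as np

# Hypothetical worst-case (lower-endpoint) returns of four assets.
lower = np.array([0.01, 0.03, -0.02, 0.02])

def robust_entropy_weights(lower, beta=50.0):
    """Maximize  w . lower + H(w)/beta  over the probability simplex.

    For long-only weights, the worst case over interval returns depends
    only on the lower endpoints; the entropy term H(w) spreads the
    allocation, and the maximizer is a softmax of beta * lower.
    """
    z = beta * lower
    z -= z.max()                 # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

w = robust_entropy_weights(lower)
```

As beta grows the allocation concentrates on the asset with the best worst-case return; a small beta spreads weight more evenly, which is the usual robustness/diversification trade-off.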

  2. On-orbit real-time robust cooperative target identification in complex background

    Directory of Open Access Journals (Sweden)

    Wen Zhuoman

    2015-10-01

Full Text Available Cooperative target identification is the prerequisite for relative position and orientation measurement between a space robot arm and the object to be arrested. We propose an on-orbit real-time robust algorithm for cooperative target identification in complex backgrounds using the features of circles and lines. It first extracts only the edges of interest in the target image using an adaptive threshold and refines them to roughly single-pixel width with improved non-maximum suppression. Adopting a novel tracking approach, edge segments changing smoothly in the tangential direction are obtained. With a small amount of calculation, large numbers of invalid edges are removed. From the few remaining edges, valid circular arcs are extracted and reassembled into circles according to a reliable criterion. Finally, the target is identified if there are certain numbers of straight lines whose relative positions with respect to the circle match the known target pattern. Experiments demonstrate that the proposed algorithm accurately identifies the cooperative target within the range of 0.3–1.5 m in complex backgrounds at a speed of 8 frames per second, regardless of lighting conditions and target attitude. The proposed algorithm is well suited to real-time visual measurement for a space robot arm because of its robustness and small memory requirement.

  3. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
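The TMLE recipe summarized above (initial outcome model, propensity model, clever covariate, fluctuation, targeted update) can be sketched in plain NumPy with parametric logistic working models. This is a simplified illustration of the mechanics, not the article's R/Stata implementation, which uses flexible machine-learning fits:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = expit(X1 @ beta)
        H = X1.T @ (X1 * (p * (1 - p))[:, None]) + 1e-8 * np.eye(len(beta))
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return beta

def predict(beta, X):
    return expit(np.column_stack([np.ones(X.shape[0]), X]) @ beta)

def tmle_ate(W, A, Y):
    # 1) initial outcome model Q(A, W)
    bQ = fit_logistic(np.column_stack([A, W]), Y)
    Q1 = predict(bQ, np.column_stack([np.ones_like(A), W]))   # set A = 1
    Q0 = predict(bQ, np.column_stack([np.zeros_like(A), W]))  # set A = 0
    QA = np.where(A == 1, Q1, Q0)
    # 2) propensity model g(W) = P(A=1 | W), bounded away from 0 and 1
    g = np.clip(predict(fit_logistic(W, A), W), 0.025, 0.975)
    # 3) clever covariate
    H1, H0 = 1.0 / g, -1.0 / (1.0 - g)
    HA = np.where(A == 1, H1, H0)
    # 4) fluctuation: one-parameter logistic regression with offset logit(QA)
    off, eps = logit(QA), 0.0
    for _ in range(50):
        p = expit(off + eps * HA)
        eps += np.sum(HA * (Y - p)) / np.sum(HA**2 * p * (1 - p))
    # 5) targeted update and plug-in ATE
    return np.mean(expit(logit(Q1) + eps * H1) - expit(logit(Q0) + eps * H0))
```

In practice the two working models would be replaced by Super Learner or other machine-learning fits, exactly as the tutorial recommends; the clipping of g guards against near-positivity violations.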

  4. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
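For readers unfamiliar with the statistic, a minimal sketch of a LOD score in the simplest setting (phase-known meioses, a single recombination fraction θ as the genetic parameter) maximized over a grid looks like this; the paper's setting, with penetrance and phenocopy parameters, is richer:

```python
import numpy as np

def lod(theta, r, n):
    """LOD score for r recombinants among n informative, phase-known
    meioses: log10 of the likelihood ratio against theta = 0.5."""
    return (r * np.log10(theta) + (n - r) * np.log10(1 - theta)
            - n * np.log10(0.5))

def max_lod(r, n):
    """Maximize the LOD score over the admissible range 0 < theta <= 0.5."""
    grid = np.linspace(0.001, 0.5, 500)
    scores = lod(grid, r, n)
    i = np.argmax(scores)
    return grid[i], scores[i]
```

With r = 2 recombinants in n = 20 meioses the maximizing θ is r/n = 0.1 and the maximum LOD is about 3.2, above the classical threshold of 3 for declaring linkage.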

  5. Robust optimum design with maximum entropy method; Saidai entropy ho mochiita robust sei saitekika sekkeiho

    Energy Technology Data Exchange (ETDEWEB)

    Kawaguchi, K; Egashira, Y; Watanabe, G [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

Vehicle and unit performance changes according not only to external causes such as the environment (temperature or weather), but also to internal causes such as dispersion of component characteristics, manufacturing processes, and age-related deterioration. We developed a design method to estimate such performance distributions with the maximum entropy method and to calculate specifications with high performance robustness using fuzzy theory. This paper describes the details of these methods and examples applied to a power window system. 3 refs., 7 figs., 4 tabs.

  6. Robust H∞ Control for Spacecraft Rendezvous with a Noncooperative Target

    Directory of Open Access Journals (Sweden)

    Shu-Nan Wu

    2013-01-01

Full Text Available The robust H∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of the chaser and the noncooperative target is first modeled as an uncertain system containing uncertain orbit parameters and mass. Then the H∞ performance and finite-time performance are proposed, and a robust H∞ controller is developed to drive the chaser to rendezvous with the noncooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technique is used to derive a sufficient condition for the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller.

  7. Robust Controller to Extract the Maximum Power of a Photovoltaic System

    Directory of Open Access Journals (Sweden)

    OULD CHERCHALI Noureddine

    2014-05-01

Full Text Available This paper proposes an intelligent control technique to track the maximum power point (MPP) of a photovoltaic system. The PV system is nonlinear and is exposed to external perturbations such as temperature and solar irradiation. Fuzzy logic control (FLC) is known for its stability and robustness, and it is adopted in this work to improve and optimize the control performance of the photovoltaic system. Another technique, called perturb and observe (P&O), is studied and compared with the FLC technique. The PV system consists of a photovoltaic panel (PV), a DC-DC boost converter, and a battery as a load. The simulations are developed in MATLAB/Simulink. The results show that the fuzzy logic controller is better and faster than the conventional perturb and observe (P&O) controller and extracts good maximum power from a photovoltaic generator under different weather conditions.
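The P&O baseline compared against in this record can be sketched in a few lines; the PV power curve below is a crude hypothetical model, not the panel studied in the paper:

```python
def pv_power(v):
    """Hypothetical PV power curve with a single maximum near 16 V:
    a crude I-V model with 5 A short-circuit current, 21 V open circuit."""
    i = 5.0 * (1.0 - (v / 21.0) ** 8)
    return v * max(i, 0.0)

def perturb_and_observe(v0=10.0, step=0.2, iters=200):
    """Classic P&O: keep perturbing the operating voltage in the same
    direction while power rises, reverse when it falls."""
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

The loop climbs the power curve and then oscillates around the maximum power point; that steady-state oscillation is P&O's characteristic behaviour and one reason FLC-based trackers can outperform it under changing conditions.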

  8. Robust H(∞) control for spacecraft rendezvous with a noncooperative target.

    Science.gov (United States)

    Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang

    2013-01-01

The robust H(∞) control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of the chaser and the noncooperative target is first modeled as an uncertain system containing uncertain orbit parameters and mass. Then the H(∞) performance and finite-time performance are proposed, and a robust H(∞) controller is developed to drive the chaser to rendezvous with the noncooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technique is used to derive a sufficient condition for the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller.

  9. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α′X and β′Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
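A crude way to see this definition at work (not the authors' projection-pursuit estimators): search random unit directions α and β and keep the largest absolute Pearson correlation between the projections, which approximates the first canonical correlation:

```python
import numpy as np

def pearson(u, v):
    u = u - u.mean()
    v = v - v.mean()
    return float(u @ v / np.sqrt((u @ u) * (v @ v)))

def max_association(X, Y, n_dir=2000, seed=0):
    """Approximate max over unit directions a, b of |corr(X a, Y b)|
    by random search; with Pearson correlation as the projection index
    this approaches the first canonical correlation."""
    rng = np.random.default_rng(seed)
    best = -1.0
    for _ in range(n_dir):
        a = rng.normal(size=X.shape[1]); a /= np.linalg.norm(a)
        b = rng.normal(size=Y.shape[1]); b /= np.linalg.norm(b)
        best = max(best, abs(pearson(X @ a, Y @ b)))
    return best
```

The robust estimators in the paper replace the Pearson index by robust association measures and use dedicated projection-pursuit algorithms rather than this naive random search.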

  10. Robustness studies of ignition targets for the National Ignition Facility in two dimensions

    International Nuclear Information System (INIS)

    Clark, Daniel S.; Haan, Steven W.; Salmonson, Jay D.

    2008-01-01

Inertial confinement fusion capsules are critically dependent on the integrity of their hot spots to ignite. At the time of ignition, only a certain fractional perturbation of the nominally spherical hot spot boundary can be tolerated if the capsule is still to achieve ignition. The degree to which the expected hot spot perturbation in any given capsule design is less than this maximum tolerable perturbation is a measure of the ignition margin or robustness of that design. Moreover, since there will inevitably be uncertainties in the initial character and implosion dynamics of any given capsule, all of which can contribute to the eventual hot spot perturbation, quantifying the robustness of that capsule against a range of parameter variations is an important consideration in the capsule design. Here, the robustness of the 300 eV indirect-drive target design for the National Ignition Facility [Lindl et al., Phys. Plasmas 11, 339 (2004)] is studied in the parameter space of inner ice roughness, implosion velocity, and capsule scale. A suite of 2000 two-dimensional simulations, run with the radiation hydrodynamics code LASNEX, is used as the database for the study. For each scale, an ignition region in the two remaining variables is identified and the ignition cliff is mapped. In accordance with the theoretical arguments of Levedahl and Lindl [Nucl. Fusion 37, 165 (1997)] and Kishony and Shvarts [Phys. Plasmas 8, 4925 (2001)], the location of this cliff is fitted to a power law of the capsule implosion velocity and scale. It is found that the cliff can be quite well represented in this power-law form, and, using this scaling law, an assessment of the overall (one- and two-dimensional) ignition margin of the design can be made. The effect on the ignition margin of an increase or decrease in the density of the target fill gas is also assessed.

  11. Robust H ∞ Control for Spacecraft Rendezvous with a Noncooperative Target

    Science.gov (United States)

    Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang

    2013-01-01

The robust H∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of the chaser and the noncooperative target is first modeled as an uncertain system containing uncertain orbit parameters and mass. Then the H∞ performance and finite-time performance are proposed, and a robust H∞ controller is developed to drive the chaser to rendezvous with the noncooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technique is used to derive a sufficient condition for the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller. PMID:24027446

  12. Confidence from uncertainty - A multi-target drug screening method from robust control theory

    Directory of Open Access Journals (Sweden)

    Petzold Linda R

    2010-11-01

Full Text Available Abstract Background Robustness is a recognized feature of biological systems that evolved as a defence against environmental variability. Complex diseases such as diabetes, cancer, and bacterial and viral infections exploit the same mechanisms that allow for robust behaviour in healthy conditions to ensure their own continuance. Single-drug therapies, while generally potent regulators of their specific protein/gene targets, often fail to counter the robustness of the disease in question. Multi-drug therapies offer a powerful means to restore disrupted biological networks, by targeting the subsystem of interest while preventing the diseased network from reconciling through available, redundant mechanisms. Modelling techniques are needed to manage the high number of combinatorial possibilities arising in multi-drug therapeutic design, and to identify synergistic targets that are robust to system uncertainty. Results We present the application of a method from robust control theory, Structured Singular Value (μ) analysis, to identify highly effective multi-drug therapies by using robustness in the face of uncertainty as a new means of target discrimination. We illustrate the method by means of a case study of a negative feedback network motif subject to parametric uncertainty. Conclusions The paper contributes to the development of effective methods for drug screening in the context of network modelling affected by parametric uncertainty. The results have wide applicability for the analysis of different sources of uncertainty such as noise in the data, neglected dynamics, or intrinsic biological variability.

  13. Robustness of Dengue Complex Network under Targeted versus Random Attack

    Directory of Open Access Journals (Sweden)

    Hafiz Abid Mahmood Malik

    2017-01-01

Full Text Available Dengue virus infection is one of those epidemic diseases that require much consideration in order to save humankind from its unsafe impacts. According to the World Health Organization (WHO), 3.6 billion individuals are at risk from dengue virus sickness. Researchers are striving to comprehend the dengue threat, and this study is a small contribution to those endeavors. To observe the robustness of the dengue network, we removed links between nodes both randomly and in a targeted manner by utilizing different centrality measures. The outcomes demonstrated that a 5% targeted attack is equivalent to a 65% random attack, which shows that the topology of this complex network is that of a scale-free network rather than a random network. Four centrality measures (Degree, Closeness, Betweenness, and Eigenvector) have been calculated to identify central hubs. It has been observed through the results of this study that the robustness of nodes and links depends on the topology of the network. The dengue epidemic network presented robust behaviour under random attack, and turned out to be more vulnerable when hubs of higher degree have a higher probability of failure. Moreover, a representation of this network has been projected, and the impact of hub removal has been shown on the real map of Gombak (Malaysia).
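The targeted-versus-random comparison in this record can be reproduced in miniature on a synthetic scale-free graph; the generator and attack sizes below are illustrative, not the dengue network data:

```python
import random

def ba_graph(n, m, seed=7):
    """Grow a Barabasi-Albert-style graph by preferential attachment,
    returned as a dict of adjacency sets."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))            # node ids, weighted by degree
    for v in range(m, n):
        targets = set()
        while len(targets) < m:          # m distinct degree-biased picks
            targets.add(rng.choice(repeated))
        for t in targets:
            adj[v].add(t); adj[t].add(v)
            repeated += [v, t]
    return adj

def giant_fraction(adj, removed):
    """Fraction of all nodes in the largest surviving component."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop(); comp += 1
            for w in adj[u]:
                if w in alive and w not in seen:
                    seen.add(w); stack.append(w)
        best = max(best, comp)
    return best / len(adj)

def attack(adj, frac, targeted, seed=7):
    """Remove a fraction of nodes, either highest-degree first or at random."""
    rng = random.Random(seed)
    k = int(frac * len(adj))
    if targeted:
        order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    else:
        order = list(adj)
        rng.shuffle(order)
    return giant_fraction(adj, set(order[:k]))
```

On such a graph, removing a small fraction of the highest-degree hubs shatters the giant component far more than removing the same fraction at random, mirroring the scale-free behaviour reported for the dengue network.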

  14. Comparing photon and proton-based hypofractioned SBRT for prostate cancer accounting for robustness and realistic treatment deliverability.

    Science.gov (United States)

    Goddard, Lee C; Brodin, N Patrik; Bodner, William R; Garg, Madhur K; Tomé, Wolfgang A

    2018-05-01

To investigate whether photon or proton-based stereotactic body radiation therapy (SBRT) is the preferred modality for high-dose hypofractionated prostate cancer treatment, achievable dose distributions were compared when uncertainties in target positioning and range uncertainties were appropriately accounted for. Ten patients with prostate cancer previously treated at our institution (Montefiore Medical Center) with photon SBRT using volumetric modulated arc therapy (VMAT) were identified. MRI images fused to the treatment planning CT allowed for accurate target and organ-at-risk (OAR) delineation. The clinical target volume was defined as the prostate gland plus the proximal seminal vesicles. Critical OARs include the bladder wall, bowel, femoral heads, neurovascular bundle, penile bulb, rectal wall, urethra, and urogenital diaphragm. Photon plan robustness was evaluated by simulating 2 mm isotropic setup variations. Comparative proton SBRT plans employing intensity modulated proton therapy (IMPT) were generated using robust optimization. Plan robustness was evaluated by simulating 2 mm setup variations and 3% or 1% Hounsfield unit (HU) calibration uncertainties. Comparable maximum OAR doses are achievable between photon and proton SBRT; however, robust optimization results in higher maximum doses for proton SBRT. Rectal maximum doses are significantly higher for robust proton SBRT with 1% HU uncertainty compared to photon SBRT (p = 0.03), whereas maximum doses were comparable for bladder wall (p = 0.43), urethra (p = 0.82), and urogenital diaphragm (p = 0.50). Mean doses to bladder and rectal wall are lower for proton SBRT, but higher for neurovascular bundle, urethra, and urogenital diaphragm due to increased lateral scatter. Similar target conformality is achieved, albeit with slightly larger treated-volume ratios for proton SBRT, >1.4 compared to 1.2 for photon SBRT. Similar treatment plans can be generated with IMPT compared to VMAT in terms of

  15. Maximum entropy restoration of laser fusion target x-ray photographs

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.

    1976-01-01

Maximum entropy principles were used to analyze the microdensitometer traces of a laser-fusion target photograph. The object is a glowing laser-fusion target microsphere 0.95 cm from a pinhole of radius 2 × 10^-4 cm; the image is 7.2 cm from the pinhole, and the photon wavelength is likely to be 6.2 × 10^-8 cm. Some computational aspects of the problem are also considered.

  16. Optimum design of exploding pusher target to produce maximum neutrons

    International Nuclear Information System (INIS)

    Kitagawa, Y.; Miyanaga, N.; Kato, Y.; Nakatsuka, M.; Nishiguchi, A.; Yabe, T.; Yamanaka, C.

    1985-03-01

Exploding pusher target experiments have been conducted with the 1.052-μm GEKKO MII two-beam glass laser system to design an optimum target, one which couples to the incident laser light most effectively to produce the maximum neutron yield. Since hot electrons preheat the shell entirely in spite of strongly nonuniform irradiation, a simple model can be used to design the optimum target, in which the shell/fuel interface is accelerated to 0.5–0.7 times the initial radius within the laser pulse. A two-dimensional computer simulation supports this target design. The scaling of the neutron yield N with the laser power P is N ∝ P^(2.4±0.4). (author)

  17. Robust Deep Network with Maximum Correntropy Criterion for Seizure Detection

    Directory of Open Access Journals (Sweden)

    Yu Qi

    2014-01-01

Full Text Available Effective seizure detection from long-term EEG is highly important for seizure diagnosis. Existing methods usually design the feature and classifier individually, while little work has been done on the simultaneous optimization of the two parts. This work proposes a deep network to jointly learn a feature and a classifier so that they can help each other to make the whole system optimal. To deal with the challenge of impulsive noise and outliers caused by EMG artifacts in EEG signals, we formulate a robust stacked autoencoder (R-SAE) as part of the network to learn an effective feature. In the R-SAE, the maximum correntropy criterion (MCC) is used to reduce the effect of noise/outliers. Unlike the mean square error (MSE), the output of the kernel-based MCC increases more slowly than that of the MSE as the input moves away from the center. Thus, the effect of noises/outliers positioned far away from the center can be suppressed. The proposed method is evaluated on 33.6 hours of scalp EEG data from six patients. Our method achieves a sensitivity of 100% and a specificity of 99%, which is promising for clinical applications.
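The robustness property of MCC described above can be demonstrated on a toy location-estimation problem (this sketch is not the paper's R-SAE): maximizing correntropy with a Gaussian kernel via a fixed-point iteration down-weights samples far from the current estimate, so gross outliers barely move it, unlike the MSE solution (the mean):

```python
import numpy as np

def correntropy_location(x, sigma=1.0, iters=100):
    """Estimate a location mu by maximizing the mean Gaussian-kernel
    correntropy between the samples x and the constant mu, using a
    fixed-point (half-quadratic) iteration."""
    mu = np.median(x)                    # robust starting point
    for _ in range(iters):
        # samples far from mu receive exponentially small weights
        w = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

On data with a cluster of extreme outliers, the sample mean (the MSE minimizer) is dragged toward them while the correntropy estimate stays near the true center, which is exactly the behaviour the R-SAE exploits against EMG artifacts.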

  18. Robust Small Target Co-Detection from Airborne Infrared Image Sequences.

    Science.gov (United States)

    Gao, Jingli; Wen, Chenglin; Liu, Meiqin

    2017-09-29

In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which suppresses image backgrounds and enhances small targets better than singular-value weights. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates a structural prior in terms of local weighted entropy; thus, it can extract real targets accurately and suppress background clutter efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false-alarm suppression and target confirmation. Since real targets appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association of consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame and tracklet association of the location correlation maps for successive frames have a strong ability to discriminate between small targets and background clutter. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutter effectively, but also detect targets accurately even in the presence of target-like interference.

  19. Robust maximum power point tracker using sliding mode controller for the three-phase grid-connected photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Il-Song [LG Chem. Ltd./Research park, Mobile Energy R and D, 104-1 Moonji-Dong, Yuseong-Gu, Daejeon 305-380 (Korea)

    2007-03-15

A robust maximum power point tracker (MPPT) using a sliding mode controller for a three-phase grid-connected photovoltaic system is proposed in this paper. In contrast to previous controllers, the proposed system consists of an MPPT controller and a current controller for tight regulation of the current. The proposed MPPT controller generates the current reference directly from the solar array power information, and the current controller uses integral sliding mode for tight control of the current. The proposed system can prevent current overshoot and provides an optimal design for the system components. The structure of the proposed system is simple, and it shows robust tracking against modeling uncertainties and parameter variations. Mathematical modeling is developed and experimental results verify the validity of the proposed controller. (author)

  20. Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.

    Science.gov (United States)

    Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok

    2018-05-08

This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, the target location is estimated by fusing multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by a stack-based data association method over multiple-frame measurements in order to estimate the target trajectory more accurately. The effectiveness of the proposed method is verified through simulations.
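As a simplified illustration of the TDOA localization step only (a single least-squares grid fit, rather than the paper's GMM fusion and track management; the geometry and sound speed are assumed values):

```python
import numpy as np

C = 1500.0   # assumed underwater speed of sound, m/s

def tdoa_measurements(target, sensors, c=C):
    """TDOA of each sensor relative to sensor 0 for a known target."""
    d = np.linalg.norm(sensors - target, axis=1)
    return (d[1:] - d[0]) / c

def tdoa_locate(sensors, tdoas, span=1000.0, step=5.0, c=C):
    """Brute-force grid search for the position whose predicted TDOAs
    best match the measurements in the least-squares sense."""
    best, best_p = np.inf, None
    for x in np.arange(0.0, span + step, step):
        for y in np.arange(0.0, span + step, step):
            p = np.array([x, y])
            d = np.linalg.norm(sensors - p, axis=1)
            r = np.sum(((d[1:] - d[0]) / c - tdoas) ** 2)
            if r < best:
                best, best_p = r, p
    return best_p
```

With noisy or missing TDOAs this single least-squares fit degrades quickly, which is the gap the paper's GMM-based fusion and track splitting are designed to close.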

  1. Robust Detection of Moving Human Target in Foliage-Penetration Environment Based on Hough Transform

    Directory of Open Access Journals (Sweden)

    P. Lei

    2014-04-01

Full Text Available Attention has been focused on robust moving human target detection in foliage-penetration environments, which presents a formidable task for a radar system because foliage is a rich scattering environment with complex multipath propagation and time-varying clutter. Generally, multiple-bounce returns and clutter are superposed on the direct-scatter echoes. They obscure the true target echo and lead to a time-range image of poor visual quality, making target detection particularly difficult. Consequently, an innovative approach is proposed to suppress clutter and mitigate multipath effects. In particular, a clutter suppression technique based on range alignment is first applied to suppress the time-varying clutter and the unstable antenna coupling. Then an entropy-weighted coherent integration (EWCI) algorithm is adopted to mitigate the multipath effects. In consequence, the proposed method reduces the clutter and ghosting artifacts considerably. Based on the resulting high-quality image, the target trajectory is detected robustly and the radial velocity is estimated accurately with the Hough transform (HT). Experimental results on real data are provided to verify the proposed method.
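The Hough transform voting step used here for trajectory detection can be sketched on synthetic points (illustrative only, not the paper's radar data): each point votes for every (θ, ρ) line it could lie on, and the strongest accumulator cell gives the line parameters:

```python
import numpy as np

def hough_lines(points, img_size, n_theta=180, n_rho=200):
    """Vote in (theta, rho) space for x*cos(theta) + y*sin(theta) = rho;
    return the parameters of the strongest line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*img_size)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
        idx = np.clip(np.searchsorted(rhos, r), 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], rhos[j]
```

In the time-range image, a target moving at constant radial velocity traces a straight line, so the slope of the detected line directly yields the velocity estimate.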

  2. Shock ignition targets: gain and robustness vs ignition threshold factor

    Science.gov (United States)

    Atzeni, Stefano; Antonelli, Luca; Schiavi, Angelo; Picone, Silvia; Volponi, Gian Marco; Marocchino, Alberto

    2017-10-01

Shock ignition is a laser direct-drive inertial confinement fusion scheme in which the stages of compression and hot-spot formation are partly separated. The hot spot is created at the end of the implosion by a converging shock driven by a final "spike" of the laser pulse. Several shock-ignition target concepts have been proposed and relevant gain curves computed (see, e.g.). Here, we consider both pure-DT targets and more facility-relevant targets with a plastic ablator. The investigation is conducted with 1D and 2D hydrodynamic simulations. We determine ignition threshold factors (ITFs), and their dependence on laser pulse parameters, by means of 1D simulations. 2D simulations indicate that robustness to long-scale perturbations increases with ITF. Gain curves (gain vs. laser energy) for different ITFs are generated using 1D simulations. Work partially supported by Sapienza Project C26A15YTMA, Sapienza 2016 (n. 257584), Eurofusion Project AWP17-ENR-IFE-CEA-01.

  3. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm to data inconsistencies. Three different classes of errors, typical for clinical application, are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels, and projection matrix errors. To quantify those errors we apply error measures such as mean square error, signal-to-noise ratio, contrast-to-noise ratio, and a streak indicator. These measures are derived from linear signal theory and are generalized and applied to nonlinear signal reconstruction. For quality checks, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made against the clinical standard, the filtered backprojection (FBP) algorithm. In our results, we confirm and substantially extend previous results on iterative reconstruction, such as massive undersampling of the number of projections. Projection matrix errors of up to 1° deviation in projection angle are still within the tolerance level. Single defect pixels produce ring artifacts for each method; however, using defect pixel compensation allows up to 40% of defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low-photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose imaging, especially for daily patient localization in radiation therapy, is possible without changing the current hardware of the imaging system. (paper)

  4. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario with a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.
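When the target position (and hence the radar look directions) is known, the ML velocity estimate under i.i.d. Gaussian measurement noise reduces to a linear least-squares fit of the velocity vector from its projections onto those directions. The sketch below shows this reduction for a 2-D geometry with assumed directions; it is not the paper's full bistatic signal model:

```python
import numpy as np

def estimate_velocity(unit_dirs, radial_speeds):
    """Least-squares (= ML under i.i.d. Gaussian noise) estimate of the
    2-D velocity vector v from measurements of its projections
    u_i . v onto known unit look directions u_i."""
    U = np.asarray(unit_dirs)
    return np.linalg.lstsq(U, np.asarray(radial_speeds), rcond=None)[0]
```

Adding more antennas adds rows to U, which tightens the normal equations and lowers the CRLB, consistent with the simulation findings summarized above.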

  5. Influence of Different Coupling Modes on the Robustness of Smart Grid under Targeted Attack

    Directory of Open Access Journals (Sweden)

    WenJie Kang

    2018-05-01

Full Text Available Many previous works have focused only on the cascading failure of the global coupling of one-to-one structures in interdependent networks; the local coupling of dual coupling structures has rarely been studied due to its complex structure. As a consequence, many conclusions drawn for the one-to-one structure may be incorrect in the dual coupling network and do not apply to the smart grid. It is therefore necessary to subdivide the dual coupling link into a top-down coupling link and a bottom-up coupling link in order to study their influence on network robustness in combination with different coupling modes. Additionally, the power flow of the power grid can cause the load of a failed node to be allocated to its neighboring nodes and trigger a new round of load distribution when the load of these nodes exceeds their capacity. This means that the robustness of smart grids may be affected by four factors, i.e., load redistribution, local coupling, the dual coupling link, and the coupling mode; however, research on the influence of these factors on network robustness is missing. In this paper, firstly, we construct the smart grid as a two-layer network with a dual coupling link and divide the power grid and communication network into many subnets based on the geographical location of their nodes. Secondly, we define node importance (NI) as an evaluation index to assess the impact of nodes on the cyber or physical network and propose three types of coupling modes based on the NI of nodes in the cyber and physical subnets, i.e., Assortative Coupling in Subnets (ACIS), Disassortative Coupling in Subnets (DCIS), and Random Coupling in Subnets (RCIS). Thirdly, a cascading failure model is proposed for studying the effect of the local coupling of the dual coupling link, in combination with ACIS, DCIS, and RCIS, on the robustness of the smart grid against a targeted attack, and the survival rate of functional nodes is used to assess the robustness of

  6. Influence of Different Coupling Modes on the Robustness of Smart Grid under Targeted Attack.

    Science.gov (United States)

    Kang, WenJie; Hu, Gang; Zhu, PeiDong; Liu, Qiang; Hang, Zhi; Liu, Xin

    2018-05-24

    Many previous works only focused on the cascading failure of global coupling of one-to-one structures in interdependent networks, but the local coupling of dual coupling structures has rarely been studied due to its complex structure. A serious consequence is that many conclusions drawn for the one-to-one structure may be incorrect in the dual coupling network and do not apply to the smart grid. It is therefore necessary to subdivide the dual coupling link into a top-down coupling link and a bottom-up coupling link in order to study their influence on network robustness in combination with different coupling modes. Additionally, the power flow of the power grid can cause the load of a failed node to be allocated to its neighboring nodes and trigger a new round of load distribution when the load of these nodes exceeds their capacity. This means that the robustness of smart grids may be affected by four factors, i.e., load redistribution, local coupling, the dual coupling link and the coupling mode; however, research on the influence of these factors on network robustness is missing. In this paper, firstly, we construct the smart grid as a two-layer network with a dual coupling link and divide the power grid and communication network into many subnets based on the geographical location of their nodes. Secondly, we define node importance (NI) as an evaluation index to assess the impact of nodes on the cyber or physical network and propose three types of coupling modes based on the NI of nodes in the cyber and physical subnets, i.e., Assortative Coupling in Subnets (ACIS), Disassortative Coupling in Subnets (DCIS), and Random Coupling in Subnets (RCIS). 
Thirdly, a cascading failure model is proposed for studying the effect of local coupling of dual coupling link in combination with ACIS, DCIS, and RCIS on the robustness of the smart grid against a targeted attack, and the survival rate of functional nodes is used to assess the robustness of the smart grid
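
The three coupling modes amount to different pairings of cyber and physical nodes ranked by NI. A minimal sketch (hypothetical NI values; the paper's cascading-failure and load-redistribution dynamics are not modeled here):

```python
import random

def coupling(cyber_ni, phys_ni, mode, seed=0):
    """Pair cyber and physical nodes within a subnet by node importance (NI).

    ACIS: high-NI cyber <-> high-NI physical (assortative)
    DCIS: high-NI cyber <-> low-NI physical (disassortative)
    RCIS: random pairing
    """
    cyber = sorted(cyber_ni, key=cyber_ni.get, reverse=True)
    phys = sorted(phys_ni, key=phys_ni.get, reverse=True)
    if mode == "DCIS":
        phys = phys[::-1]
    elif mode == "RCIS":
        rng = random.Random(seed)
        rng.shuffle(phys)
    elif mode != "ACIS":
        raise ValueError(mode)
    return list(zip(cyber, phys))

cni = {"c1": 3.0, "c2": 1.0, "c3": 2.0}   # hypothetical cyber-subnet NI
pni = {"p1": 5.0, "p2": 9.0, "p3": 7.0}   # hypothetical physical-subnet NI
```

Survival rates after a targeted attack would then be compared across the three pairings.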

  7. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  8. Robust cell tracking in epithelial tissues through identification of maximum common subgraphs.

    Science.gov (United States)

    Kursawe, Jochen; Bardenet, Rémi; Zartman, Jeremiah J; Baker, Ruth E; Fletcher, Alexander G

    2016-11-01

    Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a 'maximum common subgraph' to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell-cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. © 2016 The Authors.
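
The core matching step can be illustrated on toy graphs: find the largest cell-to-cell mapping between two frames that preserves neighbour relations. A brute-force sketch (exponential, unlike the paper's sub-quadratic algorithm, and suitable only for tiny examples):

```python
from itertools import combinations, permutations

def track_cells(g1, g2):
    """Brute-force maximum common (induced) subgraph between two frames.

    g1, g2: cell adjacency as {cell_id: set(neighbour_ids)}. Returns the
    largest mapping frame1 -> frame2 that preserves neighbour relations.
    """
    n1, n2 = sorted(g1), sorted(g2)
    for k in range(min(len(n1), len(n2)), 0, -1):
        for sub1 in combinations(n1, k):
            for sub2 in combinations(n2, k):
                for perm in permutations(sub2):
                    m = dict(zip(sub1, perm))
                    ok = all((b in g1[a]) == (m[b] in g2[m[a]])
                             for a, b in combinations(sub1, 2))
                    if ok:
                        return m
    return {}

g1 = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}   # path A-B-C
g2 = {"x": {"y"}, "y": {"x"}}                     # cell C left the field of view
m = track_cells(g1, g2)
```

Because only adjacency is used, the matching tolerates large cell movements between frames, which is the point of the graph-theoretic formulation.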

  9. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    International Nuclear Information System (INIS)

    Prix, Reinhard; Krishnan, Badri

    2009-01-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.
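
The F-versus-B distinction can be illustrated in a toy one-amplitude model: the F-like statistic maximizes the likelihood ratio over amplitude, while the B-like statistic marginalizes it over a prior. A sketch (a single amplitude parameter and a flat grid prior stand in for the paper's four amplitude parameters and isotropic prior):

```python
import math

def loglike(y, s, a, sigma):
    """Gaussian log-likelihood of data y for template s with amplitude a."""
    return sum(-(yi - a * si) ** 2 / (2 * sigma ** 2) for yi, si in zip(y, s))

def f_stat(y, s, sigma):
    """Maximized log-likelihood ratio (frequentist, F-statistic-like)."""
    a_hat = sum(yi * si for yi, si in zip(y, s)) / sum(si * si for si in s)
    return loglike(y, s, a_hat, sigma) - loglike(y, s, 0.0, sigma)

def b_stat(y, s, sigma, a_grid):
    """Log Bayes factor: likelihood ratio marginalized over a flat prior."""
    ratios = [math.exp(loglike(y, s, a, sigma) - loglike(y, s, 0.0, sigma))
              for a in a_grid]
    return math.log(sum(ratios) / len(ratios))

s = [math.cos(0.3 * i) for i in range(20)]   # toy phase-known template
y = [0.5 * si for si in s]                   # noiseless injected signal
f = f_stat(y, s, 1.0)
b = b_stat(y, s, 1.0, [0.0, 0.25, 0.5, 0.75, 1.0])
```

By construction the marginalized statistic never exceeds the maximized one; the paper's Monte Carlo comparison concerns detection power, which depends on how each statistic distributes under noise.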

  10. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
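
The Poisson ML iteration underlying such restoration is the classical Richardson-Lucy update. A minimal 1-D sketch (known point spread function and noiseless toy data; the paper's multi-frame, blind-deconvolution version is much richer):

```python
def conv_same(x, h):
    """Zero-padded 'same' correlation with an odd-length kernel h."""
    r = len(h) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, hj in enumerate(h):
            k = i + j - r
            if 0 <= k < len(x):
                s += hj * x[k]
        out.append(s)
    return out

def richardson_lucy(y, h, iters=500):
    """Richardson-Lucy: iterative Poisson maximum-likelihood deconvolution."""
    x = [1.0] * len(y)                     # flat positive initial estimate
    h_flip = h[::-1]                       # adjoint of the blur operator
    for _ in range(iters):
        est = conv_same(x, h)
        ratio = [yi / max(ei, 1e-12) for yi, ei in zip(y, est)]
        corr = conv_same(ratio, h_flip)
        x = [xi * ci for xi, ci in zip(x, corr)]
    return x

psf = [0.25, 0.5, 0.25]                    # toy point spread function
truth = [0.0] * 9
truth[4] = 10.0                            # single point source
blurred = conv_same(truth, psf)            # noiseless observation
restored = richardson_lucy(blurred, psf)
```

Each iteration multiplies the estimate by a back-projected data/model ratio, which preserves positivity and monotonically increases the Poisson likelihood.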

  11. Robust Automatic Target Recognition via HRRP Sequence Based on Scatterer Matching

    Directory of Open Access Journals (Sweden)

    Yuan Jiang

    2018-02-01

    Full Text Available High resolution range profile (HRRP) plays an important role in wideband radar automatic target recognition (ATR). In order to alleviate the sensitivity to clutter and target aspect, employing a sequence of HRRPs is a promising approach to enhance the ATR performance. In this paper, a novel HRRP sequence-matching method based on singular value decomposition (SVD) is proposed. First, the HRRP sequence is decoupled into the angle space and the range space via SVD, which correspond to the span of the left and the right singular vectors, respectively. Second, atomic norm minimization (ANM) is utilized to estimate dominant scatterers in the range space and the Hausdorff distance is employed to measure the scatterer similarity between the test and training data. Next, the angle space similarity between the test and training data is evaluated based on the left singular vector correlations. Finally, the range space matching result and the angle space correlation are fused with the singular values as weights. Simulation and outfield experimental results demonstrate that the proposed matching metric is a robust similarity measure for HRRP sequence recognition.
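
The range-space comparison uses the Hausdorff distance between estimated scatterer sets. A minimal sketch with hypothetical scatterer ranges (the ANM scatterer-extraction and SVD-weighted fusion steps are not reproduced):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two scatterer position sets."""
    d_ab = max(min(abs(x - y) for y in b) for x in a)
    d_ba = max(min(abs(x - y) for y in a) for x in b)
    return max(d_ab, d_ba)

# test profile vs. two training templates (hypothetical ranges, in metres)
test_scatterers = [1.2, 3.5, 7.8]
train_a = [1.1, 3.6, 7.9]      # same target, slight range drift
train_b = [1.2, 5.0, 9.5]      # different target
```

The smaller distance to `train_a` identifies it as the matching class; in the paper this score is fused with the angle-space correlation using singular values as weights.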

  12. Measurement of the Barkas effect around the stopping-power maximum for light and heavy targets

    International Nuclear Information System (INIS)

    Moeller, S.P.; Knudsen, H.; Mikkelsen, U.; Paludan, K.; Morenzoni, E.

    1997-01-01

    The first direct measurements of antiproton stopping powers around the stopping power maximum are presented. The 5.9 MeV LEAR antiproton beam is degraded to 50-700 keV, and the energy loss is found by measuring the antiproton velocity before and after the target. The antiproton stopping powers of Si and Au are found to be reduced by 30 and 40%, respectively, near the electronic stopping power maximum as compared to the equivalent proton stopping powers. The Barkas effect, that is, the stopping power difference between protons and antiprotons, is extracted and compared to theoretical estimates. (orig.)

  13. Maximum credible yield for deuterium-filled double shell imaging targets meeting requirements for yield bin Category A

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Douglas Carl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Loomis, Eric Nicholas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-17

    We are anticipating our first NIF double shell shot using an aluminum ablator and a glass inner shell filled with deuterium, shown in figure 1. The expected yield is between a few 10¹⁰ and a few 10¹¹ DD neutrons. The maximum credible yield is 5×10¹³. This memo describes why, and what would be expected with variations on the target. This memo evaluates the maximum credible yield for deuterium-filled double shell capsule targets with an aluminum ablator shell and a glass inner shell in yield Category A (<10¹⁴ neutrons). It also pertains to fills of gas diluted with hydrogen, helium (³He or ⁴He), or any other fuel except tritium. This memo does not apply to lower-Z ablator dopants, such as beryllium, as these would increase the ablation efficiency. This evaluation is for 5.75-scale hohlraum targets of either gold or uranium with helium gas fills of density between 0 and 1.6 mg/cc. It could be extended to other hohlraum sizes and shapes with slight modifications. At present, only laser pulse energies up to 1.5 MJ were considered, with a single-step laser pulse of arbitrary shape. Since yield decreases with laser energy for this target, the memo could be extended to higher laser energies if desired. The maximum laser parameters addressed here are near the edge of NIF's capability and constitute the operating envelope for experiments covered by this memo. We have not considered multiple-step pulses, which would probably create no performance advantage and are not planned for double shell capsules. The main target variables are summarized in Table 1 and explained in detail in the memo. Predicted neutron yields are based on 1D and 2D clean simulations.

  14. Dynamic-MLC leaf control utilizing on-flight intensity calculations: A robust method for real-time IMRT delivery over moving rigid targets

    International Nuclear Information System (INIS)

    McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy

    2007-01-01

    An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading-leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations in which the target at times moves faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms provides a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting errors in the two intensity profiles used were 0.1±3.1% and -0.5±2.8% relative to the maximum of the intensity profiles. For
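
The self-correcting rule can be caricatured in one dimension: the following leaf closes a position only once the delivered intensity there has reached its intended value. A toy sketch (unit dose rate, integer position cells; leaf-speed limits and the actual controller details are omitted):

```python
def dmlc_follow(intended, lead_traj, dt=0.01, rate=1.0):
    """Toy 1-D DMLC delivery with a self-correcting following leaf.

    intended: intensity (arbitrary units) per position cell.
    lead_traj: leading-leaf cell index at each time step (may contain errors).
    The following leaf closes a cell only once the delivered intensity there
    has reached the intended value, so discrepancies are corrected on the fly.
    """
    n = len(intended)
    delivered = [0.0] * n
    follow = 0
    for lead in lead_traj:
        for x in range(follow, min(lead, n)):   # open aperture accumulates dose
            delivered[x] += rate * dt
        while follow < n and delivered[follow] >= intended[follow] - 1e-9:
            follow += 1                         # cell done: close it off
    return delivered

# non-decreasing toy profile; the leading leaf opens the full field at once
profile = [0.05, 0.10, 0.20]
out = dmlc_follow(profile, [3] * 25)
```

Because the following leaf is driven by the delivered-versus-intended comparison rather than a precomputed trajectory, leading-leaf errors do not accumulate in the delivered profile.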

  15. SU-E-T-266: Proton PBS Plan Design and Robustness Evaluation for Head and Neck Cancers

    International Nuclear Information System (INIS)

    Liang, X; Tang, S; Zhai, H; Kirk, M; Kalbasi, A; Lin, A; Ahn, P; Tochner, Z; McDonough, J; Both, S

    2014-01-01

    Purpose: To describe a newly designed proton pencil beam scanning (PBS) planning technique for radiotherapy of patients with bilateral oropharyngeal cancer, and to assess plan robustness. Methods: We treated 10 patients with proton PBS plans using 2 posterior oblique fields (2F PBS) comprised of 80% single-field uniform dose (SFUD) and 20% intensity-modulated proton therapy (IMPT). All patients underwent weekly CT scans for verification. Using dosimetric indicators for both targets and organs at risk (OARs), we quantitatively compared initial and verification plans using Student's t-tests. We created a second proton PBS plan for each patient using 2 posterior oblique fields plus 1 anterior field comprised of 100% SFUD (3F PBS). We assessed plan robustness for both proton plan groups, as well as a photon volumetric modulated arc therapy (VMAT) plan group, by comparing initial and verification plans. Results: The 2F PBS plans were not robust in target coverage. D98% for the clinical target volume (CTV) degraded from 100% to 96% on average, with a maximum change ΔD98% of −24%. Two patients were moved to photon VMAT treatment due to insufficient CTV coverage on verification plans. Plan robustness was especially weak in the low-anterior neck. The 3F PBS plans, however, demonstrated robust target coverage, comparable to the VMAT photon plan group. Doses to the oral cavity were lower in the proton PBS plans than in the photon VMAT plans due to the absence of exit dose to the oral cavity. Conclusion: Proton PBS plans using 2 posterior oblique fields were not robust for CTV coverage, due to variable positioning of redundant soft tissue in the posterior neck. We designed 3-field proton PBS plans using an anterior field to avoid long heterogeneous paths in the low neck. These 3-field proton PBS plans had significantly improved plan robustness, comparable to VMAT photon plans

  16. SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)

    2015-06-15

    Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70Gy. The maximum dose (Dmax) should not exceed 84Gy and the minimum dose (Dmin) should not go below 59.5Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are denoted as percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95%= 70Gy, Dη%= 84Gy, D(100-δ)%= 59.5Gy, and D100%= 0Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50= 74.5Gy and γ50= 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. For η and δ between 0 and 2, the TCP change was up to 2.4%. For η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
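
The TCP calculation itself is easy to reproduce. A sketch using a common logistic form for the voxel response (the abstract does not specify the exact variant, so this form is an assumption) with the quoted D50 = 74.5 Gy and γ50 = 3.52:

```python
def tcp_voxel(d, d50=74.5, gamma50=3.52):
    """Logistic tumour control probability for a uniformly irradiated voxel.

    Uses the common form TCP(D) = 1 / (1 + (D50/D)**(4*gamma50)); the exact
    logistic variant in the study may differ.
    """
    if d <= 0.0:
        return 0.0
    return 1.0 / (1.0 + (d50 / d) ** (4.0 * gamma50))

def tcp_inhomogeneous(dvh):
    """TCP for an inhomogeneous dose distribution.

    dvh: list of (dose_Gy, fractional_volume) pairs with volumes summing
    to 1; voxel TCPs combine multiplicatively, weighted by volume.
    """
    tcp = 1.0
    for d, v in dvh:
        tcp *= tcp_voxel(d) ** v
    return tcp
```

A uniform dose at D50 gives TCP = 0.5 by construction, and a small cold spot at the protocol's Dmin (59.5 Gy) measurably lowers the TCP, which is the sensitivity the study quantifies.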

  17. SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume

    International Nuclear Information System (INIS)

    Gong, Y; Yu, J; Xiao, Y

    2015-01-01

    Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70Gy. The maximum dose (Dmax) should not exceed 84Gy and the minimum dose (Dmin) should not go below 59.5Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are denoted as percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95%= 70Gy, Dη%= 84Gy, D(100-δ)%= 59.5Gy, and D100%= 0Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50= 74.5Gy and γ50= 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. For η and δ between 0 and 2, the TCP change was up to 2.4%. For η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant

  18. Electron spin resonance and its implication on the maximum nuclear polarization of deuterated solid target materials

    International Nuclear Information System (INIS)

    Heckmann, J.; Meyer, W.; Radtke, E.; Reicherz, G.; Goertz, S.

    2006-01-01

    ESR spectroscopy is an important tool in polarized solid target material research, since it allows us to study the paramagnetic centers, which are used for the dynamic nuclear polarization (DNP). The polarization behavior of the different target materials is strongly affected by the properties of these centers, which are added to the diamagnetic materials by chemical doping or irradiation. In particular, the ESR linewidth of the paramagnetic centers is a very important parameter, especially concerning the deuterated target materials. In this paper, the results of the first precise ESR measurements of the deuterated target materials at a DNP-relevant magnetic field of 2.5 T are presented. Moreover, these results allowed us to experimentally study the correlation between ESR linewidth and maximum deuteron polarization, as given by the spin-temperature theory

  19. Verification of maximum radial power peaking factor due to insertion of FPM-LEU target in the core of RSG-GAS reactor

    Energy Technology Data Exchange (ETDEWEB)

    Setyawan, Daddy, E-mail: d.setyawan@bapeten.go.id [Center for Assessment of Regulatory System and Technology for Nuclear Installations and Materials, Indonesian Nuclear Energy Regulatory Agency (BAPETEN), Jl. Gajah Mada No. 8 Jakarta 10120 (Indonesia); Rohman, Budi [Licensing Directorate for Nuclear Installations and Materials, Indonesian Nuclear Energy Regulatory Agency (BAPETEN), Jl. Gajah Mada No. 8 Jakarta 10120 (Indonesia)

    2014-09-30

    The radial power peaking factor in the RSG-GAS reactor is a very important parameter for the safety of the reactor during operation. Data on the radial power peaking factor due to the insertion of the Fission Product Molybdenum with Low Enriched Uranium (FPM-LEU) target were reported by PRSG to BAPETEN through the Safety Analysis Report (SAR) of RSG-GAS for FPM-LEU target irradiation. In order to support the evaluation of the Safety Analysis Report incorporated in the submission, the assessment unit of BAPETEN carried out an independent assessment to verify safety-related parameters in the SAR, including the neutronic aspect. The work includes verification of the change in the maximum radial power peaking factor due to the insertion of the FPM-LEU target in the RSG-GAS reactor by computational methods using MCNP5 and ORIGEN2. From the results of the calculations, the new maximum value of the radial power peaking factor due to the insertion of the FPM-LEU target is 1.27, smaller than the limit of 1.4 allowed in the SAR.

  20. SU-F-T-188: A Robust Treatment Planning Technique for Proton Pencil Beam Scanning Cranial Spinal Irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, M; Mehta, M; Badiyan, S; Young, K; Malyapa, R; Regine, W; Langen, K [University of Maryland School of Medicine, Baltimore, MD (United States); Yam, M [University of Florida Proton Therapy Institute, Jacksonville, FL (United States)

    2016-06-15

    Purpose: To propose a proton pencil beam scanning (PBS) cranial spinal irradiation (CSI) treatment planning technique robust against patient roll, isocenter offset and proton range uncertainty. Methods: Proton PBS plans were created (Eclipse V11) for three previously treated CSI patients to 36 Gy (1.8 Gy/fraction). The target volume was separated into three regions: brain, upper spine and lower spine. One posterior-anterior (PA) beam was used for each spine region, and two posterior-oblique beams (15° apart from the PA direction, denoted as 2PO-15) for the brain region. For comparison, another plan using one PA beam for the brain target (denoted as 1PA) was created. Using the same optimization objectives, 98% of the CTV was optimized to receive the prescription dose. To evaluate plan robustness against patient roll, the gantry angle was increased by 3° and the dose was recalculated without changing the proton spot weights. On the recalculated plan, doses were then calculated for 12 scenarios that are combinations of isocenter shift (±3mm in the X, Y, and Z directions) and proton range variation (±3.5%). The worst-case-scenario (WCS) brain CTV dosimetric metrics were compared to the nominal plan. Results: For both beam arrangements, the brain field(s) and upper-spine field overlap in the T2–T5 region, depending on patient anatomy. The maximum monitor units per spot were 48.7%, 47.2%, and 40.0% higher for the 1PA plans than the 2PO-15 plans for the three patients. The 2PO-15 plans have better dose conformity. At the same level of CTV coverage, the 2PO-15 plans have a lower maximum dose and a higher minimum dose to the CTV. The 2PO-15 plans also showed a lower WCS maximum dose to the CTV, while the WCS minimum doses to the CTV were comparable between the two techniques. Conclusion: Our method of using two posterior-oblique beams for the brain target provides improved dose conformity and homogeneity, and plan robustness, including robustness against patient roll.
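
The 12-scenario worst-case evaluation is a simple enumeration: single-axis isocenter shifts of ±3 mm combined with range errors of ±3.5%. A sketch (the toy metric is a hypothetical stand-in for a CTV D98% recomputed by the treatment planning system):

```python
from itertools import product

def scenarios(shift_mm=3.0, range_pct=3.5):
    """The 12 scenarios: +/-shift along one axis at a time x +/-range error."""
    axis_shifts = []
    for axis in range(3):
        for sign in (+1, -1):
            v = [0.0, 0.0, 0.0]
            v[axis] = sign * shift_mm
            axis_shifts.append(tuple(v))
    return [(s, dr) for s, dr in product(axis_shifts, (+range_pct, -range_pct))]

def worst_case(metric):
    """Minimum (worst) value of a coverage metric over all scenarios."""
    return min(metric(s, dr) for s, dr in scenarios())

def toy_d98(shift, dr):
    """Hypothetical coverage model: D98% (% of prescription) degrades
    linearly with shift magnitude and range error."""
    return 100.0 - 0.5 * sum(abs(c) for c in shift) - 2.0 * abs(dr)

wcs = worst_case(toy_d98)
```

The reported WCS metrics are exactly this minimum (for coverage) or maximum (for hot spots) taken over the recalculated scenario doses.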

  1. Maximum flow approach to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv from protein-protein interaction network.

    Science.gov (United States)

    Melak, Tilahun; Gakkhar, Sunita

    2015-12-01

    In spite of the implementation of several strategies, tuberculosis (TB) remains an overwhelmingly serious global public health problem, causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have more side effects. This highlights the importance of identification and prioritization of targets for new drugs. This study was carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only the subset of interactions with a combined score ≥770 was considered. A maximum flow approach was used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and an additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human proteins was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes and are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to resistance genes of existing drugs, which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to
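
The prioritization rests on computing maximum flow from a candidate protein to resistance genes in the weighted interaction network. A self-contained Edmonds-Karp sketch on a hypothetical toy network (node names and capacities are illustrative, not STRING data):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a weighted digraph {u: {v: capacity}}."""
    res = {u: dict(vs) for u, vs in cap.items()}      # residual capacities
    for u in list(res):
        for v in list(res[u]):
            res.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:                  # BFS for augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)           # bottleneck capacity
        for u, v in path:
            res[u][v] -= b
            res[v][u] += b
        total += b

# toy network: candidate -> intermediate proteins -> resistance gene
net = {"cand": {"a": 3, "b": 2}, "a": {"res": 2}, "b": {"res": 3}}
```

Candidates would then be ranked by `max_flow(net, candidate, resistance_gene)`, higher flow indicating a stronger connection to resistance mechanisms.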

  2. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available The α-jerk model is an effective model for tracking maneuvering targets. Non-Gaussian noise always exists in the tracking process and usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. First, the weighted least-squares solution is presented and the standard Kalman filter is deduced from it. A novel Kalman filter is then deduced from a weighted least-squares formulation based on the maximum correntropy criterion. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with the Huber-based filter. Since the kernel size of the Gaussian kernel plays an important role in the filter algorithm, a new adaptive kernel method is proposed to adjust this parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
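
The key modification is a correntropy-weighted measurement update. A simplified scalar sketch (random-walk state model and a fixed kernel; the paper's α-jerk state vector and adaptive kernel are not reproduced):

```python
import math

def mcc_kf(zs, q=0.01, r=1.0, kernel=2.0):
    """Scalar random-walk Kalman filter with a maximum-correntropy update.

    A Gaussian kernel on the innovation down-weights outlying measurements,
    robustifying the filter against heavy-tailed non-Gaussian noise.
    """
    x, p = zs[0], 1.0
    out = [x]
    for z in zs[1:]:
        p_pred = p + q                         # predict (random-walk state)
        innov = z - x
        w = math.exp(-innov * innov / (2.0 * kernel * kernel))
        k = p_pred / (p_pred + r / max(w, 1e-12))   # correntropy-weighted gain
        x += k * innov
        p = (1.0 - k) * p_pred
        out.append(x)
    return out

zs = [0.0, 0.0, 0.0, 100.0, 0.0, 0.0]          # impulsive outlier at step 3
robust = mcc_kf(zs)                            # small kernel: outlier rejected
naive = mcc_kf(zs, kernel=1e6)                 # huge kernel ~ standard KF
```

With a very large kernel the weight tends to 1 and the update reduces to the standard Kalman gain, which illustrates why kernel size matters and motivates the paper's adaptive choice.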

  3. Targeting the maximum heat recovery for systems with heat losses and heat gains

    International Nuclear Information System (INIS)

    Wan Alwi, Sharifah Rafidah; Lee, Carmen Kar Mun; Lee, Kim Yau; Abd Manan, Zainuddin; Fraser, Duncan M.

    2014-01-01

    Graphical abstract: Illustration of heat gains and losses from process streams. - Highlights: • Maximising energy savings through heat losses or gains. • Identifying locations where insulation can be avoided. • Heuristics to maximise heat losses or gains. • Targeting heat losses or gains using the extended STEP technique and HEAT diagram. - Abstract: Process Integration using the Pinch Analysis technique has been widely used as a tool for the optimal design of heat exchanger networks (HENs). The Composite Curves and the Stream Temperature versus Enthalpy Plot (STEP) are among the graphical tools used to target the maximum heat recovery for a HEN. However, these tools assume that heat losses and heat gains are negligible. This work presents an approach that considers heat losses and heat gains during the establishment of the minimum utility targets. The STEP method, which is plotted from individual rather than composite streams, has been extended to consider the effect of heat losses and heat gains during stream matching. Several rules to guide the proper location of pipe insulation and the appropriate procedure for stream shifting are introduced in order to minimise the heat losses and maximise the heat gains. Application of the method to two case studies shows that considering heat losses and heat gains yields more realistic utility targets and helps reduce both the insulation capital cost and the utility cost of a HEN
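
Without heat losses and gains, the minimum utility targets follow from the classical problem-table cascade that this work extends. A sketch of that loss-free baseline (hypothetical streams, ΔTmin = 10 °C):

```python
def utility_targets(streams, dt_min=10.0):
    """Problem-table minimum hot/cold utility targets (loss-free baseline).

    streams: list of (t_supply, t_target, cp); hot streams have
    t_supply > t_target. Temperatures in deg C, cp in kW/K;
    returns (hot_utility, cold_utility) in kW.
    """
    shifted = []
    for ts, tt, cp in streams:
        hot = ts > tt
        shift = -dt_min / 2 if hot else dt_min / 2   # shift into one scale
        shifted.append((ts + shift, tt + shift, cp, hot))
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade = [0.0]
    for hi, lo in zip(bounds, bounds[1:]):           # interval heat balances
        surplus = 0.0
        for a, b, cp, hot in shifted:
            top, bot = max(a, b), min(a, b)
            overlap = max(0.0, min(hi, top) - max(lo, bot))
            surplus += cp * overlap if hot else -cp * overlap
        cascade.append(cascade[-1] + surplus)
    hot_utility = max(0.0, -min(cascade))            # fix the largest deficit
    cold_utility = cascade[-1] + hot_utility
    return hot_utility, cold_utility
```

The paper's extension perturbs the interval balances with stream-wise losses and gains (and insulation decisions), which shifts these targets.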

  4. Robust, fast and accurate vision-based localization of a cooperative target used for space robotic arm

    Science.gov (United States)

    Wen, Zhuoman; Wang, Yanjie; Luo, Jun; Kuijper, Arjan; Di, Nan; Jin, Minghe

    2017-07-01

    When a space robotic arm deploys a payload, the pose between the cooperative target fixed on the payload and the hand-eye camera installed on the arm is usually calculated in real time. A high-precision, robust visual localization method for cooperative targets is proposed. Combining a circle, a line and dots as markers, a target that guarantees high detection rates is designed. Given an image, single-pixel-width smooth edges are extracted by a novel linking method. Circles are then quickly extracted using isophote curvature. Around each circle, a square boundary in a pre-calculated proportion to the circle radius is set. Within the boundary, the target is identified if a certain number of lines exist. Based on the circle, the lines, and the target foreground and background intensities, the markers are localized. Finally, the target pose is calculated by the Perspective-3-Point algorithm. The algorithm processes 8 frames per second with the target distance ranging from 0.3 m to 1.5 m. It generated high-precision poses on more than 97.5% of over 100,000 images regardless of camera background, target pose, illumination and motion blur. At 0.3 m, the rotation and translation errors were less than 0.015° and 0.2 mm. The proposed algorithm is very suitable for real-time visual measurement requiring high precision in aerospace.

  5. Shock ignition: a brief overview and progress in the design of robust targets

    International Nuclear Information System (INIS)

    Atzeni, S; Marocchino, A; Schiavi, A

    2015-01-01

    Shock ignition is a laser direct-drive inertial confinement fusion (ICF) scheme in which the stages of compression and hot spot formation are partly separated. The fuel is first imploded at a lower velocity than in conventional ICF, reducing the threats due to Rayleigh–Taylor instability. Close to stagnation, an intense laser spike drives a strong converging shock, which contributes to hot spot formation. This paper starts with a brief overview of the theoretical studies, target design and experimental results on shock ignition. The second part of the paper illustrates original work aiming at the design of robust targets and computation of the relevant gain curves. Following Chang et al (2010 Phys. Rev. Lett. 104 135002) a safety factor for high gain, ITF* (analogous to the ignition threshold factor ITF introduced by Clark et al (2008 Phys. Plasmas 15 056305)), is evaluated by means of parametric 1D simulations with artificially reduced reactivity. SI designs scaled as in Atzeni et al (2013 New J. Phys. 15 045004) are found to have nearly the same ITF*. For a given target, such ITF* increases with implosion velocity and laser spike power. A gain curve with a prescribed ITF* can then be simply generated by upscaling a reference target with that value of ITF*. An interesting option is scaling in size by reducing the implosion velocity to keep the ratio of implosion velocity to self-ignition velocity constant. At a given total laser energy, targets with higher ITF* are driven to higher implosion velocity and achieve a somewhat lower gain. However, a 1D gain higher than 100 is achieved at an (incident) energy below 1 MJ, an implosion velocity below 300 km s⁻¹ and a peak incident power below 400 TW. 2D simulations of mispositioned targets show that targets with a higher ITF* indeed tolerate larger displacements. (paper)

  6. Evaluation of the maximum-likelihood adaptive neural system (MLANS) applications to noncooperative IFF

    Science.gov (United States)

    Chernick, Julian A.; Perlovsky, Leonid I.; Tye, David M.

    1994-06-01

    This paper describes applications of the maximum likelihood adaptive neural system (MLANS) to the characterization of clutter in IR images and to the identification of targets. The characterization of image clutter is needed to improve target detection and to enhance the ability to compare the performance of different algorithms using diverse imagery data. Enhanced unambiguous IFF is important for fratricide reduction, while automatic cueing and targeting is becoming an ever-increasing part of operations. We utilized MLANS, which is a parametric neural network that combines optimal statistical techniques with a model-based approach. This paper shows that MLANS outperforms classical classifiers, namely the quadratic classifier and the nearest neighbor classifier, because, on the one hand, it is not limited to the usual Gaussian distribution assumption and can adapt in real time to the image clutter distribution; on the other hand, MLANS learns from fewer samples and is more robust than the nearest neighbor classifier. Future research will address noncooperative IFF using fused IR and MMW data.

  7. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    Science.gov (United States)

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
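
    As a rough illustration of the second approach, M-estimation with Huber-type weights can be computed by iteratively reweighted least squares (IRLS). The data, tuning constant and single-level regression below are hypothetical simplifications of the paper's two-level model:

```python
import numpy as np

# Hypothetical single-level illustration of Huber-type M-estimation via IRLS;
# not the authors' two-level algorithm.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)
y[:10] += 8.0                     # gross outliers

X = np.column_stack([np.ones(n), x])
c = 1.345                         # Huber tuning constant (95% efficiency)

beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
for _ in range(50):
    r = y - X @ beta
    s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
    u = np.abs(r) / s
    w = np.minimum(1.0, c / np.maximum(u, 1e-12))     # Huber weights
    sw = np.sqrt(w)
    beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print(beta)  # close to the true (intercept, slope) = (1.0, 2.0)
```

    Ordinary least squares would be pulled upward by the outliers; the Huber weights shrink their influence instead of deleting them.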

  8. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available In Augmented Reality applications, the human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene, and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor in providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method with an optical flow approach to track visible points and maintain the virtual graphics overlay when targets are not identified. The proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.

  9. Maximum Power Point Tracking Using Sliding Mode Control for Photovoltaic Array

    Directory of Open Access Journals (Sweden)

    J. Ghazanfari

    2013-09-01

    Full Text Available In this paper, a robust Maximum Power Point Tracking (MPPT) scheme for a PV array is proposed using sliding mode control, with a new formulation of the sliding surface based on the incremental conductance (INC) method. The stability and robustness of the proposed controller are investigated with respect to load variations and environmental changes. Three different types of DC-DC converter are used in the Maximum Power Point (MPP) tracking system and the results obtained are given. The simulation results confirm the effectiveness of the proposed method in the presence of load variations and environmental changes for the different DC-DC converter topologies.
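
    The INC condition the sliding surface is built on (dP/dV = 0, i.e. dI/dV = -I/V at the MPP) can be sketched as a simple perturbation rule. The single-diode PV parameters and step size below are hypothetical, and the sliding-mode controller itself is not reproduced:

```python
from math import exp

def pv_current(v, i_ph=5.0, i_0=1e-9, vt=1.0):
    """Hypothetical single-diode PV model: I = Iph - I0*(exp(V/Vt) - 1)."""
    return i_ph - i_0 * (exp(v / vt) - 1.0)

def inc_step(v, i, v_prev, i_prev, step=0.05):
    """Voltage perturbation suggested by the INC rule dI/dV + I/V = 0."""
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-12:                 # voltage unchanged: fall back on dI
        return 0.0 if abs(di) < 1e-12 else (step if di > 0 else -step)
    g = di / dv + i / v                 # sign tells which side of the MPP
    if abs(g) < 1e-6:
        return 0.0                      # at the MPP
    return step if g > 0 else -step

# crude tracking loop starting to the left of the MPP
v_prev, i_prev = 10.0, pv_current(10.0)
v = 10.5
for _ in range(500):
    i = pv_current(v)
    dv = inc_step(v, i, v_prev, i_prev)
    v_prev, i_prev = v, i
    v += dv

print(v, v * pv_current(v))  # settles near the maximum power point
```

    The sliding-mode formulation in the paper replaces this fixed-step perturbation with a control law that drives the quantity g to zero.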

  10. Progress towards a high-gain and robust target design for heavy ion fusion

    Energy Technology Data Exchange (ETDEWEB)

    Henestroza, Enrique; Grant Logan, B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)

    2012-07-15

    would not reach the ignition zone in time to affect the burning process. Also, preliminary HYDRA calculations, using a higher resolution mesh to study the shear flow of the DT fuel along the X-target walls, indicate that metal-mixed fuel produced near the walls would not be transferred to the DT ignition zone (at maximum ρR) located at the vertex of the X-target.

  11. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  12. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    Science.gov (United States)

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  13. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  14. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors have been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement showed to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties. (paper)

  15. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    Science.gov (United States)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors have been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement showed to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  16. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  17. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  18. Robust aptamer–polydopamine-functionalized M-PLGA–TPGS nanoparticles for targeted delivery of docetaxel and enhanced cervical cancer therapy

    Directory of Open Access Journals (Sweden)

    Xu GJ

    2016-06-01

    Full Text Available Guojun Xu,1–3,* Xinghua Yu,2,* Jinxie Zhang,1,2,* Yingchao Sheng,4 Gan Liu,2 Wei Tao,1,2 Lin Mei1,2 1School of Life Sciences, Tsinghua University, Beijing, 2Graduate School at Shenzhen, Tsinghua University, Shenzhen, 3School of Materials Science and Engineering, Tsinghua University, Beijing, 4Department of Orthopedic Surgery, Changshu Hospital of TCM, Changshu, People’s Republic of China *These authors contributed equally to this work Abstract: One limitation of current biodegradable polymeric nanoparticles (NPs) is the contradiction between functional modification and maintaining formerly excellent bioproperties with simple procedures. Here, we report a robust aptamer–polydopamine-functionalized, mannitol-functionalized poly(lactide-co-glycolide) (M-PLGA)–D-α-tocopheryl polyethylene glycol 1000 succinate (TPGS) nanoformulation (Apt-pD-NPs) for the delivery of docetaxel (DTX) with enhanced cervical cancer therapy effects. The novel DTX-loaded Apt-pD-NPs possess satisfactory advantages: (1) increased drug loading content and encapsulation efficiency induced by the star-shaped copolymer M-PLGA–TPGS; (2) a significant active targeting effect caused by conjugated AS1411 aptamers; and (3) excellent long-term compatibility through incorporation of TPGS. Therefore, with simple preparation procedures and excellent bioproperties, the new functionalized Apt-pD-NPs could maximally increase the local effective drug concentration at tumor sites, achieving enhanced treatment effectiveness and minimizing side effects. In a word, the robust DTX-loaded Apt-pD-NPs could be used as potential nanotherapeutics for cervical cancer treatment, and the aptamer–polydopamine modification strategy could be a promising method for active targeting in cancer therapy with simple procedures. Keywords: dopamine, AS1411 aptamer, active targeting, polymeric NPs, enhanced cervical chemotherapy

  19. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
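
    The centroid-by-regular-moment step can be sketched directly: the target centroid is the ratio of the first-order raw moments to the zeroth-order moment. The synthetic image below is hypothetical:

```python
import numpy as np

# Centroid detection via regular (raw) image moments:
# cx = M10/M00, cy = M01/M00. Synthetic target image for illustration.
img = np.zeros((64, 64))
img[20:30, 40:50] = 1.0  # bright square "target"

ys, xs = np.mgrid[0:64, 0:64]
m00 = img.sum()            # zeroth moment (total intensity)
m10 = (xs * img).sum()     # first moment in x
m01 = (ys * img).sum()     # first moment in y
cx, cy = m10 / m00, m01 / m00
print(cx, cy)  # (44.5, 24.5): centre of the bright square
```

    In the paper this centroid, tracked across the ISAR image sequence, feeds the correlation-based motion analysis.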

  20. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    Science.gov (United States)

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-05-09

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function based on the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy consisting of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen at each time index. Hence, the probability of misjudging outliers can be reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness, with good computational efficiency. Additionally, the filter can be extended to nonlinear applications using other types of sensors.
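
    A hedged sketch of the core idea: the innovation's Mahalanobis distance selects a Huber-type equivalent weight that inflates the measurement noise for outliers. A linear scalar update stands in for the paper's cubature update, and all constants are illustrative:

```python
import numpy as np

# Illustrative M-estimation-style measurement update (linear stand-in for
# the CKF update): outlying innovations get a Huber weight below 1, which
# inflates the effective measurement noise.
def robust_update(x, P, z, H, R, c=1.345):
    y = z - H @ x                                     # innovation
    S = H @ P @ H.T + R                               # innovation covariance
    d = float(np.sqrt(y.T @ np.linalg.inv(S) @ y))    # Mahalanobis distance
    w = 1.0 if d <= c else c / d                      # Huber equivalent weight
    S_eq = H @ P @ H.T + R / w                        # outlier -> inflated noise
    K = P @ H.T @ np.linalg.inv(S_eq)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, w

x = np.array([0.0]); P = np.eye(1)
H = np.array([[1.0]]); R = np.array([[0.1]])
_, _, w_good = robust_update(x, P, np.array([0.5]), H, R)   # nominal
_, _, w_bad = robust_update(x, P, np.array([25.0]), H, R)   # outlier
print(w_good, w_bad)  # the outlier receives a weight well below 1
```

    The paper's feedback strategy additionally decides, per time step, whether the standard or the robust update should be applied.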

  1. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum

  2. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case
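
    The random-failure versus targeted-attack comparison can be sketched on a toy graph by tracking the largest connected component after node removal. The star graph and trial counts below are hypothetical:

```python
import random
from collections import deque

# Toy comparison of random node failure vs a targeted attack on the
# highest-degree node, measured by the largest connected component (LCC).
def largest_cc(adj, removed):
    alive = set(adj) - removed
    best, seen = 0, set()
    for s in alive:
        if s in seen:
            continue
        q, comp = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best

# hypothetical star graph: hub 0 connected to leaves 1..10
n = 11
adj = {i: set() for i in range(n)}
for i in range(1, n):
    adj[0].add(i)
    adj[i].add(0)

# targeted attack: remove the highest-degree node (the hub)
hub = max(adj, key=lambda u: len(adj[u]))
targeted = largest_cc(adj, {hub})

# random failure: average LCC over random single-node removals
random.seed(1)
rand = sum(largest_cc(adj, {random.choice(range(n))}) for _ in range(20)) / 20

print(targeted, rand)  # targeted attack fragments the star far more
```

    Centrality-based attacks exploit exactly this asymmetry, which is why the robustness envelopes in the paper separate the two removal strategies.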

  3. Optimal robust control strategy of a solid oxide fuel cell system

    Science.gov (United States)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters may vary with the operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe operation of the SOFC system at maximum efficiency under load and uncertainty variations.

  4. A new robustness analysis for climate policy evaluations: A CGE application for the EU 2020 targets

    International Nuclear Information System (INIS)

    Hermeling, Claudia; Löschel, Andreas; Mennel, Tim

    2013-01-01

    This paper introduces a new method for stochastic sensitivity analysis of computable general equilibrium (CGE) models based on Gauss quadrature and applies it to check the robustness of a large-scale climate policy evaluation. The revised version of the Gauss-quadrature approach to sensitivity analysis reduces computations considerably vis-à-vis the commonly applied Monte-Carlo methods; this allows for a stochastic sensitivity analysis also for large-scale models and multi-dimensional changes of parameters. In the application, an impact assessment of EU 2020 climate policy, we focus on sectoral elasticities that are part of the basic parameters of the model and have recently been determined by econometric estimation, along with standard errors. The impact assessment is based on the large-scale CGE model PACE. We show the applicability of the Gauss-quadrature approach and confirm the robustness of the impact assessment with the PACE model. The variance of the central model outcomes is smaller than their mean by four to eight orders of magnitude, depending on the aggregation level (i.e. aggregate variables such as GDP show a smaller variance than sectoral output). - Highlights: • New, simplified method for stochastic sensitivity analysis for CGE analysis. • Gauss quadrature with orthogonal polynomials. • Application to climate policy: the case of the EU 2020 targets
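
    The efficiency argument can be illustrated with one-dimensional Gauss–Hermite quadrature: a handful of model evaluations reproduces the mean of a smooth response under a Gaussian parameter that Monte Carlo needs many draws to match. The response function f below is a hypothetical stand-in for a CGE model output:

```python
import numpy as np

# Gauss-Hermite quadrature vs Monte Carlo for E[f(theta)], theta ~ N(mu, s^2).
# f is a hypothetical smooth response, standing in for a CGE model run.
def f(theta):
    return np.exp(0.3 * theta) + theta**2

mu, sigma = 1.0, 0.2
nodes, weights = np.polynomial.hermite.hermgauss(5)  # 5 "model runs"

# change of variables: theta = mu + sqrt(2)*sigma*x for the exp(-x^2) weight
theta = mu + np.sqrt(2) * sigma * nodes
gq_mean = np.sum(weights * f(theta)) / np.sqrt(np.pi)

rng = np.random.default_rng(0)
mc_mean = f(rng.normal(mu, sigma, 200_000)).mean()   # 200k "model runs"

print(gq_mean, mc_mean)  # agree closely despite 5 vs 200,000 evaluations
```

    For expensive CGE solves this gap in evaluation counts is exactly what makes quadrature-based sensitivity analysis feasible at scale.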

  5. Robustness analysis of geodetic networks in the case of correlated observations

    Directory of Open Access Journals (Sweden)

    Mevlut Yetkin

    Full Text Available GPS (or GNSS) networks are invaluable tools for monitoring natural hazards such as earthquakes. However, blunders in GPS observations may be mistakenly interpreted as deformation. Therefore, robust networks are needed when monitoring deformation with GPS networks. Robustness analysis is a natural merger of reliability and strain, and is defined as the ability to resist deformations caused by the maximum undetectable errors as determined from internal reliability analysis. However, to obtain rigorously correct results, the correlations among the observations must be considered when computing the maximum undetectable errors. Therefore, we propose to use the normalized reliability numbers instead of redundancy numbers (Baarda's approach) in the robustness analysis of a GPS network. A simple mathematical relation showing the ratio between the uncorrelated and correlated cases for the maximum undetectable error is derived. The same ratio is also valid for the displacements. Numerical results show that if correlations among observations are ignored, dramatically different displacements can be obtained depending on the size of the multiple correlation coefficients. Furthermore, when normalized reliability numbers are small, displacements get large; that is, observations with low reliability numbers cause larger displacements than observations with high reliability numbers.
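
    For the uncorrelated (Baarda) baseline, redundancy numbers and the maximum undetectable errors can be sketched for a small hypothetical levelling network, using r_i = (Q_v P)_ii and MDB_i = δ0·σ_i/√r_i:

```python
import numpy as np

# Baarda-style internal reliability for a hypothetical 3-observation,
# 2-unknown levelling network (uncorrelated observations).
# Observed height differences: h2-h1, h3-h2, h3-h1, with h1 fixed to 0.
A = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, 1.0]])
sigma = np.array([0.002, 0.002, 0.003])      # observation std devs [m]
P = np.diag(1.0 / sigma**2)                  # weight matrix

N = A.T @ P @ A                              # normal matrix
Qv = np.linalg.inv(P) - A @ np.linalg.inv(N) @ A.T  # residual cofactor
r = np.diag(Qv @ P)                          # redundancy numbers

delta0 = 4.13                                # non-centrality (alpha=0.1%, power 80%)
mdb = delta0 * sigma / np.sqrt(r)            # maximum undetectable errors [m]
print(r, mdb)  # redundancy numbers sum to n - u = 1
```

    The paper's point is that with correlated observations these r_i must be replaced by normalized reliability numbers, which rescale the maximum undetectable errors and hence the displacements they can cause.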

  6. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current and aging dependencies of maximum available energy. • Study on the dependencies of the SOE–SOC relationship on various factors. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of a moving-window energy integral. • The robustness and feasibility of the proposed approaches are systematically evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn2O4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
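
    The moving-window energy-integral idea can be sketched as follows: the energy drawn over a window, divided by the SOE drop across it, estimates the maximum available energy. The constant-power discharge data below are synthetic, and in practice the SOE drop would come from the proposed SOE–SOC relationship rather than being simulated directly:

```python
# Sketch of a moving-window energy-integral estimate of maximum available
# energy: E_max ~ (energy drawn in window) / (SOE drop over window).
# Synthetic constant-power discharge; cell parameters are hypothetical.
dt = 1.0                       # sample interval [s]
e_max_true = 36000.0           # 10 Wh cell, in joules
power = 36.0                   # constant discharge power [W]

# simulate the true SOE trajectory as energy is drawn
t_steps = 200
soe = [1.0]
for _ in range(t_steps):
    soe.append(soe[-1] - power * dt / e_max_true)

# moving-window estimate over the last 100 samples
w = 100
energy_in_window = power * dt * w              # integral of v*i dt
e_max_est = energy_in_window / (soe[-1 - w] - soe[-1])
print(e_max_est)  # recovers the 36 kJ maximum available energy
```

    With real data the numerator is the measured energy integral and the denominator comes from the estimated SOE, so the window length trades noise suppression against responsiveness to aging.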

  7. Effects of methodology and analysis strategy on robustness of pestivirus phylogeny.

    Science.gov (United States)

    Liu, Lihong; Xia, Hongyan; Baule, Claudia; Belák, Sándor; Wahlberg, Niklas

    2010-01-01

    Phylogenetic analysis of pestiviruses is a useful tool for classifying novel pestiviruses and for revealing their phylogenetic relationships. In this study, robustness of pestivirus phylogenies has been compared by analyses of the 5'UTR, and complete N(pro) and E2 gene regions separately and combined, performed by four methods: neighbour-joining (NJ), maximum parsimony (MP), maximum likelihood (ML), and Bayesian inference (BI). The strategy of analysing the combined sequence dataset by BI, ML, and MP methods resulted in a single, well-supported tree topology, indicating a reliable and robust pestivirus phylogeny. By contrast, the single-gene analysis strategy resulted in 12 trees of different topologies, revealing different relationships among pestiviruses. These results indicate that the strategies and methodologies are two vital aspects affecting the robustness of the pestivirus phylogeny. The strategy and methodologies outlined in this paper may have a broader application in inferring phylogeny of other RNA viruses.

  8. Building a Robust Tumor Profiling Program: Synergy between Next-Generation Sequencing and Targeted Single-Gene Testing.

    Directory of Open Access Journals (Sweden)

    Matthew C Hiemenz

    Full Text Available Next-generation sequencing (NGS is a powerful platform for identifying cancer mutations. Routine clinical adoption of NGS requires optimized quality control metrics to ensure accurate results. To assess the robustness of our clinical NGS pipeline, we analyzed the results of 304 solid tumor and hematologic malignancy specimens tested simultaneously by NGS and one or more targeted single-gene tests (EGFR, KRAS, BRAF, NPM1, FLT3, and JAK2. For samples that passed our validated tumor percentage and DNA quality and quantity thresholds, there was perfect concordance between NGS and targeted single-gene tests with the exception of two FLT3 internal tandem duplications that fell below the stringent pre-established reporting threshold but were readily detected by manual inspection. In addition, NGS identified clinically significant mutations not covered by single-gene tests. These findings confirm NGS as a reliable platform for routine clinical use when appropriate quality control metrics, such as tumor percentage and DNA quality cutoffs, are in place. Based on our findings, we suggest a simple workflow that should facilitate adoption of clinical oncologic NGS services at other institutions.

  9. New robust chaotic system with exponential quadratic term

    International Nuclear Information System (INIS)

    Bao Bocheng; Li Chunbiao; Liu Zhong; Xu Jianping

    2008-01-01

This paper proposes a new robust chaotic system of three-dimensional quadratic autonomous ordinary differential equations by introducing an exponential quadratic term. The system can display a double-scroll chaotic attractor with only two equilibria, and is found to be robustly chaotic over a very wide parameter domain with a positive maximum Lyapunov exponent. Some basic dynamical properties and the chaotic behaviour of the novel attractor are studied. By numerical simulation, the paper verifies that the three-dimensional system can also evolve into periodic and chaotic behaviours under a constant controller. (general)
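The record does not reproduce the system's equations, but the diagnostic it cites, a positive maximum Lyapunov exponent, can be estimated numerically for any smooth 3D autonomous flow. A minimal sketch using Benettin's two-trajectory method with periodic renormalization; the classic Lorenz system serves here as a stand-in, since the actual exponential-quadratic system of Bao et al. is not given in the record:

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz system, used only as a stand-in 3D autonomous flow.
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def max_lyapunov(f, x0, dt=0.01, n_steps=20_000, d0=1e-8, burn_in=1_000):
    """Estimate the largest Lyapunov exponent by tracking the divergence of two
    nearby trajectories, rescaling their separation back to d0 at every step."""
    def rk4(v):
        k1 = f(v)
        k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2)
        k4 = f(v + dt * k3)
        return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    x = np.asarray(x0, dtype=float)
    for _ in range(burn_in):           # settle onto the attractor first
        x = rk4(x)
    y = x + d0 * np.array([1.0, 0.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        x, y = rk4(x), rk4(y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (y - x) * (d0 / d)     # renormalize the separation
    return log_growth / (n_steps * dt)
```

For the Lorenz parameters above the estimate comes out near the known value of about 0.9; any clearly positive value is the signature of chaos the abstract refers to.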

  10. Computing the maximum volume inscribed ellipsoid of a polytopic projection

    NARCIS (Netherlands)

    Zhen, Jianzhe; den Hertog, Dick

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  11. Computing the Maximum Volume Inscribed Ellipsoid of a Polytopic Projection

    NARCIS (Netherlands)

    Zhen, J.; den Hertog, D.

    2015-01-01

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  12. Combination of surface and borehole seismic data for robust target-oriented imaging

    Science.gov (United States)

    Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees

    2016-05-01

    A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.

  13. Neural networks, cellular automata, and robust approach applications for vertex localization in the opera target tracker detector

    International Nuclear Information System (INIS)

    Dmitrievskij, S.G.; Gornushkin, Yu.A.; Ososkov, G.A.

    2005-01-01

A neural-network (NN) approach for neutrino interaction vertex reconstruction in the OPERA experiment with the help of the Target Tracker (TT) detector is described. A feed-forward NN with the standard back-propagation option is used. The energy functional of the network is minimized by the method of conjugate gradients. Data preprocessing by means of a cellular automaton algorithm is performed. The Hough transform is applied for muon track determination, and the robust fitting method is used for shower axis reconstruction. A comparison of the proposed approach with earlier studies, based on the use of the neural network package SNNS, shows similar performance. Further development of the approach is underway

  14. Systematic and robust design of photonic crystal waveguides by topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Jensen, Jakob Søndergaard; Sigmund, Ole

    2010-01-01

A robust topology optimization method is presented to consider manufacturing uncertainties in tailoring dispersion properties of photonic crystal waveguides. The under-, normal and over-etching scenarios in the manufacturing process are represented by dilated, intermediate and eroded designs based on a threshold projection. The objective is formulated to minimize the maximum error between actual group indices and a prescribed group index among these three designs. A novel photonic crystal waveguide facilitating slow light with a group index of n(g) = 40 is achieved by the robust optimization approach. The numerical result illustrates that the robust topology optimization provides a systematic and robust design methodology for photonic crystal waveguide design.

  15. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We establish the existence and uniqueness of the consumption-investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle

  16. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    International Nuclear Information System (INIS)

    Ding, Xuanfeng; Li, Xiaoqiang; Zhang, J. Michele; Kabolizadeh, Peyman; Stevens, Craig; Yan, Di

    2016-01-01

Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc_multi-field) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for the oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc_multi-field plans. Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be

  17. Spot-Scanning Proton Arc (SPArc) Therapy: The First Robust and Delivery-Efficient Spot-Scanning Proton Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele; Kabolizadeh, Peyman; Stevens, Craig; Yan, Di

    2016-12-01

Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc{sub multi-field}) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, it was evident that the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc{sub multi-field} plans. Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be implemented

  18. Enhanced echolocation via robust statistics and super-resolution of sonar images

    Science.gov (United States)

    Kim, Kio

Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other features known to be crucial for dolphin echolocation. The varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into 2 dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information about the region of interest. Computer vision and image processing allow application of robust statistics to the acoustic images produced by forward-looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sampling Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and its performance is studied. The second use of robust
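The record does not give the exact variant of the estimator, but the Koenker-Bassett L-estimator family it builds on is simply a weighted linear combination of order statistics. A minimal sketch, with the symmetric trimmed mean as one concrete choice of weights (the weighting actually used for highlight-interval detection is not specified in the record):

```python
import numpy as np

def l_estimator(samples, weight_fn):
    """Generic L-estimator: a weighted linear combination of order statistics."""
    x = np.sort(np.asarray(samples, dtype=float))
    w = weight_fn(len(x))
    return float(np.dot(w, x))

def trimmed_mean_weights(n, trim=0.2):
    """Weights for a symmetric trimmed mean: zero weight on the lowest and
    highest `trim` fraction of the ordered sample, uniform weight elsewhere."""
    k = int(np.floor(trim * n))
    w = np.zeros(n)
    w[k:n - k] = 1.0 / (n - 2 * k)
    return w
```

With a single impulsive outlier, e.g. samples [1, 2, 3, 4, 100], the trimmed mean stays at 3 while the plain mean is dragged to 22, which is the robustness property the dissertation exploits.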

  19. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

Full Text Available The article presents a control design for the complete structure of the crane: crab, bridge and crane uplift. The most important unknown parameters for simulation are the burden weight and the length of the hanging rope. Robust control is used for crab and bridge control to ensure adaptivity to burden weight and rope length. Robust control is designed for current control of the crab and bridge; it is necessary to know the range of the unknown parameters. The whole robust range is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers are chosen. The most important condition for crab and bridge motion is avoiding burden swinging in the final position. The crab and bridge drive is implemented by an asynchronous motor fed from a frequency converter. The crane uplift is combined with a burden weight observer, with the uplift, crab and bridge drives cooperating through their parameters: burden weight, rope length, and crab and bridge position. Controllers are designed by the state control method, preferably using a disturbance observer which identifies the burden weight as a disturbance. The system works in both modes, at empty hook as well as at maximum load: burden uplifting and dropping down.

  20. Investigation of a measure of robustness in inductively coupled plasma mass spectrometry

    Science.gov (United States)

    Makonnen, Yoseif; Beauchemin, Diane

    2015-01-01

In industrial/commercial settings where operators often have minimal expertise in inductively coupled plasma (ICP) mass spectrometry (MS), there is a prevalent need for a response factor indicating robust plasma conditions, analogous to the Mg II/Mg I ratio in ICP optical emission spectrometry (OES), whereby a Mg II/Mg I ratio of 10 constitutes robust conditions. While minimizing the oxide ratio usually corresponds to robust conditions, there is no specific target value that is widely accepted as indicating robust conditions. Furthermore, tuning for low oxide ratios does not necessarily guarantee minimal matrix effects, as oxide ratios really address polyatomic interferences. From experiments conducted in parallel for both MS and OES, some element pairs of similar mass and very different ionization potential were exploited for this purpose, the rationale being that, if these elements were ionized to the same extent, this could be indicative of a robust plasma. The Be II/Li I intensity ratio was directly related to the Mg II/Mg I ratio in OES. Moreover, the 9Be+/7Li+ ratio was inversely related to the CeO+/Ce+ and LaO+/La+ oxide ratios in MS. The effects of different matrices (i.e. 0.01-0.1 M Na) were also investigated and compared to a conventional argon plasma optimized for maximum sensitivity. The suppression effect of these matrices was significantly reduced, if not eliminated in the case of 0.01 M Na, when the 9Be+/7Li+ ratio was around 0.30 on the Varian 820 MS instrument. Moreover, a very similar ratio (0.28) increased robustness to the same extent on a completely different ICP-MS instrument (PerkinElmer NEXION). Much greater robustness was achieved using a mixed-gas plasma with nitrogen in the outer gas and either nitrogen or hydrogen as a sheathing gas, as the 9Be+/7Li+ ratio was then around 1.70. To the best of our knowledge, this is the first report on using a simple analyte intensity ratio, 9Be+/7Li+, to gauge plasma robustness.

  1. Robust Geometric Control of a Distillation Column

    DEFF Research Database (Denmark)

    Kymmel, Mogens; Andersen, Henrik Weisberg

    1987-01-01

    A frequency domain method, which makes it possible to adjust multivariable controllers with respect to both nominal performance and robustness, is presented. The basic idea in the approach is that the designer assigns objectives such as steady-state tracking, maximum resonance peaks, bandwidth, m...... is used to examine and improve geometric control of a binary distillation column....

  2. Efficacy of robust optimization plan with partial-arc VMAT for photon volumetric-modulated arc therapy: A phantom study.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi

    2017-09-01

This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center CTV positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan consisted of a directly optimized dose to the CTV under a maximum setup uncertainty of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as an original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position up to the maximum setup uncertainty of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ when comparing the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). The robust optimization plan's efficacy using partial-arc VMAT thus depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  3. Proceedings of the First International Symposium on Robust Design 2014

    DEFF Research Database (Denmark)

    The symposium concerns the topic of robust design from a practical and industry orientated perspective. During the 2 day symposium we will share our understanding of the need of industry with respect to the control of variance, reliability issues and approaches to robust design. The target audience...

  4. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    International Nuclear Information System (INIS)

    Callahan, Jason; Kron, Tomas; Schneider-Kolsky, Michal; Dunn, Leon; Thompson, Mick; Siva, Shankar; Aarons, Yolanda; Binns, David; Hicks, Rodney J.

    2013-01-01

Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) 18F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of 18F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground = 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently underestimates ITV
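The Dice coefficient used above to compare contoured volumes has a standard definition, DC = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks (illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Identical masks give DC = 1, disjoint masks give DC = 0, and values such as the study's 0.72-0.84 indicate substantial but imperfect overlap.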

  5. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, Jason, E-mail: jason.callahan@petermac.org [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Kron, Tomas [Department of Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Schneider-Kolsky, Michal [Department of Medical Imaging and Radiation Science, Monash University, Clayton, Victoria (Australia); Dunn, Leon [Department of Applied Physics, RMIT University, Melbourne (Australia); Thompson, Mick [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Siva, Shankar [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Aarons, Yolanda [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Binns, David [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Hicks, Rodney J. [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia)

    2013-07-15

Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) {sup 18}F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of {sup 18}F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground = 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently

  6. Robustness of power systems under a democratic-fiber-bundle-like model.

    Science.gov (United States)

    Yağan, Osman

    2015-06-01

We consider a power system with N transmission lines whose initial loads (i.e., power flows) L_1, ..., L_N are independent and identically distributed with distribution function P_L(x) = P[L ≤ x]. The capacity C_i defines the maximum flow allowed on line i and is assumed to be given by C_i = (1 + α)L_i, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model: when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p* = 1 - E[L] / max_x {P[L > x](αx + E[L | L > x])}, where E[·] is the expectation operator; (ii) we derive conditions on the distribution P_L(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions (uniform, Pareto, and Weibull), showing that with the minimum load L_min and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best when its shape parameter is selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p* is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
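The critical attack size formula above can be evaluated numerically. A sketch for uniformly distributed loads, checked against the Dirac-delta case p* = α/(1 + α), which finding (v) identifies as the maximum:

```python
import numpy as np

def critical_attack_size_uniform(a, b, alpha, n_grid=100_000):
    """p* = 1 - E[L] / max_x { P[L > x] * (alpha * x + E[L | L > x]) }
    for loads uniform on [a, b]. For x below the support the objective equals
    alpha * x + E[L], which is increasing, so scanning x in [a, b) suffices."""
    mean_load = 0.5 * (a + b)
    x = np.linspace(a, b, n_grid, endpoint=False)
    survival = (b - x) / (b - a)      # P[L > x] for the uniform law
    cond_mean = 0.5 * (x + b)         # E[L | L > x] for the uniform law
    g = survival * (alpha * x + cond_mean)
    return 1.0 - mean_load / g.max()
```

For loads uniform on [0.5, 1.5] with α = 0.5 this gives p* = 0.2, below the homogeneous-load bound α/(1 + α) = 1/3, consistent with finding (v).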

  7. Robust Forecasting for Energy Efficiency of Wireless Multimedia Sensor Networks.

    Science.gov (United States)

    Wang, Xue; Ma, Jun-Jie; Ding, Liang; Bi, Dao-Wei

    2007-11-15

An important criterion for a wireless sensor network is its energy efficiency in specified applications. In this wireless multimedia sensor network, the observations are derived from acoustic sensors. Focused on the energy problem of target tracking, this paper proposes a robust forecasting method to enhance the energy efficiency of wireless multimedia sensor networks. Target motion information is acquired by acoustic sensor nodes while a distributed network with honeycomb configuration is constructed. Thereby, target localization is performed by multiple sensor nodes collaboratively through acoustic signal processing. A novel method, combining an autoregressive moving average (ARMA) model and radial basis function networks (RBFNs), is exploited to perform robust target position forecasting during target tracking. Then sensor nodes around the target are awakened according to the forecasted target position. With committee decision of sensor nodes, target localization is performed in a distributed manner and the uncertainty of detection is reduced. Moreover, a sensor-to-observer routing approach for the honeycomb mesh network is investigated to solve the data reporting problem considering the residual energy of sensor nodes. Target localization and forecasting are implemented in experiments. Meanwhile, sensor node awakening and dynamic routing are evaluated. Experimental results verify that the energy efficiency of the wireless multimedia sensor network is enhanced by the proposed target tracking method.

  8. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
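The record names the modified likelihood only abstractly. One common form of such an outlier-robust likelihood (an assumption here, not necessarily the authors' exact expression) mixes the class posterior with a uniform outlier component weighted by the outlier probability ε, which bounds the loss contribution of any single gross outlier:

```python
import numpy as np

def robust_nll(probs, labels, eps):
    """Outlier-robust negative log-likelihood: each class posterior is mixed
    with a uniform outlier component eps / C (C = number of classes), so one
    mislabeled or outlying example cannot drive the loss toward infinity."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n, C = probs.shape
    p_true = probs[np.arange(n), labels]       # posterior of the given label
    return float(-np.log((1.0 - eps) * p_true + eps / C).mean())
```

On a confidently wrong prediction the standard loss (ε = 0) explodes, while with ε = 0.1 it is capped near -log(ε/C), which is the robustness effect the framework's outlier probability provides.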

  9. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  10. Robust Optimization of Fourth Party Logistics Network Design under Disruptions

    Directory of Open Access Journals (Sweden)

    Jia Li

    2015-01-01

Full Text Available The Fourth Party Logistics (4PL) network faces disruptions of various sorts in a dynamic and complex environment. In order to explore the robustness of the network, 4PL network design with consideration of random disruptions is studied. The purpose of the research is to construct a 4PL network that can provide satisfactory service to customers at a lower cost when disruptions strike. Based on the definition of β-robustness, a robust optimization model of 4PL network design under disruptions is established. Given the NP-hard nature of the problem, an artificial fish swarm algorithm (AFSA) and a genetic algorithm (GA) are developed. The effectiveness of the algorithms is tested and compared on simulation examples. Comparison of the optimal solutions of the 4PL network across robustness levels indicates that the robust optimization model, when applied to 4PL network design, can effectively hedge against disruption risks while keeping costs to a minimum.

  11. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  12. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.

  13. Delivery of TLR7 agonist to monocytes and dendritic cells by DCIR targeted liposomes induces robust production of anti-cancer cytokines

    DEFF Research Database (Denmark)

    Klauber, Thomas Christopher Bogh; Laursen, Janne Marie; Zucker, Daniel

    2017-01-01

    Tumor immune escape is today recognized as an important cancer hallmark and is therefore a major focus area in cancer therapy. Monocytes and dendritic cells (DCs), which are central to creating a robust anti-tumor immune response and establishing an anti-tumorigenic microenvironment, are directly...... targeted by the tumor escape mechanisms to develop immunosuppressive phenotypes. Providing activated monocytes and DCs to the tumor tissue is therefore an attractive way to break the tumor-derived immune suppression and reinstate cancer immune surveillance. To activate monocytes and DCs with high...... as their immune activating potential in blood-derived monocytes, myeloid DCs (mDCs), and plasmacytoid DCs (pDCs). Monocytes and mDCs were targeted with high specificity over lymphocytes, and exhibited potent TLR7-specific secretion of the anti-cancer cytokines IL-12p70, IFN-α 2a, and IFN-γ. This delivery system...

  14. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson regression and the log-binomial regression. Of the two methods, the log-binomial regression is believed to yield more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence supporting the robustness of robust Poisson models relative to log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of these limitations when choosing appropriate models to estimate relative risks or risk ratios.
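
    A minimal sketch of the "robust Poisson" approach for a binary outcome, assuming the simple log-linear risk case the record describes: fit a Poisson GLM (log link) by Newton/IRLS and pair it with a sandwich (robust) variance estimate, since the Poisson variance is misspecified for binary data. The simulated exposure/outcome data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Binary outcome with log-linear risk: P(y=1|x) = exp(b0 + b1*x).
n = 20000
x = rng.integers(0, 2, size=n).astype(float)    # binary exposure
b0, b1 = -2.0, 0.5                              # true log relative risk = 0.5
y = (rng.random(n) < np.exp(b0 + b1 * x)).astype(float)
X = np.column_stack([np.ones(n), x])

# Poisson GLM via Newton iterations on the log-link score equations.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    H = X.T @ (mu[:, None] * X)                 # Fisher information
    beta += np.linalg.solve(H, X.T @ (y - mu))  # Newton step

# Sandwich ("robust") covariance: bread @ meat @ bread.
mu = np.exp(X @ beta)
bread = np.linalg.inv(X.T @ (mu[:, None] * X))
meat = X.T @ (((y - mu) ** 2)[:, None] * X)
robust_cov = bread @ meat @ bread

relative_risk = np.exp(beta[1])                 # estimated risk ratio
```

    exp(beta[1]) then estimates the relative risk directly, which is the appeal of this model over logistic regression for common outcomes.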

  15. Information theory perspective on network robustness

    International Nuclear Information System (INIS)

    Schieber, Tiago A.; Carpi, Laura; Frery, Alejandro C.; Rosso, Osvaldo A.; Pardalos, Panos M.; Ravetti, Martín G.

    2016-01-01

    A crucial challenge in network theory is the study of the robustness of a network facing a sequence of failures. In this work, we propose a dynamical definition of network robustness based on Information Theory that considers measurements of the structural changes caused by failures of the network's components. Failures are defined here as a temporal process, given as a sequence. Robustness is then evaluated by measuring dissimilarities between topologies after each time step of the sequence, providing dynamical information about the topological damage. We thoroughly analyze the efficiency of the method in capturing small perturbations by considering different probability distributions on networks. In particular, we find that distributions based on distances are more consistent in capturing network structural deviations, as they better reflect the consequences of the failures. Theoretical examples and real networks are used to study the performance of this methodology. - Highlights: • A novel methodology to measure the robustness of a network to component failures or targeted attacks is proposed. • The use of the network's distance PDF allows a precise analysis. • The method provides a dynamic robustness profile showing the response of the topology to each failure event. • The measure is capable of detecting the network's critical elements.
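
    The core step, measuring a dissimilarity between probability distributions of the topology before and after a failure, can be sketched as follows. The paper favors distance-based distributions; as a simplified stand-in, this sketch compares degree distributions via the Jensen-Shannon divergence after removing a node from a star graph:

```python
import numpy as np

def degree_distribution(adj, support):
    """Empirical probability distribution of node degrees on 0..support-1."""
    degrees = adj.sum(axis=1).astype(int)
    counts = np.bincount(degrees, minlength=support)
    return counts / counts.sum()

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2), bounded in [0, 1]."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def remove_node(adj, i):
    keep = [k for k in range(adj.shape[0]) if k != i]
    return adj[np.ix_(keep, keep)]

# Star graph on 5 nodes: node 0 is the hub.
star = np.zeros((5, 5), dtype=int)
star[0, 1:] = 1
star[1:, 0] = 1

p_before = degree_distribution(star, 5)
d_hub = js_divergence(p_before, degree_distribution(remove_node(star, 0), 5))
d_leaf = js_divergence(p_before, degree_distribution(remove_node(star, 1), 5))
```

    Removing the hub changes the distribution far more than removing a leaf, so the dissimilarity correctly flags the hub as the critical element.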

  16. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    Science.gov (United States)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.

  17. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR...

  18. Robust visual tracking via multiscale deep sparse networks

    Science.gov (United States)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features, and has had significant success in solving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process and exploits robust and powerful features effectively through online training on limited labeled data only. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  19. Comparison of Extremum-Seeking Control Techniques for Maximum Power Point Tracking in Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chen-Han Wu

    2011-12-01

    Full Text Available Due to Japan’s recent nuclear crisis and petroleum price hikes, the search for renewable energy sources has become an issue of immediate concern. A promising candidate attracting much global attention is solar energy, as it is green and inexhaustible. A maximum power point tracking (MPPT) controller is employed so that the output power provided by a photovoltaic (PV) system is boosted to its maximum level. However, in the context of abrupt changes in irradiance, conventional MPPT controller approaches suffer from insufficient robustness against ambient variation, inferior transient response, and a loss of output power as a consequence of the long duration required by tracking procedures. Accordingly, in this work maximum power point tracking is carried out successfully using a sliding mode extremum-seeking control (SMESC) method, and the tracking performances of three controllers are compared by simulation: an extremum-seeking controller, a sinusoidal extremum-seeking controller, and a sliding mode extremum-seeking controller. Being able to track the maximum power point promptly after an abrupt change in irradiance, the SMESC approach is shown by simulations to be superior in terms of system dynamic and steady-state responses, and excellent robustness along with system stability is demonstrated as well.
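
    To make the MPPT task concrete: the controller climbs the power-voltage curve to its peak. The sketch below uses the classic perturb-and-observe hill-climbing baseline (simpler than the extremum-seeking controllers compared in this record) on an assumed toy concave P-V curve with its maximum at 17 V:

```python
def pv_power(v):
    """Toy concave PV power curve with its maximum power point at v = 17 V.
    (Illustrative stand-in for a real module's P-V characteristic.)"""
    return 100.0 - (v - 17.0) ** 2

def perturb_and_observe(v0=10.0, step=0.2, iterations=200):
    """Classic P&O hill climbing: keep stepping in the direction that
    increased power; reverse direction on a power drop."""
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

    P&O ends up oscillating within one step of the maximum power point; the extremum-seeking methods in this record aim for faster, smoother convergence under abrupt irradiance changes.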

  20. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Vogt, S; Kleinszig, G; Lo, S F; Wolinsky, J P; Gokaslan, Z L; Aygun, N

    2015-01-01

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  1. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    Energy Technology Data Exchange (ETDEWEB)

    De Silva, T; Ketcha, M; Siewerdsen, J H [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD (United States); Uneri, A; Reaungamornrat, S [Department of Computer Science, Johns Hopkins University, Baltimore, MD (United States); Vogt, S; Kleinszig, G [Siemens Healthcare XP Division, Erlangen, DE (Germany); Lo, S F; Wolinsky, J P; Gokaslan, Z L [Department of Neurosurgery, The Johns Hopkins Hospital, Baltimore, MD (United States); Aygun, N [Department of Radiology and Radiological Sciences, The Johns Hopkins Hospital, Baltimore, MD (United States)

    2015-06-15

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  2. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  3. Robustness of climate metrics under climate policy ambiguity

    International Nuclear Information System (INIS)

    Ekholm, Tommi; Lindroos, Tomi J.; Savolainen, Ilkka

    2013-01-01

    Highlights: • We assess the economic impacts of using different climate metrics. • The setting is cost-efficient scenarios for three interpretations of the 2 °C target. • With each target setting, the optimal metric is different. • Therefore policy ambiguity prevents the selection of an optimal metric. • Robust metric values that perform well with multiple policy targets however exist. -- Abstract: A wide array of alternatives has been proposed as the common metrics with which to compare the climate impacts of different emission types. Different physical and economic metrics and their parameterizations give diverse weights between e.g. CH₄ and CO₂, and fixing the metric from one perspective makes it sub-optimal from another. As the aims of global climate policy involve some degree of ambiguity, it is not possible to determine a metric that would be optimal and consistent with all policy aims. This paper evaluates the cost implications of using predetermined metrics in cost-efficient mitigation scenarios. Three formulations of the 2 °C target, including both deterministic and stochastic approaches, shared a wide range of metric values for CH₄ with which the mitigation costs are only slightly above the cost-optimal levels. Therefore, although ambiguity in current policy might prevent us from selecting an optimal metric, it can be possible to select robust metric values that perform well with multiple policy targets

  4. SU-F-T-205: Effectiveness of Robust Treatment Planning to Account for Inter- Fractional Variation in Intensity Modulated Proton Therapy for Head Neck Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Li, X; Zhang, J; Qin, A; Liang, J; Zhou, J; Yan, D; Chen, P; Krauss, D; Ding, X [Beaumont Health System, Royal Oak, Michigan (United States)

    2016-06-15

    Purpose: To evaluate the potential benefits of robust optimization in intensity modulated proton therapy (IMPT) treatment planning to account for inter-fractional variation in Head and Neck Cancer (HNC). Methods: One patient with bilateral HNC previously treated at our institution was used in this study. Ten daily CBCTs were selected. The CT numbers of the CBCTs were corrected by mapping the CT numbers from the simulation CT via Deformable Image Registration. The planning target volumes (PTVs) were defined by a 3 mm expansion from the clinical target volumes (CTVs). The prescription was 70 Gy and 54 Gy to CTV1 and CTV2 for the robust optimized (RO) plans, and to PTV1 and PTV2 for the conventionally optimized (CO) plans, respectively. Both techniques were generated in RayStation with the same beam angles: two anterior oblique and two posterior oblique angles. Similar dose constraints were used to achieve 99% of CTV1 receiving 100% of the prescription dose while keeping hotspots below 110% of the prescription. To evaluate the dosimetric result through the course of treatment, the contours were deformed from the simulation CT to the daily CBCTs, then modified and approved by a radiation oncologist. The initial plan on the simulation CT was recalculated on the daily CBCTs following bony alignment. The target coverage was evaluated using the daily doses and the cumulative dose. Results: Eight of 10 daily deliveries using the RO plan achieved at least 95% of the prescription dose to CTV1 and CTV2 while keeping the maximum hotspot below 112% of the prescription, compared with only one of 10 for the CO plan meeting the same standards. For the cumulative doses, the target coverage of the RO and CO plans was quite similar, due to the compensation of cold and hot spots. Conclusion: Robust optimization can be effectively applied to compensate for target dose deficits caused by inter-fractional target geometric variation in IMPT treatment planning.

  5. Using the Nova target chamber for high-yield targets

    International Nuclear Information System (INIS)

    Pitts, J.H.

    1987-01-01

    The existing 2.2-m-radius Nova aluminum target chamber, coated and lined with boron-seeded carbon shields, is proposed for use with 1000-MJ-yield targets in the next laser facility. The laser beam and diagnostic holes in the target chamber are left open and the desired 10⁻² Torr vacuum is maintained both inside and outside the target chamber; a larger target chamber room is the vacuum barrier to the atmosphere. The hole area available is three times that necessary to maintain a maximum fluence below 12 J/cm² on optics placed at a radius of 10 m. Maximum stress in the target chamber wall is 73 MPa, which complies with the intent of the ASME Pressure Vessel Code. However, shock waves passing through the inner carbon shield could cause it to comminute. We propose tests and analyses to ensure that the inner carbon shield survives the environment. 13 refs

  6. Robust Control Design via Linear Programming

    Science.gov (United States)

    Keel, L. H.; Bhattacharyya, S. P.

    1998-01-01

    This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set the solvability conditions with a fixed order controller are relaxed and a solution is more easily enabled. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.

  7. Food supply chain network robustness : a literature review and research agenda

    NARCIS (Netherlands)

    Vlajic, J.V.; Hendrix, E.M.T.; Vorst, van der J.G.A.J.

    2008-01-01

    Today’s business environment is characterized by challenges of strong global competition where companies tend to achieve leanness and maximum responsiveness. However, lean supply chain networks (SCNs) become more vulnerable to all kinds of disruptions. Food SCNs have to become robust, i.e. they

  8. Influence of the Target Vessel on the Location and Area of Maximum Skin Dose during Percutaneous Coronary Intervention

    International Nuclear Information System (INIS)

    Chida, K.; Fuda, K.; Kagaya, Y.; Saito, H.; Takai, Y.; Kohzuki, M.; Takahashi, S.; Yamada, S.; Zuguchi, M.

    2007-01-01

    Background: A number of cases involving radiation-associated patient skin injury attributable to percutaneous coronary intervention (PCI) have been reported. Knowledge of the location and area of the patient's maximum skin dose (MSD) in PCI is necessary to reduce the risk of skin injury. Purpose: To determine the location and area of the MSD in PCI, and separately analyze the effects of different target vessels. Material and Methods: 197 consecutive PCI procedures were studied, and the location and area of the MSD were calculated by a skin-dose mapping software program: Caregraph. The target vessels of the PCI procedures were divided into four groups based on the American Heart Association (AHA) classification. Results: The sites of the MSD for AHA no.1-3, AHA no.4, and AHA no.11-15 were located mainly on the right back skin, the lower right or center back skin, and the upper back skin areas, respectively, whereas the MSD sites for the AHA no. 5-10 PCI were widely spread. The MSD area for the AHA no. 4 PCI was larger than that for the AHA no. 11-15 PCI (P<0.0001). Conclusion: Although the radiation associated with PCI can be widely spread and variable, we observed a tendency regarding the location and area of the MSD when we separately analyzed the data for different target vessels. We recommend the use of a smaller radiation field size and the elimination of overlapping fields during PCI

  9. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and computationally fast.

  10. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  11. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  12. Studies on the robustness of shock-ignited laser fusion targets

    International Nuclear Information System (INIS)

    Atzeni, S; Schiavi, A; Marocchino, A

    2011-01-01

    Several aspects of the sensitivity of a shock-ignited inertial fusion target to variation of parameters and errors or imperfections are studied by means of one-dimensional and two-dimensional numerical simulations. The study refers to a simple all-DT target, initially proposed for fast ignition (Atzeni et al 2007 Phys. Plasmas 7 052702) and subsequently shown to be also suitable for shock ignition (Ribeyre et al 2009 Plasma Phys. Control. Fusion 51 015013). It is shown that the growth of both Richtmyer-Meshkov and Rayleigh-Taylor instability (RTI) at the ablation front is reduced by laser pulses with an adiabat-shaping picket. An operating window for the parameters of the ignition laser spike is described; the threshold power depends on beam focusing and synchronization with the compression pulse. The time window for spike launch widens with beam power, while the minimum spike energy is independent of spike power. A large parametric scan indicates good tolerance (at the level of a few percent) to target mass and laser power errors. 2D simulations indicate that the strong igniting shock wave plays an important role in reducing deceleration-phase RTI growth. Instead, the high hot-spot convergence ratio (ratio of initial target radius to hot-spot radius at ignition) makes ignition highly sensitive to target mispositioning.

  13. A scoring mechanism for the rank aggregation of network robustness

    Science.gov (United States)

    Yazdani, Alireza; Dueñas-Osorio, Leonardo; Li, Qilin

    2013-10-01

    To date, a number of metrics have been proposed to quantify the inherent robustness of network topology against failures. However, any single metric usually offers only a limited view of network vulnerability to different types of random failures and targeted attacks. When applied to certain network configurations, different metrics rank network topology robustness in different orders, which is rather inconsistent, and no single metric fully characterizes network robustness against different modes of failure. To overcome such inconsistency, this work proposes a multi-metric approach as the basis for evaluating an aggregate ranking of network topology robustness. It is based on the simultaneous utilization of a minimal set of distinct robustness metrics that are standardized so as to allow a direct comparison of vulnerability across networks with different sizes and configurations, leading to an initial scoring of inherent topology robustness. Subsequently, based on the initial scoring, a rank aggregation method is employed to allocate an overall robustness ranking to each network topology. A discussion is presented in support of the multi-metric approach and its application to more realistically assess and rank network topology robustness.
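
    The standardize-then-aggregate pipeline can be sketched in a few lines. The three spectral metrics and the mean z-score ("Borda-like") aggregation below are an illustrative choice, not the paper's specific metric set or aggregation rule:

```python
import numpy as np

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

def metrics(adj):
    """A minimal set of topology-robustness indicators (illustrative choice):
    algebraic connectivity, average degree, and adjacency spectral gap."""
    n = adj.shape[0]
    eig_l = np.sort(np.linalg.eigvalsh(laplacian(adj)))
    eig_a = np.sort(np.linalg.eigvalsh(adj.astype(float)))
    return np.array([eig_l[1],                  # algebraic connectivity
                     adj.sum() / n,             # average degree
                     eig_a[-1] - eig_a[-2]])    # spectral gap

# Toy networks on 4 nodes: complete graph vs. path graph.
complete = np.ones((4, 4)) - np.eye(4)
path = np.zeros((4, 4))
for i in range(3):
    path[i, i + 1] = path[i + 1, i] = 1

M = np.vstack([metrics(complete), metrics(path)])
# Standardize each metric column, then aggregate by mean z-score.
Z = (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-12)
scores = Z.mean(axis=1)
```

    Standardizing each metric first is what makes scores comparable across networks of different size and density before the aggregate ranking is formed.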

  14. Robust and efficient walking with spring-like legs

    Energy Technology Data Exchange (ETDEWEB)

    Rummel, J; Blum, Y; Seyfarth, A, E-mail: juergen.rummel@uni-jena.d, E-mail: andre.seyfarth@uni-jena.d [Lauflabor Locomotion Laboratory, University of Jena, Dornburger Strasse 23, 07743 Jena (Germany)

    2010-12-15

    The development of bipedal walking robots is inspired by human walking. A way of implementing walking could be performed by mimicking human leg dynamics. A fundamental model, representing human leg dynamics during walking and running, is the bipedal spring-mass model, which is the basis for this paper. The aim of this study is the identification of leg parameters leading to a compromise between robustness and energy efficiency in walking. It is found that, compared to asymmetric walking, symmetric walking with flatter angles of attack reveals such a compromise. With increasing leg stiffness, energy efficiency increases continuously. However, robustness is maximal at moderate leg stiffness and decreases slightly with increasing stiffness. Hence, an adjustable leg compliance would be preferred, which is adaptable to the environment. If the ground is even, a high leg stiffness leads to energy-efficient walking. However, if external perturbations are expected, e.g. when the robot walks on uneven terrain, the leg should be softer and the angle of attack flatter. In the case of underactuated robots with constant physical springs, the leg stiffness should be larger than k-tilde = 14 in order to use the most robust gait. Soft legs, however, lack both robustness and efficiency.

  15. Robust and efficient walking with spring-like legs

    International Nuclear Information System (INIS)

    Rummel, J; Blum, Y; Seyfarth, A

    2010-01-01

    The development of bipedal walking robots is inspired by human walking. A way of implementing walking could be performed by mimicking human leg dynamics. A fundamental model, representing human leg dynamics during walking and running, is the bipedal spring-mass model, which is the basis for this paper. The aim of this study is the identification of leg parameters leading to a compromise between robustness and energy efficiency in walking. It is found that, compared to asymmetric walking, symmetric walking with flatter angles of attack reveals such a compromise. With increasing leg stiffness, energy efficiency increases continuously. However, robustness is maximal at moderate leg stiffness and decreases slightly with increasing stiffness. Hence, an adjustable leg compliance would be preferred, which is adaptable to the environment. If the ground is even, a high leg stiffness leads to energy-efficient walking. However, if external perturbations are expected, e.g. when the robot walks on uneven terrain, the leg should be softer and the angle of attack flatter. In the case of underactuated robots with constant physical springs, the leg stiffness should be larger than k-tilde = 14 in order to use the most robust gait. Soft legs, however, lack both robustness and efficiency.

  16. Impact of marine reserve on maximum sustainable yield in a traditional prey-predator system

    Science.gov (United States)

    Paul, Prosenjit; Kar, T. K.; Ghorai, Abhijit

    2018-01-01

    Multispecies fisheries management requires managers to consider the impact of fishing activities on several species, as fishing affects both targeted and non-targeted species, directly or indirectly, in several ways. The intended goal of traditional fisheries management is to achieve maximum sustainable yield (MSY) from the targeted species, which on many occasions affects the targeted species as well as the entire ecosystem. Marine reserves are often acclaimed as a marine ecosystem management tool, yet few attempts have been made to generalize the ecological effects of a marine reserve on MSY policy. We examine here how MSY and population levels in a prey-predator system are affected by low, medium, and high reserve sizes under different possible scenarios. Our simulations show that for a low reserve area, the value of MSY for prey exploitation is maximum when both prey and predator species have fast movement rates. For a medium reserve size, our analysis revealed that the maximum value of MSY for prey exploitation is obtained when the prey population has a fast movement rate and the predator population a slow one. For a high reserve area, the maximum value of MSY for prey exploitation is very low compared to its value for low and medium reserves. On the other hand, for low and medium reserve areas, MSY for predator exploitation is maximum when both species have fast movement rates.
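    The MSY concept discussed above can be illustrated with the textbook single-species case (a background sketch only; the paper's spatial prey-predator reserve model is more elaborate and is not reproduced here):

```python
def logistic_msy(r, K):
    """Maximum sustainable yield for single-species logistic growth,
    dN/dt = r*N*(1 - N/K): harvesting is maximized at N = K/2, giving
    MSY = r*K/4. Background illustration only, not the paper's model."""
    return r * K / 4.0

# e.g. intrinsic growth rate 0.8/yr, carrying capacity 1000 tonnes
print(logistic_msy(0.8, 1000.0))  # -> 200.0
```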

  17. Robust estimation of the correlation matrix of longitudinal data

    KAUST Repository

    Maadooliat, Mehdi

    2011-09-23

    We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ=DLL⊤D where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L,D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form LD2L⊤ using simulations and a real dataset. © 2011 Springer Science+Business Media, LLC.
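    The variance/correlation separation underlying the decomposition above can be sketched in a few lines of numpy (here C is the ordinary lower-triangular Cholesky factor of the correlation matrix; the paper further normalizes to a unit lower-triangular L and models the entries with covariates, which is not reproduced here; the function name is illustrative):

```python
import numpy as np

def separate_scale_and_correlation(sigma):
    """Split a covariance matrix into Sigma = D R D, with D the diagonal
    matrix of standard deviations and R the correlation matrix, then
    factor R = C C^T with C lower-triangular."""
    d = np.sqrt(np.diag(sigma))      # standard deviations
    D = np.diag(d)
    R = sigma / np.outer(d, d)       # correlation matrix (unit diagonal)
    C = np.linalg.cholesky(R)        # lower-triangular Cholesky factor
    return D, C

# the covariance is recovered exactly from the factors
sigma = np.array([[4.0, 1.2], [1.2, 1.0]])
D, C = separate_scale_and_correlation(sigma)
assert np.allclose(D @ C @ C.T @ D, sigma)
```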

  18. Robustness Recipes for Minimax Robust Optimization in Intensity Modulated Proton Therapy for Oropharyngeal Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Voort, Sebastian van der [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Water, Steven van de [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Perkó, Zoltán [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Heijmen, Ben [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Lathouwers, Danny [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Hoogeman, Mischa, E-mail: m.hoogeman@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands)

    2016-05-01

    Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given Gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can
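    The fitted recipe polynomials quoted in the Results can be wrapped directly in code. The coefficients below are copied verbatim from the abstract; the function and argument names are illustrative:

```python
def setup_robustness_mm(sys_sd, rand_sd, bilateral=False):
    """Setup-robustness setting (mm) from the second-order polynomials
    quoted in the abstract. sys_sd and rand_sd are the systematic (Sigma)
    and random (sigma) setup-error standard deviations in mm."""
    if bilateral:
        return (-0.07 * sys_sd**2 + 0.19 * rand_sd**2
                + 1.34 * sys_sd - 0.07 * rand_sd + 1.17)
    return (-0.15 * sys_sd**2 + 0.27 * rand_sd**2
            + 1.85 * sys_sd - 0.06 * rand_sd + 1.22)

# the validation setting used in the abstract: Sigma = sigma = 1.5 mm
print(setup_robustness_mm(1.5, 1.5))                  # unilateral recipe
print(setup_robustness_mm(1.5, 1.5, bilateral=True))  # bilateral recipe
```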

  19. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    International Nuclear Information System (INIS)

    Bender, Edward T.

    2012-01-01

    Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk are predicted to potentially result in inferior disease control when compared the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  20. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    Science.gov (United States)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into lower-dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, is designed. To ensure robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  1. Engineering Robustness of Microbial Cell Factories.

    Science.gov (United States)

    Gong, Zhiwei; Nielsen, Jens; Zhou, Yongjin J

    2017-10-01

    Metabolic engineering and synthetic biology offer great prospects for developing microbial cell factories capable of converting renewable feedstocks into fuels, chemicals, food ingredients, and pharmaceuticals. However, prohibitively low production rates and mass concentrations remain the major hurdles in industrial processes even when the biosynthetic pathways are comprehensively optimized. These limitations are caused by a variety of factors detrimental to host cell survival, such as harsh industrial conditions, fermentation inhibitors from biomass hydrolysates, and toxic compounds, including metabolic intermediates and valuable target products. Therefore, engineering microbes with robust phenotypes is essential for achieving higher yield and productivity. In this review, recent advances in engineering the robustness and tolerance of cell factories are described, and novel strategies with great potential to enhance the robustness of cell factories are briefly introduced, including metabolic pathway balancing, transporter engineering, and adaptive laboratory evolution. This review also highlights the integration of advanced systems and synthetic biology principles toward engineering the harmony of overall cell function, rather than specific pathways or enzymes alone. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Mutational robustness of gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Aalt D J van Dijk

    Mutational robustness of gene regulatory networks refers to their ability to generate constant biological output upon mutations that change network structure. Such networks contain regulatory interactions (transcription factor-target gene interactions) but often also protein-protein interactions between transcription factors. Using computational modeling, we study factors that influence robustness and we infer several network properties governing it. These include the type of mutation, i.e. whether a regulatory interaction or a protein-protein interaction is mutated, and, in the case of mutation of a regulatory interaction, the sign of the interaction (activating vs. repressive). In addition, we analyze the effect of combinations of mutations and we compare networks containing monomeric with those containing dimeric transcription factors. Our results are consistent with available data on biological networks, for example based on evolutionary conservation of network features. As a novel and remarkable property, we predict that networks are more robust against mutations in monomer than in dimer transcription factors, a prediction for which analysis of conservation of DNA binding residues in monomeric vs. dimeric transcription factors provides indirect evidence.

  3. Use of Maximum Intensity Projections (MIPs) for target outlining in 4DCT radiotherapy planning.

    Science.gov (United States)

    Muirhead, Rebecca; McNee, Stuart G; Featherstone, Carrie; Moore, Karen; Muscat, Sarah

    2008-12-01

    Four-dimensional computed tomography (4DCT) is currently being introduced to radiotherapy centers worldwide for use in radical radiotherapy planning for non-small cell lung cancer (NSCLC). A significant drawback is the time required to delineate 10 individual CT scans for each patient. Every department will hence ask whether the single Maximum Intensity Projection (MIP) scan can be used as an alternative. Although the problems regarding the use of the MIP in node-positive disease have been discussed in the literature, a comprehensive study assessing its use has not been published. We compared an internal target volume (ITV) created using the MIP to an ITV created from the composite volume of 10 clinical target volumes (CTVs) delineated on the 10 phases of the 4DCT. 4DCT data were collected from 14 patients with NSCLC. In each patient, the ITV was delineated on the MIP image (ITV_MIP) and a composite ITV created from the 10 CTVs delineated on each of the 10 scans in the dataset. The structures were compared by assessment of volumes of overlap and exclusion. A median of 19.0% (range, 5.5-35.4%) of the volume of ITV_10phase was not enclosed by the ITV_MIP, demonstrating that use of the MIP could result in under-treatment of disease. In contrast, only a very small amount of the ITV_MIP was not enclosed by the ITV_10phase (median 2.3%; range, 0.4-9.8%), indicating that the ITV_10phase covers almost all of the tumor tissue identified by the MIP. Although there were only two Stage I patients, both demonstrated very similar ITV_10phase and ITV_MIP volumes. These findings suggest that Stage I NSCLC tumors could be outlined on the MIP alone. In Stage II and III tumors, the ITV_10phase would be more reliable. To prevent under-treatment of disease, the MIP image should only be used for delineation in Stage I tumors.
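    The volume-of-overlap comparison described above can be sketched with boolean voxel masks (a toy illustration with assumed mask shapes and an illustrative helper name, not the study's actual software):

```python
import numpy as np

def percent_excluded(itv_a, itv_b):
    """Percentage of the volume of mask itv_a lying outside mask itv_b,
    for two boolean voxel arrays of equal shape."""
    outside = np.logical_and(itv_a, np.logical_not(itv_b))
    return 100.0 * outside.sum() / itv_a.sum()

# toy 2D example: composite 10-phase ITV vs. a MIP-based ITV
itv_10phase = np.zeros((4, 4), dtype=bool); itv_10phase[1:4, 1:4] = True
itv_mip     = np.zeros((4, 4), dtype=bool); itv_mip[0:3, 0:3] = True
print(percent_excluded(itv_10phase, itv_mip))  # ITV_10phase volume missed by the MIP
```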

  4. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study, each simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were better than or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the

  5. Robustness of a Neural Network Model for Power Peak Factor Estimation in Protection Systems

    International Nuclear Information System (INIS)

    Souza, Rose Mary G.P.; Moreira, Joao M.L.

    2006-01-01

    This work presents results of robustness verification of artificial neural network correlations that improve the real-time prediction of the power peak factor for reactor protection systems. The input variables considered in the correlations are those available in the reactor protection systems, namely, the axial power differences obtained from measured ex-core detectors and the position of control rods. The correlations, based on radial basis function (RBF) and multilayer perceptron (MLP) neural networks, estimate the power peak factor, without faulty signals, with average errors of 0.13%, 0.19%, and 0.15%, and a maximum relative error of 2.35%. The robustness verification was performed for three different neural network correlations. The results show that they are robust against signal degradation, producing results with faulty signals with a maximum error of 6.90%. The average error associated with faulty signals for the MLP network is about half that of the RBF network, and the maximum error is about 1% smaller. These results demonstrate that the MLP neural network correlation is more robust than the RBF neural network correlation. The results also show that the input variables contain redundant information. The axial power difference signals compensate for a faulty signal for the position of a given control rod and improve the results by about 10%. The results show that the errors in the power peak factor estimation by these neural network correlations, even in faulty conditions, are smaller than those of current PWR schemes, which may have uncertainties as high as 8%. Considering the maximum relative error of 2.35%, these neural network correlations would allow decreasing the power peak factor safety margin by about 5%. Such a reduction could be used for operating the reactor with a higher power level or with more flexibility. The neural network correlation has to meet requirements of high integrity software that performs safety grade actions.
It is shown that the

  6. Robustness and structure of complex networks

    Science.gov (United States)

    Shao, Shuai

    This dissertation covers the two major parts of my PhD research on statistical physics and complex networks: i) modeling a new type of attack -- localized attack, and investigating the robustness of complex networks under this type of attack; ii) discovering the clustering structure in complex networks and its influence on the robustness of coupled networks. Complex networks appear in every aspect of our daily life and are widely studied in Physics, Mathematics, Biology, and Computer Science. One important property of complex networks is their robustness under attacks, which depends crucially on the nature of the attacks and the structure of the networks themselves. Previous studies have focused on two types of attack: random attack and targeted attack, which, however, are insufficient to describe many real-world damages. Here we propose a new type of attack -- localized attack, and study the robustness of complex networks under this type of attack, both analytically and via simulation. On the other hand, we also study the clustering structure in the network, and its influence on the robustness of a complex network system. In the first part, we propose a theoretical framework to study the robustness of complex networks under localized attack based on percolation theory and the generating function method. We investigate the percolation properties, including the critical threshold of the phase transition p_c and the size of the giant component P∞. We compare localized attack with random attack and find that while random regular (RR) networks are more robust against localized attack, Erdős-Rényi (ER) networks are equally robust under both types of attack. As for scale-free (SF) networks, their robustness depends crucially on the degree exponent λ. The simulation results show perfect agreement with theoretical predictions. We also test our model on two real-world networks: a peer-to-peer computer network and an airline network, and find that the real-world networks
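    As background for the percolation quantities named above (critical threshold, giant component size), a minimal sketch of the standard Erdős-Rényi self-consistency relation solved by fixed-point iteration; this illustrates the generating-function machinery, not the dissertation's localized-attack model:

```python
import math

def er_giant_component(mean_degree, tol=1e-12):
    """Relative size S of the giant component of an Erdos-Renyi network,
    solving the standard relation S = 1 - exp(-<k> S) by fixed-point
    iteration from S = 1. Below the threshold <k> = 1, S -> 0."""
    s = 1.0
    while True:
        s_next = 1.0 - math.exp(-mean_degree * s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next

print(er_giant_component(4.0))   # well above the threshold <k> = 1
print(er_giant_component(0.5))   # below threshold: giant component vanishes
```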

  7. Exploratory Study of 4D versus 3D Robust Optimization in Intensity Modulated Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wei, E-mail: Liu.Wei@mayo.edu [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Schild, Steven E. [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Chang, Joe Y.; Liao, Zhongxing [Department of Radiation Oncology, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chang, Yu-Hui [Division of Health Sciences Research, Mayo Clinic Arizona, Phoenix, Arizona (United States); Wen, Zhifei [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Sahoo, Narayan [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herman, Michael G. [Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, Minnesota (United States); Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States)

    2016-05-01

    Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate the internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate the clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed rank test. Results: 4D robust optimization led to smaller AUC for CTV (14.26 vs 18.61; P=.001), better CTV coverage (Gy [RBE]) (D95% CTV: 60.6 vs 55.2; P=.001), and better CTV homogeneity (D5%-D95% CTV: 10.3 vs 17.7; P=.002) in the face of uncertainties. With the interplay effect considered, 4D robust optimization produced plans with better target coverage (D95% CTV: 64.5 vs 63.8; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions: Our exploratory methodology study showed

  8. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    Science.gov (United States)

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation with respect to different ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. TU-AB-BRB-00: New Methods to Ensure Target Coverage

    International Nuclear Information System (INIS)

    2015-01-01

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand

  10. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
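    The pairwise building block of a compatibility criterion for binary characters can be sketched as follows (the classical four-gamete test; a simplification, since the paper's algorithm also handles ambiguous states and solves a maximum-clique problem over such pairwise checks):

```python
def compatible(char_a, char_b):
    """Two binary characters (alignment columns coded 0/1 over the same
    taxa) are compatible iff they do not exhibit all four state
    combinations (0,0), (0,1), (1,0), (1,1) -- the four-gamete test."""
    return len(set(zip(char_a, char_b))) < 4

# toy columns across 5 taxa
site1 = [0, 0, 1, 1, 1]
site2 = [0, 1, 1, 1, 0]   # all four combinations occur with site1
site3 = [0, 0, 0, 1, 1]   # nested with site1: compatible
print(compatible(site1, site2))  # -> False
print(compatible(site1, site3))  # -> True
```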

  11. An Evolutionary Approach for Robust Layout Synthesis of MEMS

    DEFF Research Database (Denmark)

    Fan, Zhun; Wang, Jiachuan; Goodman, Erik

    2005-01-01

    The paper introduces a robust design method for layout synthesis of MEM resonators subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained optimisation problem after certain... assumptions and treated with a multiobjective genetic algorithm (MOGA), a special type of evolutionary computing approach. A case study based on layout synthesis of a comb-driven MEM resonator shows that the approach proposed in this paper can lead to design results that meet the target performance and are less...

  12. Data-driven quantification of the robustness and sensitivity of cell signaling networks

    International Nuclear Information System (INIS)

    Mukherjee, Sayak; Seok, Sang-Cheol; Vieland, Veronica J; Das, Jayajit

    2013-01-01

    Robustness and sensitivity of responses generated by cell signaling networks has been associated with survival and evolvability of organisms. However, existing methods analyzing robustness and sensitivity of signaling networks ignore the experimentally observed cell-to-cell variations of protein abundances and cell functions or contain ad hoc assumptions. We propose and apply a data-driven maximum entropy based method to quantify robustness and sensitivity of Escherichia coli (E. coli) chemotaxis signaling network. Our analysis correctly rank orders different models of E. coli chemotaxis based on their robustness and suggests that parameters regulating cell signaling are evolutionary selected to vary in individual cells according to their abilities to perturb cell functions. Furthermore, predictions from our approach regarding distribution of protein abundances and properties of chemotactic responses in individual cells based on cell population averaged data are in excellent agreement with their experimental counterparts. Our approach is general and can be used to evaluate robustness as well as generate predictions of single cell properties based on population averaged experimental data in a wide range of cell signaling systems. (paper)

  13. Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact

    Science.gov (United States)

    Cheng, A. F.

    2017-12-01

    The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October, 2022. The kinetic impactor approach is considered to be applicable with warning times of 10 years or more and with hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, leaving possibly large fragments still on collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection from a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target will be disrupted. It turns out this rule of thumb does not work well. A comparison to numerical simulation results shows that a similar rule applies in the gravity limit, for large targets more than 300 m, where the maximum deflection is roughly the escape velocity at momentum enhancement factor β=2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ=1/3), but simulations find a slightly different scaling μ=0.43. In the smaller target size range that kinetic impactors would apply to, the catastrophic disruption limit is strength-controlled. A DART-like impactor won't disrupt any target asteroid down to significantly smaller size than the 50 m below which a hazardous object would not penetrate the atmosphere in any case unless it is unusually strong.
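    The momentum-transfer arithmetic behind the deflection discussion can be sketched as follows (standard textbook relations with assumed toy numbers; the abstract's β scaling and disruption thresholds come from numerical simulations, not from these formulas):

```python
import math

def deflection_speed(beta, m_impactor, v_impactor, m_target):
    """Target speed change from a kinetic impact using the standard
    momentum-enhancement relation: Delta-v = beta * m * v / M."""
    return beta * m_impactor * v_impactor / m_target

def escape_velocity(m_target, radius):
    """Surface escape velocity v_esc = sqrt(2 G M / R)."""
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
    return math.sqrt(2 * G * m_target / radius)

# toy numbers (assumed, not from the abstract): a DART-like 600 kg
# impactor at 6 km/s with beta = 2, hitting a 160 m diameter body
# of assumed mass 2.1e9 kg
dv = deflection_speed(2.0, 600.0, 6000.0, 2.1e9)
v_esc = escape_velocity(2.1e9, 80.0)
print(dv, v_esc)  # deflection (m/s) vs. escape velocity (m/s)
```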

  14. Holistic metrology qualification extension and its application to characterize overlay targets with asymmetric effects

    Science.gov (United States)

    Dos Santos Ferreira, Olavio; Sadat Gousheh, Reza; Visser, Bart; Lie, Kenrick; Teuwen, Rachel; Izikson, Pavel; Grzela, Grzegorz; Mokaberi, Babak; Zhou, Steve; Smith, Justin; Husain, Danish; Mandoy, Ram S.; Olvera, Raul

    2018-03-01

    The ever-increasing need for tighter on-product overlay (OPO), as well as for enhanced accuracy in overlay metrology and methodology, is driving the semiconductor industry's technologists to innovate new approaches to OPO measurements. In High Volume Manufacturing (HVM) fabs, it is often critical to strive for both accuracy and robustness. Robustness, in particular, can be challenging in metrology since overlay targets can be impacted by the proximity of other structures next to the overlay target (asymmetric effects), as well as by symmetric stack changes such as photoresist height variations. Both symmetric and asymmetric contributors have an impact on robustness. Furthermore, tweaking or optimizing wafer processing parameters for maximum yield may have an adverse effect on physical target integrity. As a result, measuring and monitoring physical changes or process abnormalities/artefacts in terms of new Key Performance Indicators (KPIs) is crucial for the end goal of minimizing true in-die overlay of the integrated circuits (ICs). IC manufacturing fabs often relied on CD-SEM in the past to capture true in-die overlay. Due to the destructive and intrusive nature of CD-SEMs on certain materials, it is desirable to characterize asymmetry effects for overlay targets via inline KPIs utilizing YieldStar (YS) metrology tools. These KPIs can also be integrated as part of (μDBO) target evaluation and selection for the final recipe flow. In this publication, the Holistic Metrology Qualification (HMQ) flow was extended to account for process induced (asymmetric) effects such as Grating Imbalance (GI) and Bottom Grating Asymmetry (BGA). Local GI typically contributes to the intrafield OPO whereas BGA typically impacts the interfield OPO, predominantly at the wafer edge. Stack height variations strongly affect overlay metrology accuracy, in particular in the case of a multi-layer Litho-Etch Litho-Etch (LELE) overlay control scheme. Introducing a GI impact on overlay (in nm) KPI check quantifies the

  15. Revisiting the case for intensity targets: Better incentives and less uncertainty for developing countries

    International Nuclear Information System (INIS)

    Marschinski, Robert; Edenhofer, Ottmar

    2010-01-01

    In the debate on post-Kyoto global climate policy, intensity targets, which set a maximum amount of emissions per unit of GDP, figure as a prominent alternative to Kyoto-style absolute emission targets, especially for developing countries. This paper re-examines the case for intensity targets by critically assessing several of their properties, namely (i) reduction of cost-uncertainty, (ii) reduction of 'hot air', (iii) compatibility with international emissions trading, (iv) incentive to decouple carbon emissions and economic output (decarbonization), and (v) use as a substitute for banking/borrowing. Relying on simple analytical models, it is shown that the effect on cost-uncertainty is ambiguous and depends on parameter values, and that the same holds for the risk of 'hot air'; that the intensity target distorts international emissions trading; that despite potential asymmetries in the choice of abatement technology between absolute and intensity targets, the incentive for a lasting transformation of the energy system is not necessarily stronger under the latter; and, finally, that only a well-working intensity target could substitute for banking/borrowing to some extent, but also vice versa. Overall, the results suggest that due to the increased complexity and the potentially only modest benefits of an intensity target, absolute targets remain a robust choice for a cautious policy maker.
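    The mechanics of an intensity target can be illustrated with a toy calculation: because the cap scales with realized GDP, the required abatement co-moves with output, whereas an absolute cap does not. All numbers below are invented for illustration:

```python
# Toy comparison of absolute vs intensity emission targets (invented numbers).

def absolute_cap(cap):
    """Absolute target: the cap is fixed regardless of realized GDP."""
    return cap

def intensity_cap(intensity, gdp):
    """Intensity target: cap = (maximum emissions per unit GDP) * realized GDP."""
    return intensity * gdp

bau_emissions_per_gdp = 0.5   # business-as-usual emissions intensity
target_intensity = 0.4        # pledged maximum emissions per unit GDP

for gdp in (90.0, 100.0, 110.0):   # GDP realizations around a forecast of 100
    cap_i = intensity_cap(target_intensity, gdp)
    bau = bau_emissions_per_gdp * gdp
    print(f"GDP {gdp:>5}: cap {cap_i:.1f}, required abatement {bau - cap_i:.1f}")
```

Under the intensity target the abatement burden stays proportional to output, which is the channel through which cost-uncertainty and 'hot air' effects discussed above arise.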

  16. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. As the sun's illumination changes, due to variations in the angle of incidence of solar radiation and in the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; no mathematical model is required, and the control method is therefore easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
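    The perturbation and observation (hill-climbing) technique described above can be sketched in a few lines: perturb the operating voltage, observe the power, and reverse direction whenever power drops. A minimal illustration, assuming a toy single-peak PV power curve (the coefficients below are invented, not taken from the paper):

```python
# Minimal perturb-and-observe (hill-climbing) MPPT sketch on a toy PV curve.

def pv_power(v):
    """Toy PV power curve with a single maximum near v = 17 V (invented)."""
    return max(0.0, 3.0 * v - 0.088 * v * v)

def perturb_and_observe(v0=10.0, step=0.5, iterations=60):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

The tracker climbs to the peak and then oscillates around it in steps of `step`, which is exactly the behavior that makes plain perturb-and-observe slow under rapidly changing insolation and motivates the fuzzy-logic variant above.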

  17. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    Science.gov (United States)

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  18. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    Science.gov (United States)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated, and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures
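    A few of the estimators and criteria listed above can be sketched briefly. The following is a hedged illustration (not the FORTRAN-77 package itself) of a trimmed mean, the median absolute deviation from the median, and a Grubbs-style maximum Studentized residual, on invented data containing one gross outlier:

```python
# Sketch of three of the robust statistics described above (invented data).
import statistics

def trimmed_mean(xs, frac=0.1):
    """Mean after discarding the lowest and highest frac of the sample."""
    xs = sorted(xs)
    k = int(len(xs) * frac)
    core = xs[k:len(xs) - k] if k else xs
    return sum(core) / len(core)

def mad(xs):
    """Median absolute deviation from the median (a robust scale estimate)."""
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)

def max_studentized_residual(xs):
    """Grubbs-style statistic: largest |x - mean| / standard deviation."""
    mean = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    return max(abs(x - mean) / sd for x in xs)

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]   # one gross outlier
print(trimmed_mean(data, 0.2), mad(data), max_studentized_residual(data))
```

The trimmed mean and MAD barely notice the outlier, while the maximum Studentized residual flags it, mirroring the location/scale versus outlier-assessment split in the package description.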

  19. Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim

    2011-01-01

    A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used

  20. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation of parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
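    The regularization idea named in the third method can be illustrated with a small sketch. Assuming a nearly collinear design matrix (invented for illustration), Tikhonov (ridge) regularization damps the solution's sensitivity to perturbations in the data relative to ordinary least squares:

```python
# Sketch of Tikhonov (ridge) regularization vs ordinary least squares.
import numpy as np

def ridge(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]])  # nearly collinear columns
b = np.array([2.0, 2.001, 1.999])

x_ols = ridge(A, b, 0.0)   # ordinary least squares (no regularization)
x_reg = ridge(A, b, 0.1)   # Tikhonov-regularized solution

# A tiny perturbation of the data shifts the OLS solution far more
# than the regularized one:
b_pert = b + np.array([1e-3, -1e-3, 0.0])
d_ols = np.linalg.norm(ridge(A, b_pert, 0.0) - x_ols)
d_reg = np.linalg.norm(ridge(A, b_pert, 0.1) - x_reg)
print(d_ols, d_reg)
```

Restricting the solution norm is the same mechanism by which the paper's third method trades a little nominal optimality for much lower sensitivity.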

  1. Progress on LMJ targets for ignition

    Energy Technology Data Exchange (ETDEWEB)

    Cherfils-Clerouin, C; Boniface, C; Bonnefille, M; Dattolo, E; Galmiche, D; Gauthier, P; Giorla, J; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Masson-Laborde, P E; Monteil, M C; Poggi, F; Seytor, P; Wagon, F; Willien, J L, E-mail: catherine.cherfils@cea.f [CEA, DAM, DIF, F-91297 Arpajon (France)

    2009-12-15

    Targets designed to produce ignition on the Laser Megajoule (LMJ) are being simulated in order to set specifications for target fabrication. The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets needing reduced laser energy with only a small decrease in robustness have then been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-ball shaped cocktail hohlraum; with these improvements, a target based on the 240-beam A1040 capsule can be included in the 160-beam laser energy-power space. Robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning experiments.

  2. Progress on LMJ targets for ignition

    International Nuclear Information System (INIS)

    Cherfils-Clerouin, C; Boniface, C; Bonnefille, M; Dattolo, E; Galmiche, D; Gauthier, P; Giorla, J; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Masson-Laborde, P E; Monteil, M C; Poggi, F; Seytor, P; Wagon, F; Willien, J L

    2009-01-01

    Targets designed to produce ignition on the Laser Megajoule (LMJ) are being simulated in order to set specifications for target fabrication. The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets needing reduced laser energy with only a small decrease in robustness have then been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-ball shaped cocktail hohlraum; with these improvements, a target based on the 240-beam A1040 capsule can be included in the 160-beam laser energy-power space. Robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning experiments.

  3. Attack robustness and centrality of complex networks.

    Directory of Open Access Journals (Sweden)

    Swami Iyer

    Full Text Available Many complex systems can be described by networks, in which the constituent components are represented by vertices and the connections between the components are represented by edges between the corresponding vertices. A fundamental issue concerning complex networked systems is the robustness of the overall system to the failure of its constituent parts. Since the degree to which a networked system continues to function, as its component parts are degraded, typically depends on the integrity of the underlying network, the question of system robustness can be addressed by analyzing how the network structure changes as vertices are removed. Previous work has considered how the structure of complex networks changes as vertices are removed uniformly at random, in decreasing order of their degree, or in decreasing order of their betweenness centrality. Here we extend these studies by investigating the effect on network structure of targeting vertices for removal based on a wider range of non-local measures of potential importance than simply degree or betweenness. We consider the effect of such targeted vertex removal on model networks with different degree distributions, clustering coefficients and assortativity coefficients, and for a variety of empirical networks.
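    Targeted vertex removal of the kind studied above can be sketched with a toy network. This hedged illustration removes vertices in decreasing degree order and tracks the size of the largest connected component; the hub-and-spoke graph is invented for illustration:

```python
# Sketch of degree-targeted vertex removal on a toy hub-and-spoke network.
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed vertices (BFS)."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# Toy network: hub vertex 0 connected to every other vertex
adj = {0: {1, 2, 3, 4, 5}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}, 5: {0}}

removed = set()
sizes = [largest_component(adj, removed)]
for _ in range(2):
    # target the remaining vertex of highest (residual) degree
    target = max((u for u in adj if u not in removed),
                 key=lambda u: len(adj[u] - removed))
    removed.add(target)
    sizes.append(largest_component(adj, removed))
print(sizes)
```

Removing the hub first fragments the network immediately, which is the basic reason degree- and centrality-targeted attacks are so much more damaging than random failure.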

  4. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
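    The EM labeling idea can be illustrated on a simplified case. The sketch below uses an ordinary (spatially invariant) two-component 1-D Gaussian mixture rather than the paper's spatially variant model, and takes labels as the argmax of the posterior membership probabilities; the data are invented:

```python
# Simplified EM sketch: two-component 1-D Gaussian mixture with argmax labels.
import math

def em_two_gaussians(xs, iters=50):
    mu = [min(xs), max(xs)]            # crude initialization at the extremes
    sigma, w = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior membership probabilities for each point
        resp = []
        for x in xs:
            p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)   # floor to avoid collapse
    labels = [0 if r[0] >= r[1] else 1 for r in resp]
    return labels, mu

xs = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]   # two well-separated clusters
labels, mu = em_two_gaussians(xs)
```

The paper's contribution is to make the mixture weights spatially variant and to return the maximum likelihood labels directly; this sketch only shows the shared EM/argmax machinery.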

  5. Fathead minnow steroidogenesis: in silico analyses reveals tradeoffs between nominal target efficacy and robustness to cross-talk

    Directory of Open Access Journals (Sweden)

    Villeneuve Daniel L

    2010-06-01

    elucidation but microarray evidence shows that homeostatic regulation of the steroidogenic network is likely maintained by a mildly sensitive interaction. We hypothesize that effective network elucidation must consider both the sensitivity of the target as well as the target's robustness to biological noise (in this case, to cross-talk when identifying possible points of regulation.

  6. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found the following phenotype robustness criterion for synthetic gene networks: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through the corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  7. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting PVM performance, is excellent for real-time applications, and is a robust analytical method, unlike traditional MPPT algorithms, which rely more on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
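    The optimal duty ratio idea can be sketched for the boost (step-up) converter case. This assumes the textbook ideal-converter relation R_in = R_load * (1 - D)^2, not necessarily the paper's exact derivation; the module values are invented:

```python
# Hedged sketch: duty ratio for load matching with an ideal boost converter.
# For an ideal boost stage the input resistance seen by the PV module is
#   R_in = R_load * (1 - D)^2,
# so matching R_in to the module's optimal internal impedance R_mpp gives
#   D = 1 - sqrt(R_mpp / R_load).
import math

def boost_duty_for_matching(r_mpp, r_load):
    if r_load < r_mpp:
        raise ValueError("an ideal boost stage can only step the load resistance down")
    return 1.0 - math.sqrt(r_mpp / r_load)

# Invented example: module MPP at 17 V / 3 A (R_mpp ~ 5.67 ohm), 50 ohm load
d = boost_duty_for_matching(17.0 / 3.0, 50.0)
print(f"optimal duty ratio D = {d:.3f}")
```

The same matching argument, with the appropriate impedance-reflection formula, yields the optimal D for buck and buck-boost converters.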

  8. Robust control design verification using the modular modeling system

    International Nuclear Information System (INIS)

    Edwards, R.M.; Ben-Abdennour, A.; Lee, K.Y.

    1991-01-01

    The Modular Modeling System (B&W MMS) is being used as a design tool to verify robust controller designs for improving power plant performance while also providing fault-accommodating capabilities. These controllers are designed based on optimal control theory and are thus model-based controllers targeted for implementation in a computer-based digital control environment. The MMS is being successfully used to verify that the controllers are tolerant of uncertainties between the plant model employed in the controller and the actual plant, i.e., that they are robust. The two areas in which the MMS is being used for this purpose are the design of (1) a reactor power controller with improved reactor temperature response, and (2) a multiple input multiple output (MIMO) robust fault-accommodating controller for a deaerator level and pressure control problem

  9. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent to radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup or to the deformation of the target, will also impose a limit on the maximum ion energy.

  10. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    Science.gov (United States)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.

  11. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
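    The outcome-distribution idea described above can be sketched with a toy model. Here a 1-D Gaussian setup-error model and an invented dose profile stand in for the patient-specific motion model and dose metric; the sketch evaluates the metric on many sampled geometry instances and reports the mean and variance of the resulting distribution:

```python
# Toy sketch of outcome-distribution evaluation: a dose metric computed over
# sampled geometry instances, summarized by its mean and variance.
import math
import random

def dose_at(x):
    """Invented 1-D dose profile: flat 60 Gy inside |x| <= 1, falling off outside."""
    return 60.0 * math.exp(-max(0.0, abs(x) - 1.0) ** 2)

def outcome_distribution(setup_sd=0.5, samples=2000, seed=1):
    """Sample setup errors, evaluate the metric, return (mean, variance)."""
    rng = random.Random(seed)
    doses = [dose_at(rng.gauss(0.0, setup_sd)) for _ in range(samples)]
    mean = sum(doses) / samples
    var = sum((d - mean) ** 2 for d in doses) / samples
    return mean, var

mean, var = outcome_distribution()
print(f"expected dose {mean:.2f}, variance {var:.2f}")
```

An optimizer working on (mean, variance) of such distributions, rather than on a single nominal geometry, is the core of the proposed method; the same machinery applies to OAR metrics.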

  12. Robust inference in the negative binomial regression model with an application to falls data.

    Science.gov (United States)

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

    A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well known to be sensitive to model misspecifications, which in intervention studies using the NB regression model can take the form of patients falling much more often than expected. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function to the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease, to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
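    The first approach, bounding the Pearson residuals, can be illustrated in isolation. The sketch below computes NB2 Pearson residuals and applies a Huber-type bound (a simple non-redescending choice, used here only for illustration); it is not the authors' full M-estimator:

```python
# Illustration of bounding NB Pearson residuals with a Huber-type function.
import math

def nb_pearson_residual(y, mu, alpha):
    """Pearson residual under the NB2 variance: Var(Y) = mu + alpha * mu^2."""
    return (y - mu) / math.sqrt(mu + alpha * mu * mu)

def huber_psi(r, c=1.345):
    """Bounded (Huber) function: identity in the core, capped at +-c."""
    return max(-c, min(c, r))

mu, alpha = 3.0, 0.5           # invented fitted mean and overdispersion
for y in (2, 4, 30):           # the last count is a gross outlier
    r = nb_pearson_residual(y, mu, alpha)
    print(y, round(r, 3), round(huber_psi(r), 3))
```

Typical counts pass through the bound almost unchanged, while the outlying count's contribution to the estimating equations is capped at c, which is what limits its influence on the fitted regression parameters.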

  13. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis have offered a new perspective on robust design by merging robust design...

  14. Progress on LMJ targets for ignition

    International Nuclear Information System (INIS)

    Cherfils-Clerouin, C; Boniface, C; Bonnefille, M; Fremerye, P; Galmiche, D; Gauthier, P; Giorla, J; Lambert, F; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Masson-Laborde, P E; Monteil, M C; Poggi, F; Seytor, P; Wagon, F; Willien, J L

    2010-01-01

    Targets designed to produce ignition on the Laser MegaJoule are presented. The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets requiring reduced laser energy, with only a small decrease in robustness, have therefore been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-shaped cocktail hohlraum. 1D and 2D robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning experiments.

  15. Progress on LMJ targets for ignition

    Energy Technology Data Exchange (ETDEWEB)

    Cherfils-Clerouin, C; Boniface, C; Bonnefille, M; Fremerye, P; Galmiche, D; Gauthier, P; Giorla, J; Lambert, F; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Masson-Laborde, P E; Monteil, M C; Poggi, F; Seytor, P; Wagon, F; Willien, J L, E-mail: catherine.cherfils@cea.f [CEA, DAM, DIF, F-91297 Arpajon (France)

    2010-08-01

    Targets designed to produce ignition on the Laser MegaJoule are presented. The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 160 laser beams, delivering up to 1.4 MJ and 380 TW. New targets requiring reduced laser energy, with only a small decrease in robustness, have therefore been designed for this purpose. Working specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, has led to the design of a rugby-shaped cocktail hohlraum. 1D and 2D robustness evaluations of these different targets shed light on critical points for ignition, which can be traded off by tightening some specifications or by preliminary experimental and numerical tuning experiments.

  16. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    Science.gov (United States)

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing the radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large

  17. EnTracked: Energy-Efficient Robust Position Tracking for Mobile Devices

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun; Jensen, Jakob Langdal; Godsk, Torben

    2009-01-01

    conditions and mobility, schedules position updates to both minimize energy consumption and optimize robustness. The realized system tracks pedestrian targets equipped with GPS-enabled devices. The system is configurable to realize different trade-offs between energy consumption and robustness. We provide...... of the mobile device. Furthermore, tracking has to robustly deliver position updates when faced with changing conditions such as delays due to positioning and communication, and changing positioning accuracy. This work proposes EnTracked --- a system that, based on the estimation and prediction of system...... extensive experimental results by profiling how devices consume power, by emulation on collected data and by validation in several real-world deployments. Results from this profiling show how a device consumes power while tracking its position. Results from the emulation indicate that the system can...

  18. Robustness of the p53 network and biological hackers.

    Science.gov (United States)

    Dartnell, Lewis; Simeonidis, Evangelos; Hubank, Michael; Tsoka, Sophia; Bogle, I David L; Papageorgiou, Lazaros G

    2005-06-06

    The p53 protein interaction network is crucial in regulating the metazoan cell cycle and apoptosis. Here, the robustness of the p53 network is studied by analyzing its degeneration under two modes of attack. Linear Programming is used to calculate average path lengths among proteins and the network diameter as measures of functionality. The p53 network is found to be robust to random loss of nodes, but vulnerable to a targeted attack against its hubs, as a result of its architecture. The significance of the results is considered with respect to mutational knockouts of proteins and the directed attacks mounted by tumour inducing viruses.
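
    The random-versus-targeted attack experiment can be mimicked on a toy hub-dominated graph (a hypothetical stand-in for the p53 network, using plain breadth-first search rather than the paper's Linear Programming formulation):

```python
from collections import deque

def remove_nodes(adj, removed):
    # Return a copy of the adjacency dict with the given nodes deleted
    return {u: [v for v in vs if v not in removed]
            for u, vs in adj.items() if u not in removed}

def avg_path_length(adj):
    # Mean shortest-path length over all connected ordered pairs (BFS from each node)
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs if pairs else float("inf")

# Toy hub-dominated network: node 0 is a hub linked to every other node,
# plus a sparse ring among the periphery (loosely mimicking scale-free wiring).
n = 20
adj = {i: set() for i in range(n)}
for i in range(1, n):
    adj[0].add(i); adj[i].add(0)
for i in range(1, n):
    j = 1 + (i % (n - 1))
    adj[i].add(j); adj[j].add(i)
adj = {u: sorted(vs) for u, vs in adj.items()}

base = avg_path_length(adj)
random_loss = avg_path_length(remove_nodes(adj, {7}))  # a peripheral node
hub_loss = avg_path_length(remove_nodes(adj, {0}))     # the hub
print(base, random_loss, hub_loss)  # hub removal degrades connectivity most
```

Losing the peripheral node barely changes the average path length, while losing the hub lengthens it sharply, which is the architecture-driven vulnerability the abstract describes.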

  19. Robust distributed two-way relay beamforming in cognitive radio networks

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2012-04-01

    In this paper, we present distributed beamformer designs for a cognitive radio network (CRN) consisting of a pair of cognitive (or secondary) transceiver nodes communicating with each other through a set of secondary non-regenerative two-way relays. The secondary network shares the spectrum with a licensed primary user (PU), and operates under a constraint on the maximum interference to the PU, in addition to its own resource and quality of service (QoS) constraints. We propose beamformer designs assuming that the available channel state information (CSI) is imperfect, which reflects realistic scenarios. The performance of the proposed designs is robust to CSI errors. Such robustness is critical in CRNs, given the difficulty in acquiring perfect CSI due to loose cooperation between the PUs and the secondary users (SUs), and the need for strict enforcement of the PU interference limit. We consider a mean-square error (MSE)-constrained beamformer that minimizes the total relay transmit power and an MSE-balancing beamformer with a constraint on the total relay transmit power. We show that the proposed designs can be reformulated as convex optimization problems that can be solved efficiently. Through numerical simulations, we illustrate the improved performance of the proposed robust designs compared to non-robust designs. © 2012 IEEE.

  20. Ensemble Modeling for Robustness Analysis in engineering non-native metabolic pathways.

    Science.gov (United States)

    Lee, Yun; Lafontaine Rivera, Jimmy G; Liao, James C

    2014-09-01

    Metabolic pathways in cells must be sufficiently robust to tolerate fluctuations in expression levels and changes in environmental conditions. Perturbations in expression levels may lead to system failure due to the disappearance of a stable steady state. Increasing evidence has suggested that biological networks have evolved such that they are intrinsically robust in their network structure. In this article, we presented Ensemble Modeling for Robustness Analysis (EMRA), which combines a continuation method with the Ensemble Modeling approach, for investigating the robustness issue of non-native pathways. EMRA investigates a large ensemble of reference models with different parameters, and determines the effects of parameter drifting until a bifurcation point, beyond which a stable steady state disappears and system failure occurs. A pathway is considered to have high bifurcational robustness if the probability of system failure is low in the ensemble. To demonstrate the utility of EMRA, we investigate the bifurcational robustness of two synthetic central metabolic pathways that achieve carbon conservation: non-oxidative glycolysis and reverse glyoxylate cycle. With EMRA, we determined the probability of system failure of each design and demonstrated that alternative designs of these pathways indeed display varying degrees of bifurcational robustness. Furthermore, we demonstrated that target selection for flux improvement should consider the trade-offs between robustness and performance. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
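
    A minimal sketch of the EMRA idea, on an assumed one-reaction toy pathway rather than the glycolysis models studied in the paper: sample an ensemble of kinetic parameters, drift one parameter, and record the fraction of models whose stable steady state disappears:

```python
import numpy as np

rng = np.random.default_rng(0)

def has_stable_steady_state(k_in, vmax, km):
    # Toy pathway dx/dt = k_in - vmax * x / (km + x): a steady state exists iff
    # k_in < vmax, and it is always stable there (the Jacobian at the steady
    # state, -vmax*km/(km + x*)**2, is negative regardless of km).
    return k_in < vmax

# Ensemble of reference models: sample kinetic parameters, then drift the
# influx k_in upward and record where each model loses its steady state.
n_models = 1000
vmax = rng.uniform(1.0, 5.0, n_models)
km = rng.uniform(0.1, 2.0, n_models)

drift = np.linspace(0.5, 5.0, 50)
failure_prob = [float(np.mean([not has_stable_steady_state(k, v, m)
                               for v, m in zip(vmax, km)])) for k in drift]
print(failure_prob[0], failure_prob[-1])  # failure probability rises with drift
```

The probability-of-failure curve over the drift range plays the role of EMRA's bifurcational-robustness measure: a design whose curve stays low under larger drifts is the more robust one.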

  1. A Probabilistic Approach for Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    of Structures and a probabilistic modelling of the timber material proposed in the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS). Due to the framework in the Danish Code the timber structure has to be evaluated with respect to the following criteria where at least one shall...... to criteria a) and b) the timber frame structure has one column with a reliability index a bit lower than an assumed target level. By removing three columns one by one, no significant extensive failure of the entire structure or significant parts of it is obtained. Therefore the structure can be considered......A probabilistic based robustness analysis has been performed for a glulam frame structure supporting the roof over the main court in a Norwegian sports centre. The robustness analysis is based on the framework for robustness analysis introduced in the Danish Code of Practice for the Safety

  2. Self-organization principles result in robust control of flexible manufacturing systems

    DEFF Research Database (Denmark)

    Nature shows us in our daily life how robust, flexible and optimal self-organized modular constructions work in complex physical, chemical and biological systems, which successfully adapt to new and unexpected situations. A promising strategy is therefore to use such self-organization and pattern...... problems with several autonomous robots and several targets are considered as model of flexible manufacturing systems. Each manufacturing target has to be served in a given time interval by one and only one robot and the total working costs have to be minimized (or total winnings maximized). A specifically...... constructed dynamical system approach (coupled selection equations) is used which is based on pattern formation principles and results in fault resistant and robust behaviour. An important feature is that this type of control also guarantees feasibility of the assignment solutions. In previous work...

  3. Exploring high-density baryonic matter: Maximum freeze-out density

    Energy Technology Data Exchange (ETDEWEB)

    Randrup, Joergen [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)

    2016-08-15

    The hadronic freeze-out line is calculated in terms of the net baryon density and the energy density instead of the usual T and μ_B. This analysis makes it apparent that the freeze-out density exhibits a maximum as the collision energy is varied. This maximum freeze-out density has μ_B = 400-500 MeV, which is above the critical value, and it is reached for a fixed-target bombarding energy of 20-30 GeV/N, well within the parameters of the proposed NICA collider facility. (orig.)

  4. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    Science.gov (United States)

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed: the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the trade-off between robustness and plan optimality. We applied these methods to one case each of H&N and lung cancer. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD
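
    The per-voxel SD and SVH computation can be sketched with toy dose arrays (the nine scenarios and their perturbation magnitudes below are assumptions, not clinical data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Nine dose distributions as described: nominal, +/- setup shifts along x/y/z,
# and +/- range uncertainty (toy 3D dose arrays in Gy-like units).
nominal = rng.uniform(50, 60, size=(8, 8, 8))
scenarios = [nominal] + [nominal + rng.normal(0, s, nominal.shape)
                         for s in (1, 1, 1, 1, 1, 1, 2, 2)]

# Per-voxel standard deviation across the nine scenarios
per_voxel_sd = np.std(np.stack(scenarios), axis=0)

def sd_volume_histogram(sd, thresholds):
    # SVH: fraction of the structure's voxels whose dose SD meets each threshold
    return [float(np.mean(sd >= t)) for t in thresholds]

thresholds = [0.0, 0.5, 1.0, 2.0]
svh = sd_volume_histogram(per_voxel_sd, thresholds)
print(svh)  # a non-increasing curve; the area under it measures (non-)robustness
```

A perfectly robust plan would have an SVH that drops to zero immediately, so a smaller area under the curve corresponds to higher robustness, matching the abstract's use of the area as a quantitative measure.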

  5. Observer-Based Robust Control for Spacecraft Rendezvous with Thrust Saturation

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

    Full Text Available This paper proposes an observer-based robust guaranteed cost control method for thrust-limited rendezvous in near-circular orbits. Treating the noncircularity of the target orbit as a parametric uncertainty, a linearized motion model derived from the two-body problem is adopted as the controlled plant. Based on this model, a robust guaranteed cost observer-controller is synthesized with a less conservative saturation control law, and a sufficient condition for the existence of this observer-based rendezvous controller is derived. Finally, an illustrative example with immeasurable velocity states is presented to demonstrate the advantages and effectiveness of the control scheme.

  6. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and renders pixel intensities murky. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore be devised to suppress speckle noise efficiently while simultaneously preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and using differently shaped quadrilateral kernels to estimate the noise-free pixel from the neighborhood. The performance of various filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with that of the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
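
    A minimal sliding-window sketch of the underlying Rayleigh maximum-likelihood despeckling step (a square kernel and synthetic speckle for illustration; the paper's quadrilateral kernels and statistical tuning are not reproduced):

```python
import numpy as np

def rayleigh_ml_filter(img, k=3):
    """Sliding-window Rayleigh maximum-likelihood despeckling (square k x k kernel).

    For Rayleigh-distributed speckle, the ML estimate of the scale from a
    neighborhood {x_i} is sigma_hat = sqrt(sum(x_i**2) / (2 N)); each pixel is
    replaced by the Rayleigh mean sigma_hat * sqrt(pi / 2) of its window.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            sigma_hat = np.sqrt(np.sum(win**2) / (2 * win.size))
            out[i, j] = sigma_hat * np.sqrt(np.pi / 2)
    return out

rng = np.random.default_rng(3)
clean = np.full((32, 32), 10.0)
# Rayleigh speckle whose mean equals the clean intensity
speckled = rng.rayleigh(scale=clean * np.sqrt(2 / np.pi))
restored = rayleigh_ml_filter(speckled)
print(speckled.std(), restored.std())  # the filter strongly reduces the spread
```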

  7. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  8. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
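
    The parsimony cost that such searches minimize can be illustrated with Fitch's classic small-parsimony count on a fixed tree (a textbook sketch for one character, not PTree's pattern-based algorithm):

```python
def fitch_score(tree, leaf_states):
    """Fitch's small-parsimony count of state changes for one character.

    tree: nested 2-tuples of leaf names, e.g. (("A", "B"), ("C", "D")).
    leaf_states: dict mapping each leaf name to its observed character state.
    """
    changes = 0
    def fitch(node):
        nonlocal changes
        if isinstance(node, str):
            return {leaf_states[node]}
        left, right = (fitch(child) for child in node)
        common = left & right
        if common:
            return common
        changes += 1          # an empty intersection forces a state change
        return left | right
    fitch(tree)
    return changes

states = {"A": "G", "B": "G", "C": "T", "D": "T"}
print(fitch_score((("A", "B"), ("C", "D")), states))  # 1
print(fitch_score((("A", "C"), ("B", "D")), states))  # 2
```

Summing this score over all alignment columns gives a tree's parsimony cost; a search such as PTree's then looks for the topology minimizing that total.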

  9. Robustness leads close to the edge of chaos in coupled map networks: toward the understanding of biological networks

    International Nuclear Information System (INIS)

    Saito, Nen; Kikuchi, Macoto

    2013-01-01

    Dynamics in biological networks are, in general, robust against several perturbations. We investigate a coupled map network as a model motivated by gene regulatory networks and design systems that are robust against phenotypic perturbations (perturbations in dynamics), as well as systems that are robust against mutation (perturbations in network structure). To achieve such a design, we apply a multicanonical Monte Carlo method. Analysis based on the maximum Lyapunov exponent and parameter sensitivity shows that systems with marginal stability, which are regarded as systems at the edge of chaos, emerge when robustness against network perturbations is required. This emergence of the edge of chaos is a self-organization phenomenon and does not need a fine tuning of parameters. (paper)
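
    The maximum Lyapunov exponent used in this analysis can be estimated for a toy ring of diffusively coupled logistic maps (the parameters r and eps and the ring topology are illustrative assumptions, not the paper's model):

```python
import math

def coupled_logistic_lyapunov(r=3.9, eps=0.05, n=3, steps=5000):
    """Largest Lyapunov exponent of a ring of diffusively coupled logistic maps,
    estimated by evolving a normalized tangent vector along the trajectory."""
    f = lambda u: r * u * (1 - u)
    df = lambda u: r * (1 - 2 * u)
    x = [0.1 + 0.2 * i for i in range(n)]
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(steps):
        # Tangent map: Jacobian of the coupled update evaluated at the current state
        dv = [df(x[i]) * v[i] for i in range(n)]
        v = [(1 - eps) * dv[i] + eps / 2 * (dv[i - 1] + dv[(i + 1) % n])
             for i in range(n)]
        # Diffusive ring coupling: x_i' = (1-eps) f(x_i) + eps/2 (f(x_{i-1}) + f(x_{i+1}))
        fx = [f(xi) for xi in x]
        x = [(1 - eps) * fx[i] + eps / 2 * (fx[i - 1] + fx[(i + 1) % n])
             for i in range(n)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        lam += math.log(norm)
        v = [vi / norm for vi in v]
    return lam / steps

lam = coupled_logistic_lyapunov()
print(lam)  # positive: chaotic for these parameters; near zero would mark the edge of chaos
```

A system "at the edge of chaos" in the paper's sense would have this exponent close to zero; sweeping r toward smaller values drives the estimate negative.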

  10. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Liao, L [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Department of Industrial Engineering, University of Houston, Houston, TX (United States); Yu, J; Zhu, X; Li, H; Zhang, X [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Li, Y [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Varian Medical Systems, Houston, TX (United States); Lim, G [Department of Industrial Engineering, University of Houston, Houston, TX (United States)

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with the conventional non-robust plans. Resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and ΔD95%, ΔD98%, ΔD5%, ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared with the IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases, while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.
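
    The ΔD95% robustness metric can be computed from per-phase CTV dose samples as follows (toy numbers, with D95% taken as the 5th percentile of voxel doses, i.e. the dose covering 95% of the volume):

```python
import numpy as np

rng = np.random.default_rng(2)

def d95(dose_in_ctv):
    # D95%: minimum dose received by the best-covered 95% of the CTV,
    # i.e. the 5th percentile of the voxel doses
    return float(np.percentile(dose_in_ctv, 5))

# Toy CTV voxel doses (Gy) on three breathing phases: nominal, max inhale, max exhale
nominal = rng.normal(60.0, 0.5, 2000)
inhale = nominal - np.abs(rng.normal(0.0, 1.0, 2000))   # motion degrades coverage
exhale = nominal - np.abs(rng.normal(0.0, 0.5, 2000))

# Relative spread of D95% across phases, as a percentage of the nominal value
delta_d95 = 100 * (d95(nominal) - min(d95(inhale), d95(exhale))) / d95(nominal)
print(round(delta_d95, 2), "% spread in D95% across phases")
```

A robustly optimized plan is one for which this spread (and the analogous ΔD98%, ΔD5%, ΔD2%) stays small across the 4DCT phases.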

  11. 76 FR 34953 - Funding Opportunity Title: Risk Management Education in Targeted States (Targeted States Program...

    Science.gov (United States)

    2011-06-15

    ... Availability C. Location and Target Audience D. Maximum Award E. Project Period F. Description of Agreement..., 2011. C. Location and Target Audience The RMA Regional Offices that service the Targeted States are... marketing systems to pursue new markets. D. Purpose The purpose of the Targeted States Program is to provide...

  12. Burn performance of deuterium-tritium, deuterium-deuterium, and catalyzed deuterium ICF targets

    International Nuclear Information System (INIS)

    Harris, D.B.; Blue, T.E.

    1983-01-01

    The University of Illinois hydrodynamic burn code, AFBURN, has been used to model the performance of homogeneous D-T, D₂, and catalyzed deuterium ICF targets. Yields and gains are compared for power-producing targets. AFBURN is a one-dimensional, two-temperature, single-fluid hydrodynamic code with non-local fusion product energy deposition. The initial conditions for AFBURN are uniformly compressed targets with central hot spots. AFBURN predicts that maximum D₂ target gains are obtained for target ρR and spark ρR about seven times larger than the target and spark ρR for maximum D-T target gains, that the maximum D₂ target gain is approximately one third of the maximum D-T target gain, and that the corresponding yields are approximately equal. By recycling tritium and ³He from previous targets, D₂ target performance can be improved by about 10%. (author)

  13. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  14. Preliminary investigation of solid target geometry

    Energy Technology Data Exchange (ETDEWEB)

    Haga, Katsuhiro; Kaminaga, Masanori; Hino, Ryutaro; Takada, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Shafiqul, I.M.; Tsuji, Nobumasa; Okamoto, Hutoshi; Kumasaka, Katsuyuki; Hayashi, Katsumi

    1997-11-01

    In this report, we introduce the development plan for a solid metal target structure. Assuming tantalum as the target material, the temperature distribution and the maximum thermal stress in a tantalum plate of a solid metal target were evaluated under a water cooling condition, using the heat generation rate calculated with JAERI's neutron transport code. The calculation results showed that the water velocity had to be higher than 10 m/s to cool the 3 mm-thick target plate down to 200°C when the target surface was smooth and the heat transfer rate was calculated with the Dittus-Boelter equation. In this case, the maximum thermal stress is 50 MPa at the target plate surface. The coolant water flow distribution in a target vessel was also evaluated for ISIS-type flow channels and parallel flow channels. In the ISIS-type flow channels, a coolant plenum height of at least 25 mm is needed for a uniform flow distribution. The maximum flow velocity difference between the flow gaps in the parallel flow channels was 30%. A heat transfer augmentation experiment was conducted using a ribbed-surface flow channel. The heat transfer rate was confirmed to increase to up to twice that of a smooth surface. (author)

  15. Contributions to robust methods of creep analysis

    International Nuclear Information System (INIS)

    Penny, B.K.

    1991-01-01

    Robust methods for the predictions of deformations and lifetimes of components operating in the creep range are presented. The ingredients used for this are well-tried numerical techniques combined with the concepts of continuum damage and so-called reference stresses. The methods described are derived in order to obtain the maximum benefit during the early stages of design where broad assessments of the influences of material choice, loadings and geometry need to be made quickly and with economical use of computers. It is also intended that the same methods will be of value during operation if estimates of damage or if exercises in life extension or inspection timing are required. (orig.)

  16. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely...... ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...... the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled...

  17. Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

    Directory of Open Access Journals (Sweden)

    Zhen Jia

    2008-12-01

    Full Text Available This paper surveys the developments of the last 10 years in the area of vision-based target tracking for autonomous vehicle navigation. First, the motivations and applications of using vision-based target tracking for autonomous vehicle navigation are presented in the introduction section. Robust visual-target-tracking-based navigation algorithms are clearly needed for the broad application of autonomous vehicles. The paper then reviews recent techniques in three categories: vision-based target tracking for land, underwater and aerial vehicle navigation. Next, the increasing trend of using data fusion for visual-target-tracking-based autonomous vehicle navigation is discussed; data fusion improves tracking performance and makes it more robust. Based on the review, the remaining research challenges are summarized and future research directions are investigated.

  18. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Science.gov (United States)

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range-resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows that the alignment process introduces a normalized mean squared error (NMSE) ≤ 9%. The HRRPs extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using real TWRIs of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE.
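
    The classification back end (PCA followed by a Gaussian naive Bayes classifier) can be sketched on synthetic HRRP-like profiles; the peak positions, noise level and component count below are assumptions for illustration, not the paper's measured data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "HRRP" vectors: two target classes with peaks at different range bins
def make_profiles(peak_bin, n, bins=64):
    base = np.zeros(bins)
    base[peak_bin - 2:peak_bin + 3] = [0.3, 0.7, 1.0, 0.7, 0.3]
    return base + 0.05 * rng.standard_normal((n, bins))

X = np.vstack([make_profiles(15, 50), make_profiles(40, 50)])
y = np.array([0] * 50 + [1] * 50)

# PCA via SVD: project onto the top 5 principal components
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:5].T

# Gaussian naive Bayes in the PCA space
def nb_fit(Z, y):
    return {c: (Z[y == c].mean(0), Z[y == c].var(0) + 1e-9) for c in np.unique(y)}

def nb_predict(model, z):
    def loglik(mu, var):
        return float(np.sum(-0.5 * np.log(2 * np.pi * var) - (z - mu)**2 / (2 * var)))
    return max(model, key=lambda c: loglik(*model[c]))

model = nb_fit(Z, y)
preds = np.array([nb_predict(model, z) for z in Z])
acc = float(np.mean(preds == y))
print(acc)
```

On these well-separated toy profiles the classifier is essentially perfect; the paper's point is that such accuracy is only attainable after the PSF-based alignment step.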

  19. H∞ Robust Control of a Large-Piston MEMS Micromirror for Compact Fourier Transform Spectrometer Systems

    Directory of Open Access Journals (Sweden)

    Huipeng Chen

    2018-02-01

    Full Text Available Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm−1 under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.

  20. H∞ Robust Control of a Large-Piston MEMS Micromirror for Compact Fourier Transform Spectrometer Systems.

    Science.gov (United States)

    Chen, Huipeng; Li, Mengyuan; Zhang, Yi; Xie, Huikai; Chen, Chang; Peng, Zhangming; Su, Shaohui

    2018-02-08

    Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm⁻¹ under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.

  1. Robust and Effective Component-based Banknote Recognition for the Blind.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, Yingli

    2012-11-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: handles a variety of currency designs and bills in various conditions; 3) high efficiency: recognizes banknotes quickly; and 4) ease of use: helps blind users aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded-Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system is also tested by blind users.

  2. 75 FR 8902 - Funding Opportunity Title: Crop Insurance Education in Targeted States (Targeted States Program)

    Science.gov (United States)

    2010-02-26

    ... and Target Audience D. Maximum Award E. Project Period F. Description of Agreement Award--Awardee.... Location and Target Audience Targeted States serviced by RMA Regional Offices are listed below. Staff from... established farmers or ranchers who are converting production and marketing systems to pursue new markets. D...

  3. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

    Full Text Available A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. The unscented Kalman filter (UKF) is an efficient tool for nonlinear state estimation, but it performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
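The key ingredient of the MCC is a Gaussian kernel that assigns exponentially smaller weight to larger residuals, which is what suppresses impulsive outliers. The following is a minimal sketch of that weighting idea only, not the MCUKF itself; the fixed-point location estimate and all function names are illustrative assumptions:

```python
import math

def mcc_weights(residuals, sigma):
    # Gaussian kernel of the maximum correntropy criterion:
    # large residuals receive exponentially small weight.
    return [math.exp(-r * r / (2.0 * sigma * sigma)) for r in residuals]

def mcc_location(samples, sigma=1.0, iters=50):
    # Fixed-point iteration for a correntropy-based location estimate.
    est = sum(samples) / len(samples)          # start from the plain mean
    for _ in range(iters):
        w = mcc_weights([s - est for s in samples], sigma)
        est = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
    return est

samples = [9.8, 10.1, 10.0, 9.9, 10.2, 60.0]   # one impulsive outlier
plain_mean = sum(samples) / len(samples)        # 110/6 ≈ 18.3, pulled by the outlier
robust = mcc_location(samples, sigma=1.0)       # stays near 10
```

The plain mean is dragged toward the impulsive sample, while the correntropy-weighted estimate effectively ignores it; the MCUKF exploits the same mechanism inside the measurement update.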

  4. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    International Nuclear Information System (INIS)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle-counting data at very low signal levels (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal levels (less than ∼20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
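The idea can be illustrated in its simplest form, fitting a constant rate to Poisson counts, where the negative log-likelihood is convex and its minimizer is exactly the sample mean. This is a generic sketch, not the authors' code; the function names and the golden-section minimizer are illustrative choices:

```python
import math

def poisson_nll(lam, counts):
    # Negative log-likelihood of i.i.d. Poisson counts, dropping the
    # data-only log(n!) term, which does not affect the minimizer.
    return sum(lam - n * math.log(lam) for n in counts)

def fit_rate(counts, lo=1e-6, hi=100.0, iters=200):
    # Golden-section search; the NLL is convex in lam, so this converges.
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if poisson_nll(c, counts) < poisson_nll(d, counts):
            b = d
        else:
            a = c
    return (a + b) / 2

counts = [3, 1, 4, 1, 5, 9, 2, 6]          # low-count data
lam_hat = fit_rate(counts)                  # ML estimate of the rate
# For a constant-rate model the ML solution equals the sample mean:
assert abs(lam_hat - sum(counts) / len(counts)) < 1e-3
```

With a non-constant fit function the same negative log-likelihood is simply evaluated with a per-point model rate, and a multidimensional minimizer takes the place of the one-dimensional search.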

  5. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology

    NARCIS (Netherlands)

    Woldegebriel, M.; Gonsalves, J.; van Asten, A.; Vivó-Truyols, G.

    2016-01-01

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically

  6. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    International Nuclear Information System (INIS)

    Liu, W; Schild, S; Bues, M; Liao, Z; Sahoo, N; Park, P; Li, H; Li, Y; Li, X; Shen, J; Anand, A; Dong, L; Zhu, X; Mohan, R

    2014-01-01

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addresses set-up and range uncertainties through beamlet optimization. Plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly

  7. High-intensity, thin-target He-jet production source

    International Nuclear Information System (INIS)

    Bai, Y.; Vieira, D.J.; Wouters, J.M.; Butler, G.W.; Rosenauer, Dk; Loebner, K.E.G.; Lind, V.G.; Phillips, D.R.

    1996-01-01

    A thin-target He-jet system suited to the production and rapid transport of non-volatile radioactive species has been successfully operated with proton beam intensities of up to 700 μA. The system consists of a water-cooled, thin-target chamber, a capillary gas transport system, a moving tape/Ge detection system, and an aerosol generator/gas recirculator. The yields for a wide variety of uranium fission and deep spallation products have been measured, and robust operation of the system has been demonstrated over several weeks. He-jet transport and collection efficiencies ranged between 15 and 25%, with collection rates of 10⁷ to 10⁸ atoms/sec/isotope. The high-intensity, thin-target He-jet approach represents a robust production source for nonvolatile radioactive heavy ion beams

  8. Robust image obfuscation for privacy protection in Web 2.0 applications

    Science.gov (United States)

    Poller, Andreas; Steinebach, Martin; Liu, Huajian

    2012-03-01

    We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of the users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper consider that images uploaded to Web 2.0 applications pass several transformations, such as scaling and JPEG compression, until the receiver downloads them. In contrast to existing approaches, our focus is on usability, therefore the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.
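One of the two building blocks, keyed permutation of image regions, can be sketched in the abstract. This is not the paper's implementation (which also modulates channel intensities and is engineered to survive scaling and JPEG compression); the block contents are stand-in strings and all names are hypothetical:

```python
import random

def permute_blocks(blocks, key):
    # Shuffle image regions with a keyed PRNG; the same key reproduces
    # the permutation, so the intended receiver can invert it.
    order = list(range(len(blocks)))
    random.Random(key).shuffle(order)
    return [blocks[i] for i in order], order

def restore_blocks(shuffled, order):
    # Invert the permutation: position pos held original block order[pos].
    restored = [None] * len(shuffled)
    for pos, src in enumerate(order):
        restored[src] = shuffled[pos]
    return restored

blocks = ["b0", "b1", "b2", "b3", "b4", "b5"]   # stand-ins for image regions
scrambled, order = permute_blocks(blocks, key=42)
assert restore_blocks(scrambled, order) == blocks
```

In the robust setting described in the abstract, the regions would be coarse enough that their boundaries survive rescaling and JPEG block artifacts, which is exactly the security-versus-quality trade-off the authors discuss.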

  9. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice is harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which has recently been demonstrated to be robust to noise and accurate, even when the model order is unknown. The main drawback of the ML...

  10. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees.

    Science.gov (United States)

    Keller, Alexander; Förster, Frank; Müller, Tobias; Dandekar, Thomas; Schultz, Jörg; Wolf, Matthias

    2010-01-15

    In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, has been lacking. This is the first study to address this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergence levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. The simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that the accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence-only data, whereas a doubled marker size only accounts for robustness. Individual secondary structures of ribosomal RNA sequences provide a valuable gain of information content that is useful for phylogenetics. Thus, the use of the ITS2 sequence together with its secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference, or maximum parsimony may equally profit from secondary structure inclusion. This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  11. Robust infrared target tracking using discriminative and generative approaches

    Science.gov (United States)

    Asha, C. S.; Narasimhadhan, A. V.

    2017-09-01

    The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much progress has been achieved for RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve high accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative stage makes use of two complementary methods, a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features, operating in parallel. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments are conducted on 17 challenging infrared image sequences from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision is achieved compared with existing trackers. Further, a quantitative and qualitative assessment of the proposed approach against state-of-the-art trackers clearly demonstrates an overall increase in performance.

  12. Efficient and robust identification of cortical targets in concurrent TMS-fMRI experiments

    Science.gov (United States)

    Yau, Jeffrey M.; Hua, Jun; Liao, Diana A.; Desmond, John E.

    2014-01-01

    Transcranial magnetic stimulation (TMS) can be delivered during fMRI scans to evoke BOLD responses in distributed brain networks. While concurrent TMS-fMRI offers a potentially powerful tool for non-invasively investigating functional human neuroanatomy, the technique is currently limited by the lack of methods to rapidly and precisely localize targeted brain regions – a reliable procedure is necessary for validly relating stimulation targets to BOLD activation patterns, especially for cortical targets outside of motor and visual regions. Here we describe a convenient and practical method for visualizing coil position (in the scanner) and identifying the cortical location of TMS targets without requiring any calibration or any particular coil-mounting device. We quantified the precision and reliability of the target position estimates by testing the marker processing procedure on data from 9 scan sessions: Rigorous testing of the localization procedure revealed minimal variability in coil and target position estimates. We validated the marker processing procedure in concurrent TMS-fMRI experiments characterizing motor network connectivity. Together, these results indicate that our efficient method accurately and reliably identifies TMS targets in the MR scanner, which can be useful during scan sessions for optimizing coil placement and also for post-scan outlier identification. Notably, this method can be used generally to identify the position and orientation of MR-compatible hardware placed near the head in the MR scanner. PMID:23507384

  13. Robust Tracking Control for Rendezvous in Near-Circular Orbits

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2013-01-01

    Full Text Available This paper investigates a robust guaranteed cost tracking control problem for thrust-limited spacecraft rendezvous in near-circular orbits. Relative motion model is established based on the two-body problem with noncircularity of the target orbit described as a parameter uncertainty. A guaranteed cost tracking controller with input saturation is designed via a linear matrix inequality (LMI method, and sufficient conditions for the existence of the robust tracking controller are derived, which is more concise and less conservative compared with the previous works. Numerical examples are provided for both time-invariant and time-variant reference signals to illustrate the effectiveness of the proposed control scheme when applied to the terminal rendezvous and other astronautic missions with scheduled states signal.

  14. Targeting and Persuasive Advertising

    OpenAIRE

    Egli, Alain (Autor/in)

    2015-01-01

    Firms face a prisoner's dilemma when advertising in a competitive environment. In a Hotelling framework with persuasive advertising, firms counteract this prisoner's dilemma with targeting. The firms even solve the prisoner's problem if targeted advertising is effective enough: advertising turns from wasteful competition into profits. This is in contrast to wasteful competition as an argument for regulation. A further result is maximum advertising differentiation: the firms target their advertisin...

  15. Robust object tracking combining color and scale invariant features

    Science.gov (United States)

    Zhang, Shengping; Yao, Hongxun; Gao, Peipei

    2010-07-01

    Object tracking plays a very important role in many computer vision applications. However, its performance deteriorates significantly under challenging conditions in complex scenes, such as pose and illumination changes and cluttered backgrounds. In this paper, we propose a robust object tracking algorithm which exploits both global color and local scale-invariant (SIFT) features in a particle filter framework. Due to the high computational cost of SIFT features, the proposed tracker adopts a sped-up variant of SIFT, SURF, to extract local features. Specifically, the proposed method first finds matching points between the target model and the target candidate; then the weight of the corresponding particle based on scale-invariant features is computed as the proportion of that particle's matching points among the matching points of all particles; finally, the weight of the particle is obtained by combining the color and SURF weights in a probabilistic way. The experimental results on a variety of challenging videos verify that the proposed method is robust to pose and illumination changes and is significantly superior to the standard particle filter tracker and the mean shift tracker.
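The abstract describes the cue fusion only as "a probabilistic way". A convex mixture of the two normalized per-particle likelihoods is one common choice and is sketched below; the mixing parameter, function name, and toy numbers are invented for illustration:

```python
def fuse_weights(color_w, surf_w, alpha=0.5):
    # Combine two per-particle likelihoods as a convex mixture,
    # then renormalize so the fused weights form a distribution.
    fused = [alpha * c + (1 - alpha) * s for c, s in zip(color_w, surf_w)]
    total = sum(fused)
    return [w / total for w in fused]

# Hypothetical likelihoods for 4 particles from the two cues:
color_w = [0.10, 0.60, 0.20, 0.10]   # global color similarity
surf_w = [0.05, 0.70, 0.15, 0.10]    # proportion of matched SURF points
fused = fuse_weights(color_w, surf_w)
assert abs(sum(fused) - 1.0) < 1e-9
assert max(fused) == fused[1]        # both cues agree on particle 1
```

A product of likelihoods (with renormalization) is the other standard option; the mixture is gentler when one cue fails completely, which matters for the cluttered scenes the paper targets.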

  16. Feasibility and robustness of dose painting by numbers in proton therapy with contour-driven plan optimization

    International Nuclear Information System (INIS)

    Barragán, A. M.; Differding, S.; Lee, J. A.; Sterpin, E.; Janssens, G.

    2015-01-01

    Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, a voxel-by-voxel prescription to the target volume (GTV-PET) was calculated from ¹⁸FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of a range shifter were tested separately on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separations [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spot grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robust coverage of the GTV-PET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage while robust-optimized plans improved it considerably (the difference between the worst-case CTV dose and the clinical constraint was up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans). Also, only 15% of the points in the GTV-PET (worst case) were above 5% of DPBN

  17. Some scale-free networks could be robust under selective node attacks

    Science.gov (United States)

    Zheng, Bojin; Huang, Dan; Li, Deyi; Chen, Guisheng; Lan, Wenfei

    2011-04-01

    It is a mainstream idea that scale-free networks are fragile under selective attacks. The Internet is a typical real-world scale-free network, yet it never collapses under the selective attacks of computer viruses and hackers. This phenomenon contradicts the deduction above, because that deduction assumes the same cost to delete an arbitrary node. Hence this paper discusses the behavior of scale-free networks under selective node attacks with differing costs. Through experiments on five complex networks, we show that a scale-free network can be robust under selective node attacks; furthermore, the more compact the network and the larger its average degree, the more robust it is; at the same average degree, the more compact the network, the more robust it is. This result enriches the theory of network invulnerability, can be used to build robust social, technological and biological networks, and also has the potential to help find drug targets.
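The equal-cost fragility baseline that the paper argues against, deleting the highest-degree hubs, can be reproduced on a toy hub-dominated network. This is a generic sketch with assumed helper names, not the paper's five-network experiments:

```python
from collections import defaultdict

def largest_component(nodes, edges):
    # Size of the largest connected component via depth-first search.
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], 0
        seen.add(n)
        while stack:
            x = stack.pop()
            comp += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        best = max(best, comp)
    return best

def selective_attack(nodes, edges, k):
    # Remove the k highest-degree nodes (the classic selective attack,
    # implicitly assuming every deletion costs the same).
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    targets = sorted(nodes, key=lambda n: -deg[n])[:k]
    return nodes - set(targets)

# A small star-like (hub-dominated) toy network:
nodes = set(range(8))
edges = [(0, i) for i in range(1, 8)]            # node 0 is the hub
assert largest_component(nodes, edges) == 8
survivors = selective_attack(nodes, edges, 1)    # delete the hub
assert largest_component(survivors, edges) == 1  # the network shatters
```

The paper's point is that once deleting a well-connected hub costs more than deleting a peripheral node, an attacker with a fixed budget can no longer afford this strategy, so compact, high-average-degree networks survive.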

  18. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product, is finding the best composition

  19. Robust Proton Pencil Beam Scanning Treatment Planning for Rectal Cancer Radiation Therapy

    International Nuclear Information System (INIS)

    Blanco Kiely, Janid Patricia; White, Benjamin M.

    2016-01-01

    Purpose: To investigate, in a treatment plan design and robustness study, whether proton pencil beam scanning (PBS) has the potential to offer advantages, relative to interfraction uncertainties, over photon volumetric modulated arc therapy (VMAT) in a locally advanced rectal cancer patient population. Methods and Materials: Ten patients received a planning CT scan, followed by an average of 4 weekly offline verification CT scans, which were rigidly co-registered to the planning CT. Clinical PBS plans were generated on the planning CT, using a single-field uniform-dose technique with single-posterior and parallel-opposed (LAT) field geometries. The VMAT plans were generated on the planning CT using 2 6-MV, 220° coplanar arcs. Clinical plans were forward-calculated on the verification CTs to assess robustness relative to anatomic changes. Setup errors were assessed by forward-calculating clinical plans with a ±5-mm (left–right, anterior–posterior, superior–inferior) isocenter shift on the planning CT. Differences in clinical target volume and organ-at-risk dose–volume histogram (DVH) indicators between plans were tested for significance using an appropriate Wilcoxon test (P<.05). Results: Dosimetrically, PBS plans were statistically different from VMAT plans, showing greater organ-at-risk sparing. However, the bladder was statistically identical among LAT and VMAT plans. The clinical target volume coverage was statistically identical among all plans. The robustness test found that all DVH indicators for PBS and VMAT plans were robust, except the LAT plans' genitalia (V5, V35). The verification CT plans showed that all DVH indicators were robust. Conclusions: Pencil beam scanning plans were found to be as robust as VMAT plans relative to interfractional changes during treatment when posterior beam angles and appropriate range margins are used. Pencil beam scanning dosimetric gains in the bowel (V15, V20) over VMAT suggest that using PBS to treat rectal cancer

  20. Robust Proton Pencil Beam Scanning Treatment Planning for Rectal Cancer Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Blanco Kiely, Janid Patricia, E-mail: jkiely@sas.upenn.edu; White, Benjamin M.

    2016-05-01

    Purpose: To investigate, in a treatment plan design and robustness study, whether proton pencil beam scanning (PBS) has the potential to offer advantages, relative to interfraction uncertainties, over photon volumetric modulated arc therapy (VMAT) in a locally advanced rectal cancer patient population. Methods and Materials: Ten patients received a planning CT scan, followed by an average of 4 weekly offline verification CT scans, which were rigidly co-registered to the planning CT. Clinical PBS plans were generated on the planning CT, using a single-field uniform-dose technique with single-posterior and parallel-opposed (LAT) field geometries. The VMAT plans were generated on the planning CT using 2 6-MV, 220° coplanar arcs. Clinical plans were forward-calculated on the verification CTs to assess robustness relative to anatomic changes. Setup errors were assessed by forward-calculating clinical plans with a ±5-mm (left–right, anterior–posterior, superior–inferior) isocenter shift on the planning CT. Differences in clinical target volume and organ-at-risk dose–volume histogram (DVH) indicators between plans were tested for significance using an appropriate Wilcoxon test (P<.05). Results: Dosimetrically, PBS plans were statistically different from VMAT plans, showing greater organ-at-risk sparing. However, the bladder was statistically identical among LAT and VMAT plans. The clinical target volume coverage was statistically identical among all plans. The robustness test found that all DVH indicators for PBS and VMAT plans were robust, except the LAT plans' genitalia (V5, V35). The verification CT plans showed that all DVH indicators were robust. Conclusions: Pencil beam scanning plans were found to be as robust as VMAT plans relative to interfractional changes during treatment when posterior beam angles and appropriate range margins are used. Pencil beam scanning dosimetric gains in the bowel (V15, V20) over VMAT suggest that using PBS to treat rectal

  1. Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Tong Shu

    2016-03-01

    Full Text Available This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption; namely, disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. It also considers the simultaneous disruption of one node or a number of nodes, the simultaneous disruption of one chain or a number of chains, and the corresponding mathematical models and examples involving numerous manufacturing centers and diverse products. Robustness of the supply chain network design is examined by weighing efficiency against robustness during supply chain disruptions. Efficiency is represented by operating cost; robustness is indicated by the expected disruption cost; and the weighing problem is solved by the multi-objective firefly algorithm for consistency in the results. It is shown that the total cost achieved by the optimal objective function is lower than that of the most efficient supply chain configuration. In other words, the decrease in expected disruption cost gained by improving robustness exceeds the increase in operating cost from reduced efficiency, thus leading to a cost advantage. Consequently, by approximating the Pareto front of the trade-off between efficiency and robustness, enterprises can choose an appropriate balance of efficiency and robustness for their longer-term development.
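Weighing operating cost against expected disruption cost amounts to keeping the non-dominated designs, which is the Pareto front the firefly algorithm approximates. A minimal Pareto-filter sketch with hypothetical cost pairs (not the firefly algorithm itself; all numbers are invented):

```python
def pareto_front(designs):
    # Keep designs not dominated in (operating_cost, expected_disruption_cost):
    # a design dominates another if it is no worse in both objectives and
    # strictly better in at least one.
    front = []
    for i, (op_i, dis_i) in enumerate(designs):
        dominated = any(
            op_j <= op_i and dis_j <= dis_i and (op_j, dis_j) != (op_i, dis_i)
            for j, (op_j, dis_j) in enumerate(designs) if j != i
        )
        if not dominated:
            front.append((op_i, dis_i))
    return sorted(front)

# Hypothetical (operating cost, expected disruption cost) design pairs:
designs = [(10, 9), (12, 5), (15, 4), (13, 6), (16, 6)]
front = pareto_front(designs)
assert front == [(10, 9), (12, 5), (15, 4)]   # (13,6) and (16,6) are dominated
```

A metaheuristic such as the firefly algorithm is needed only because, for realistic networks, the set of candidate designs is far too large to enumerate as done here.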

  2. Robustness of muscle synergies during visuomotor adaptation

    Directory of Open Access Journals (Sweden)

    Reinhard eGentner

    2013-09-01

    Full Text Available During visuomotor adaptation, a novel mapping between visual targets and motor commands is gradually acquired. How muscle activation patterns are affected by this process is an open question. We tested whether the structure of muscle synergies is preserved during adaptation to a visuomotor rotation. Eight subjects applied targeted isometric forces on a handle instrumented with a force transducer while electromyographic (EMG) activity was recorded from 13 shoulder and elbow muscles. The recorded forces were mapped into horizontal displacements of a virtual sphere with simulated mass, elasticity, and damping. The task consisted of moving the sphere to a target at one of eight equally spaced directions. Subjects performed three baseline blocks of 32 trials, followed by six blocks with a 45° CW rotation applied to the planar force, and finally three wash-out blocks without the perturbation. The sphere position at 100 ms after movement onset revealed significant directional error at the beginning of the rotation, a gradual learning in subsequent blocks, and aftereffects at the beginning of the wash-out. The change in initial force direction was closely related to the change in directional tuning of the initial EMG activity of most muscles. Throughout the experiment, muscle synergies extracted using a non-negative matrix factorization algorithm from the muscle patterns recorded during the baseline blocks could reconstruct the muscle patterns of all other blocks with an accuracy significantly higher than chance, indicating structural robustness. In addition, the synergies extracted from individual blocks remained similar to the baseline synergies throughout the experiment. Thus, synergy structure is robust during visuomotor adaptation, suggesting that changes in muscle patterns are obtained by rotating the directional tuning of the synergy recruitment.
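Synergy extraction by non-negative matrix factorization can be sketched with the standard Lee-Seung multiplicative updates. This is a generic illustration on synthetic data, not the authors' pipeline; the dimensions, iteration count, and all names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=1000):
    # Lee-Seung multiplicative updates for V ≈ W @ H with all factors
    # non-negative: W holds the synergies, H their activation coefficients.
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Synthetic "muscle patterns": 2 synergies across 6 muscles, 40 samples.
W_true = rng.random((6, 2))
H_true = rng.random((2, 40))
V = W_true @ H_true
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert err < 0.1   # a two-synergy model reconstructs the data well
```

The robustness test in the abstract corresponds to holding W fixed at the baseline synergies and fitting only H to the adaptation-block data, then comparing the reconstruction accuracy against chance.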

  3. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have...... attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural...... systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design......

  4. Construction and test of the Bonn frozen spin target

    International Nuclear Information System (INIS)

    Dutz, H.

    1989-04-01

For γN→πN and γd→pn scattering experiments at the PHOENICS detector, a new 'Bonn frozen spin target' (BOFROST) was developed. The target, with a maximum volume of 30 cm³, is cooled in a vertical ³He-⁴He dilution cryostat. The lowest temperature of the dilution cryostat in the frozen spin mode should be 50 mK. In a first stage, the magnet system consists of two superconducting solenoids: a polarisation magnet with a maximum field of 7 T and a homogeneity of 10⁻⁵ over the target area, and a 'vertical holding' magnet with a maximum field in the target area of 0.57 T. This work describes the construction and set-up of the 'frozen spin target' in the laboratory and the first tests of the dilution cryostat and the superconducting magnet system. (orig.)

  5. Architecture and robustness tradeoffs in speed-scaled queues with application to energy management

    Science.gov (United States)

    Dinh, Tuan V.; Andrew, Lachlan L. H.; Nazarathy, Yoni

    2014-08-01

    We consider single-pass, lossless, queueing systems at steady-state subject to Poisson job arrivals at an unknown rate. Service rates are allowed to depend on the number of jobs in the system, up to a fixed maximum, and power consumption is an increasing function of speed. The goal is to control the state dependent service rates such that both energy consumption and delay are kept low. We consider a linear combination of the mean job delay and energy consumption as the performance measure. We examine both the 'architecture' of the system, which we define as a specification of the number of speeds that the system can choose from, and the 'design' of the system, which we define as the actual speeds available. Previous work has illustrated that when the arrival rate is precisely known, there is little benefit in introducing complex (multi-speed) architectures, yet in view of parameter uncertainty, allowing a variable number of speeds improves robustness. We quantify the tradeoffs of architecture specification with respect to robustness, analysing both global robustness and a newly defined measure which we call local robustness.
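The performance measure described above (a linear combination of mean delay and energy consumption for state-dependent service speeds) can be evaluated for a candidate design by treating the queue as a finite birth-death chain. This is a hedged sketch: the `speed**alpha` power model, the truncation at N jobs, and all names are assumptions, not the paper's model.

```python
import numpy as np

def queue_cost(lam, speeds, beta=1.0, alpha=2.0):
    """Mean-delay + energy cost for an M/M-type birth-death queue whose
    service speed depends on the number of jobs in the system.

    speeds[n-1] is the service speed with n jobs present (n = 1..N);
    power consumption is modelled as speed**alpha (an assumption here),
    and the chain is truncated at N jobs."""
    speeds = np.asarray(speeds, dtype=float)
    N = len(speeds)
    # Unnormalized stationary probabilities: pi_n = pi_{n-1} * lam / mu_n.
    pi = np.ones(N + 1)
    for n in range(1, N + 1):
        pi[n] = pi[n - 1] * lam / speeds[n - 1]
    pi /= pi.sum()
    mean_jobs = np.arange(N + 1) @ pi
    accepted = lam * (1.0 - pi[N])      # effective throughput
    mean_delay = mean_jobs / accepted   # Little's law
    mean_power = (speeds ** alpha * pi[1:]).sum()
    return mean_delay + beta * mean_power
```

Comparing `queue_cost` across speed vectors of different lengths is exactly the architecture-versus-design tradeoff the abstract discusses; robustness can be probed by re-evaluating the same design at perturbed arrival rates.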

  6. Interference-Robust Air Interface for 5G Ultra-dense Small Cells

    DEFF Research Database (Denmark)

    Tavares, Fernando Menezes Leitão; Berardinelli, Gilberto; Mahmood, Nurul Huda

    2016-01-01

    An ultra-dense deployment of small cells is foreseen as the solution to cope with the exponential increase of the data rate demand targeted by the 5th Generation (5G) radio access technology. In this article, we propose an interference-robust air interface built upon the usage of advanced receivers...

  7. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility of increasing robustness by animal breeding without loss of production. At the start of the project, a robust

  8. Adaptive robust Kalman filtering for precise point positioning

    International Nuclear Information System (INIS)

    Guo, Fei; Zhang, Xiaohong

    2014-01-01

The optimality of a precise point positioning (PPP) solution using a Kalman filter is closely connected to the quality of the a priori information about the process noise and the updated measurement noise, which are sometimes difficult to obtain. Also, the estimation environment in the case of dynamic or kinematic applications is not always fixed but is subject to change. To overcome these problems, an adaptive robust Kalman filtering algorithm, the main feature of which introduces an equivalent covariance matrix to resist unexpected outliers and an adaptive factor to balance the contribution of observational information and predicted information from the system dynamic model, is applied for PPP processing. The basic models of PPP, including the observation model, dynamic model and stochastic model, are provided first. Then an adaptive robust Kalman filter is developed for PPP. Compared with the conventional robust estimator, only the observation with the largest standardized residual is operated on by the IGG III function in each iteration, to avoid reducing the contribution of the normal observations or even filter divergence. Finally, tests carried out in both static and kinematic modes have confirmed that the adaptive robust Kalman filter outperforms the classic Kalman filter by tuning either the equivalent variance matrix or the adaptive factor or both of them. This becomes evident when analyzing the positioning errors in flight tests at the turns due to the target maneuvering and unknown process/measurement noises. (paper)
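A minimal one-state sketch of the equivalent-covariance idea, assuming a simplified IGG-III-style variance inflation for a suspect observation (the actual PPP filter works on a full state vector and also carries the adaptive factor; all names here are illustrative):

```python
import numpy as np

def run_filter(zs, r=1.0, q=0.01, k0=2.5, robust=True):
    """1-state (random-walk) Kalman filter; optionally applies a simplified
    IGG-III-style equivalent variance: the measurement variance is inflated
    when the standardized predicted residual exceeds the threshold k0."""
    x, p = zs[0], 1.0
    estimates = []
    for z in zs:
        p_pred = p + q                 # predict (random-walk dynamics)
        v = z - x                      # predicted residual
        s = np.sqrt(p_pred + r)        # its standard deviation
        r_eq = r
        if robust and abs(v) / s > k0:
            # Inflate the suspect observation's variance (equivalent
            # covariance), shrinking its influence on the update.
            r_eq = r * (abs(v) / (s * k0)) ** 2
        k = p_pred / (p_pred + r_eq)
        x = x + k * v
        p = (1.0 - k) * p_pred
        estimates.append(x)
    return np.array(estimates)
```

With a single gross outlier in an otherwise clean series, the robust variant is barely deflected while the plain filter is pulled far off, which mirrors the flight-test behavior reported above.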

  9. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed'' about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...
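Under white Gaussian noise, the ML estimate of the relative delay between a clean reference (here, the wireless signal) and a microphone signal reduces to maximizing the cross-correlation; the DoA then follows from the delay and the microphone geometry. A toy sketch for integer-sample delays (an illustration of the general principle, not the MLSSL estimator itself):

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the integer-sample delay of y relative to x by maximizing
    the cross-correlation -- the ML delay estimator for white Gaussian
    noise. Returns the lag (in samples) at which y best matches x."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)
```

In a two-microphone setup the estimated delay tau maps to an angle via arcsin(c * tau / d) for microphone spacing d and sound speed c.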

  10. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    Science.gov (United States)

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices. That is, if we assume that the data from the same class are linearly correlated, then the basis vectors learned from the training set of each class will render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary and coefficients while simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset have demonstrated that the proposed tracking algorithm performs better than other state-of-the-art trackers.

  11. Setting maximum sustainable yield targets when yield of one species affects that of other species

    DEFF Research Database (Denmark)

    Rindorf, Anna; Reid, David; Mackinson, Steve

    2012-01-01

    species. But how should we prioritize and identify most appropriate targets? Do we prefer to maximize by focusing on total yield in biomass across species, or are other measures targeting maximization of profits or preserving high living qualities more relevant? And how do we ensure that targets remain...

  12. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.

    Science.gov (United States)

    Qi, Jun; Liu, Guo-Ping

    2017-11-06

This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and then the position of the target is computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value with the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance can reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot when UIPS works on the line-of-sight (LOS) signal.

  13. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jun Qi

    2017-11-01

Full Text Available This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and then the position of the target is computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value with the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance can reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot when UIPS works on the line-of-sight (LOS) signal.
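The position computation described in both records (ranges from TOF, then solving for the target from the beacon coordinates) can be sketched as a linearized least-squares multilateration. The 343 m/s sound speed and the subtract-the-first-range-equation linearization are standard assumptions, not details taken from the paper:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C (assumed)

def locate(beacons, tofs):
    """Least-squares multilateration: recover a 2-D target position from
    beacon coordinates and ultrasonic time-of-flight measurements.

    Each range equation |x - p_i|^2 = d_i^2 is linearized by subtracting
    the first one; the overdetermined system is solved with lstsq."""
    p = np.asarray(beacons, dtype=float)
    d = SPEED_OF_SOUND * np.asarray(tofs, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With more than three beacons the least-squares solve averages out pseudo-range noise, which is where the sub-millimeter ranging precision quoted above pays off.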

  14. Robustness of IPSA optimized high-dose-rate prostate brachytherapy treatment plans to catheter displacements.

    Science.gov (United States)

    Poder, Joel; Whitaker, May

    2016-06-01

Inverse planning simulated annealing (IPSA) optimized brachytherapy treatment plans are characterized by large isolated dwell times at the first or last dwell position of each catheter. The potential for catheter shifts relative to the target and organs at risk in these plans may lead to a more significant change in delivered dose to the volumes of interest relative to plans with more uniform dwell times. This study aims to determine if the Nucletron Oncentra dwell time deviation constraint (DTDC) parameter can be optimized to improve the robustness of high-dose-rate (HDR) prostate brachytherapy plans to catheter displacements. A set of 10 clinically acceptable prostate plans were re-optimized with a DTDC parameter of 0 and 0.4. For each plan, catheter displacements of 3, 7, and 14 mm were retrospectively applied and the change in dose volume histogram (DVH) indices and conformity indices analyzed. The robustness of clinically acceptable prostate plans to catheter displacements in the caudal direction was found to be dependent on the DTDC parameter. A DTDC value of 0 improves the robustness of planning target volume (PTV) coverage to catheter displacements, whereas a DTDC value of 0.4 improves the robustness of the plans to changes in hotspots. The results indicate that if used in conjunction with a pre-treatment catheter displacement correction protocol and a tolerance of 3 mm, a DTDC value of 0.4 may produce clinically superior plans. However, the effect of the DTDC parameter on plan robustness was not observed to be as strong as initially suspected.

  15. Complete achromatic and robustness electro-optic switch between two integrated optical waveguides

    Science.gov (United States)

    Huang, Wei; Kyoseva, Elica

    2018-01-01

In this paper, we present a novel design for an electro-optic modulator and optical switching device based on current integrated optics technology. The advantages of our optical switching device over previous designs are its broad bandwidth with respect to the input light wavelength and its robustness against variations in device length and operating voltage. In line with the results of our previous paper [Huang et al., Phys. Lett. A, 90, 053837], the coupling of the waveguides has a hyperbolic-secant shape, while the detuning has a sign flip at maximum coupling; we refer to this as the sign-flip phase-mismatch model. This model can produce complete, robust population transfer. In this paper, we extend this device to controllably switch the light intensity by tuning the external electric field via the electro-optic effect.

  16. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees

    Directory of Open Access Journals (Sweden)

    Dandekar Thomas

    2010-01-01

Full Text Available Abstract Background In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. Results This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergence levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. Simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence-only data, whereas a doubled marker size only accounts for robustness. Conclusions Individual secondary structures of ribosomal RNA sequences provide a valuable gain of information content that is useful for phylogenetics. Thus, the use of the ITS2 sequence together with its secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. Reviewers This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. Open peer review Reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  17. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    International Nuclear Information System (INIS)

    Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.

    2016-01-01

Purpose: Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials: For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV) PTV50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results: SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions: The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be

  18. An Analysis of Plan Robustness for Esophageal Tumors: Comparing Volumetric Modulated Arc Therapy Plans and Spot Scanning Proton Planning

    Energy Technology Data Exchange (ETDEWEB)

    Warren, Samantha, E-mail: samantha.warren@oncology.ox.ac.uk [Cancer Research UK/Medical Research Council Oxford Institute for Radiation Oncology, Gray Laboratories, University of Oxford, Oxford (United Kingdom); Partridge, Mike [Cancer Research UK/Medical Research Council Oxford Institute for Radiation Oncology, Gray Laboratories, University of Oxford, Oxford (United Kingdom); Bolsi, Alessandra; Lomax, Anthony J. [Centre for Proton Therapy, Paul Scherrer Institute, Villigen (Switzerland); Hurt, Chris [Wales Cancer Trials Unit, School of Medicine, Heath Park, Cardiff (United Kingdom); Crosby, Thomas [Velindre Cancer Centre, Velindre Hospital, Cardiff (United Kingdom); Hawkins, Maria A. [Cancer Research UK/Medical Research Council Oxford Institute for Radiation Oncology, Gray Laboratories, University of Oxford, Oxford (United Kingdom)

    2016-05-01

Purpose: Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials: For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV) PTV50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results: SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions: The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup
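Both records report coverage through DVH indices such as D98 (the dose received by at least 98% of the volume), which is simply a low percentile of the voxel doses. A one-line sketch of the metric, with illustrative names:

```python
import numpy as np

def dvh_metric(dose_voxels, volume_percent):
    """D_x: the minimum dose received by the hottest x% of the volume.
    For example, D98 (x = 98) is the 2nd percentile of the voxel doses."""
    return np.percentile(dose_voxels, 100.0 - volume_percent)
```

Evaluating this metric on the nominal plan and on each perturbed (setup- or range-shifted) dose distribution yields exactly the robustness comparisons quoted above.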

  19. LQG and maximum entropy control design for the Hubble Space Telescope

    Science.gov (United States)

    Collins, Emmanuel G., Jr.; Richter, Stephen

Solar array vibrations are responsible for serious pointing control problems on the Hubble Space Telescope (HST). The original HST control law was not designed to attenuate these disturbances because they were not perceived to be a problem prior to launch. However, significant solar array vibrations do occur due to large changes in the thermal environment as the HST orbits the earth. Using classical techniques, Marshall Space Flight Center in conjunction with Lockheed Missiles and Space Company developed modified HST controllers that were able to suppress the influence of the vibrations of the solar arrays on the line-of-sight (LOS) performance. Substantial LOS improvement was observed when two of these controllers were implemented on orbit. This paper describes the development of modified HST controllers by using modern control techniques, particularly linear-quadratic-Gaussian (LQG) design and Maximum Entropy robust control design, a generalization of LQG that incorporates robustness constraints with respect to modal errors. The fundamental issues are discussed candidly and controllers designed using these modern techniques are described.

  20. Robust fractional order sliding mode control of doubly-fed induction generator (DFIG)-based wind turbines.

    Science.gov (United States)

    Ebrahimkhani, Sadegh

    2016-07-01

Wind power plants have nonlinear dynamics and contain many uncertainties such as unknown nonlinear disturbances and parameter uncertainties. Thus, it is a difficult task to design a robust reliable controller for this system. This paper proposes a novel robust fractional-order sliding mode (FOSM) controller for maximum power point tracking (MPPT) control of doubly fed induction generator (DFIG)-based wind energy conversion systems. In order to enhance the robustness of the control system, uncertainties and disturbances are estimated using a fractional order uncertainty estimator. In the proposed method, a continuous control strategy is developed to achieve chattering-free fractional-order sliding-mode control; no knowledge of the uncertainties and disturbances or their bounds is assumed. The boundedness and convergence properties of the closed-loop signals are proven using Lyapunov's stability theory. Simulation results in the presence of various uncertainties were carried out to evaluate the effectiveness and robustness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  1. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations on banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  2. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  3. Can Targeted Intervention Mitigate Early Emotional and Behavioral Problems?: Generating Robust Evidence within Randomized Controlled Trials.

    Directory of Open Access Journals (Sweden)

    Orla Doyle

Full Text Available This study examined the impact of a targeted Irish early intervention program on children's emotional and behavioral development using multiple methods to test the robustness of the results. Data on 164 Preparing for Life participants, who were randomly assigned to an intervention group involving home visits from pregnancy onwards or to a control group, were used to test the impact of the intervention on Child Behavior Checklist scores at 24 months. Using inverse probability weighting to account for differential attrition, permutation testing to address small sample size, and quantile regression to characterize the distributional impact of the intervention, we found that the few treatment effects were largely concentrated among boys most at risk of developing emotional and behavioral problems. The average treatment effect identified a 13% reduction in the likelihood of falling into the borderline clinical threshold for Total Problems. The interaction and subgroup analysis found that this main effect was driven by boys. The distributional analysis identified a 10-point reduction in the Externalizing Problems score for boys at the 90th percentile. No effects were observed for girls or for the continuous measures of Total, Internalizing, and Externalizing problems. These findings suggest that the impact of this prenatally commencing home visiting program may be limited to boys experiencing the most difficulties. Further adoption of the statistical methods applied here may help to improve the internal validity of randomized controlled trials and contribute to the field of evaluation science more generally. ISRCTN Registry ISRCTN04631728.

  4. Robust path planning for flexible needle insertion using Markov decision processes.

    Science.gov (United States)

    Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong

    2018-05-11

The flexible needle has the potential to navigate accurately to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under the circumstance of complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. The method then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty issues in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer the flexible needle within soft phantom tissues and achieves high adaptability in computer simulation.
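The MDP machinery behind such planners can be illustrated with generic value iteration on a toy grid containing an obstacle. This is not the paper's needle-steering model (which handles tissue deformation and motion uncertainty); it is just the standard algorithm that model builds on, with illustrative names and rewards:

```python
import numpy as np

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def plan(n, obstacles, goal, gamma=0.95, n_iter=200):
    """Value iteration on a deterministic n x n grid MDP: each move costs 1,
    entering an obstacle cell costs an extra 100, the goal is absorbing."""
    def step(s, a):
        nxt = (min(n - 1, max(0, s[0] + a[0])),
               min(n - 1, max(0, s[1] + a[1])))
        reward = -1.0 - (100.0 if nxt in obstacles else 0.0)
        return nxt, reward

    V = np.zeros((n, n))
    for _ in range(n_iter):
        new_V = np.zeros_like(V)
        for i in range(n):
            for j in range(n):
                if (i, j) == goal:
                    continue  # absorbing goal keeps value 0
                new_V[i, j] = max(r + gamma * V[nxt]
                                  for nxt, r in (step((i, j), a) for a in MOVES))
        V = new_V

    def policy(s):
        # Greedy one-step lookahead on the converged value function.
        return max(MOVES, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])

    return V, policy, step
```

Rolling out the greedy policy yields a shortest path that steers around the penalized cell, which is the obstacle-avoidance behavior the abstract evaluates.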

  5. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized

  6. Target-oriented chaos control

    International Nuclear Information System (INIS)

    Dattani, Justine; Blake, Jack C.H.; Hilker, Frank M.

    2011-01-01

    Designing intervention methods to control chaotic behavior in dynamical systems remains a challenging problem, in particular for systems that are difficult to access or to measure. We propose a simple, intuitive technique that modifies the values of the state variables directly toward a certain target. The intervention takes into account the difference to the target value, and is a combination of traditional proportional feedback and constant feedback methods. It proves particularly useful when the target corresponds to the equilibrium of the uncontrolled system, and is available or can be estimated from expert knowledge (e.g. in biology and economy). -- Highlights: → We propose a chaos control method that forces the system to a certain target. → The intervention takes into account the difference to the target value. → It can be seen as a combination of proportional and constant feedback methods. → The method is very robust and highly efficient in the long-term. → It is particularly applicable when suitable target values are known or available.
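The proposed intervention, nudging the state toward a target by a fraction of the remaining difference, is easy to demonstrate on the chaotic logistic map. The map, the gain value, and the fixed-point target below are illustrative choices, not taken from the paper:

```python
import numpy as np

def controlled_logistic(x0, target, c, r=4.0, n=200):
    """Iterate the chaotic logistic map with a target-oriented intervention:
    after each natural step, the state is moved toward the target by a
    fraction c of the remaining difference (a blend of proportional and
    constant feedback, as in the abstract)."""
    x = x0
    traj = [x]
    for _ in range(n):
        x = r * x * (1.0 - x)      # uncontrolled chaotic dynamics
        x = x + c * (target - x)   # intervention toward the target
        traj.append(x)
    return np.array(traj)
```

With the target set to the map's unstable fixed point x* = 0.75 (for r = 4) and a gain large enough that |(1 - c) f'(x*)| < 1, the controlled orbit settles on x*, while the uncontrolled map (c = 0) remains chaotic.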

  7. STEGO TRANSFORMATION OF SPATIAL DOMAIN OF COVER IMAGE ROBUST AGAINST ATTACKS ON EMBEDDED MESSAGE

    Directory of Open Access Journals (Sweden)

    Kobozeva A.

    2014-04-01

    Full Text Available One of the main requirements for a steganographic algorithm under development is robustness against disturbing influences, i.e., against attacks on the embedded message. It was shown that guaranteeing stego algorithm robustness does not depend on whether the additional information is embedded into the spatial or the transform domain of the cover image. Given the practical advantages of the spatial domain of the cover image for organizing the embedding and extraction processes, a sufficient condition for ensuring robustness of such a stego transformation is obtained in this work. It is shown that, for embedding that is robust against attacks on the embedded message, the brightness correction applied to the pixels of a cover image block corresponds to a comparable correction of the maximum singular value of the block's matrix. Recommendations are given for selecting the size of the cover image block used in the stego transformation, as one of the parameters determining the calculation error of the stego message. Given the inverse relationship between the capacity of the stego channel being organized and the size of the cover image block, the value l=8 is recommended.
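The link between a spatial-domain brightness correction and the block's maximum singular value can be checked numerically. The sketch below uses illustrative parameters (it is not the paper's algorithm): adding a uniform brightness offset δ to an 8×8 block whose top singular vectors are nearly uniform changes the largest singular value by roughly 8δ, which is why embedding by brightness correction can be analyzed through σ₁.

```python
import numpy as np

# A smooth 8x8 cover block: large uniform component plus mild texture.
rng = np.random.default_rng(0)
n = 8
block = 128.0 + 10.0 * rng.standard_normal((n, n))

delta = 2.0  # uniform brightness correction applied to every pixel
s0 = np.linalg.svd(block, compute_uv=False)[0]          # largest singular value
s1 = np.linalg.svd(block + delta, compute_uv=False)[0]  # after the shift
change = s1 - s0  # approximately n * delta for a near-uniform block
```

To first order, σ₁(B + δJ) − σ₁(B) ≈ δ·u₁ᵀJ·v₁, which equals n·δ when the leading singular vectors are close to the uniform vector, as they are for a typical bright, low-texture block.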

  8. Scheduling with target start times

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Velde, van de S.L.; Klein Haneveld, W.K.; Vrieze, O.J.; Kallenberg, L.C.M.

    1997-01-01

    We address the single-machine problem of scheduling n independent jobs subject to target start times. Target start times are essentially release times that may be violated at a certain cost. The goal is to minimize an objective function that is composed of total completion time and maximum

  9. Including robustness in multi-criteria optimization for intensity-modulated proton therapy

    Science.gov (United States)

    Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David

    2012-02-01

    We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as between robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning.
For all cases examined, the robust optimization for
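The "worst objective over pre-computed scenarios" construction is easy to state in code. In this toy sketch the dose-influence matrices, prescription level, and beam weights are random placeholders, not clinical data; it only illustrates how a robustified objective is evaluated as the maximum penalty over the nominal and error scenarios.

```python
import numpy as np

# Hedged sketch: evaluating a robustified objective as the WORST penalty
# over pre-computed dose-influence matrices.
rng = np.random.default_rng(1)
n_vox, n_beams = 50, 10
# scenarios[0] is the nominal dose-influence matrix; the rest stand in for
# error scenarios (shifted patient position, range undershoot/overshoot).
scenarios = [rng.uniform(0.5, 1.5, size=(n_vox, n_beams)) for _ in range(4)]
prescription = 60.0
w = np.full(n_beams, 6.0)  # candidate pencil-beam weights

def underdose_penalty(D, w):
    dose = D @ w  # dose is linear in the beam weights
    return float(np.mean(np.maximum(prescription - dose, 0.0) ** 2))

nominal_obj = underdose_penalty(scenarios[0], w)
robust_obj = max(underdose_penalty(D, w) for D in scenarios)
```

An optimizer that minimizes `robust_obj` instead of `nominal_obj` trades some nominal plan quality for insensitivity to the modeled errors, which is exactly the trade-off the MCO navigation exposes.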

  10. SU-F-T-384: Step and Shoot IMRT, VMAT and Autoplan VMAT Nasopharynx Plan Robustness to Linear Accelerator Delivery Errors

    International Nuclear Information System (INIS)

    Pogson, EM; Hansen, C; Blake, S; Thwaites, D; Arumugam, S; Holloway, L

    2016-01-01

    Purpose: To identify the robustness of different treatment techniques with respect to simulated linac errors, in terms of the dose distribution to the target volume and organs at risk, for step and shoot IMRT (ssIMRT), VMAT and Autoplan-generated VMAT nasopharynx plans. Methods: A nasopharynx patient dataset was retrospectively replanned with three different techniques: 7-beam ssIMRT, a one-arc manually generated VMAT, and a one-arc automatically generated VMAT. Simulated treatment uncertainties in gantry angle, collimator angle, MLC field size and MLC shifts were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5 (degrees or mm) and recalculated in Pinnacle. The mean and maximum doses were calculated for the high-dose PTV, parotids, brainstem, and spinal cord and then compared to the original baseline plan. Results: Simulated gantry angle errors have a <1% effect on the PTV; ssIMRT is the most sensitive. The small collimator errors (±1 and ±2 degrees) affected the mean PTV dose by <2% for all techniques; however, for the ±5 degree errors the mean target dose varied by up to 7% for the Autoplan VMAT, and by up to 10% for the maximum dose to the spinal cord and brainstem, seen in all techniques. The simulated MLC shifts introduced the largest errors for the Autoplan VMAT, with the larger MLC modulation presumably being the cause. The most critical error observed was the MLC field size error, where even small errors of 1 mm caused significant changes to both the PTV and the OARs. The ssIMRT is the least sensitive and the Autoplan the most sensitive, with target over- and under-dosage of up to 20% observed. Conclusion: For a nasopharynx patient the observed plan robustness is highest for the ssIMRT plan and lowest for the Autoplan-generated VMAT plan. This could be caused by the more complex MLC modulation seen for the VMAT plans. This project is supported by a grant from NSW Cancer Council.

  11. SU-F-T-384: Step and Shoot IMRT, VMAT and Autoplan VMAT Nasopharynx Plan Robustness to Linear Accelerator Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Pogson, EM [Institute of Medical Physics, The University of Sydney, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Ingham Institute for Applied Medical Research, Sydney, NSW (Australia); Hansen, C [Laboratory of Radiation Physics, Odense University Hospital, Odense (Denmark); Institute of Clinical Research, University of Southern Denmark, Odense (Denmark); Blake, S; Thwaites, D [Institute of Medical Physics, The University of Sydney, Sydney, New South Wales (Australia); Arumugam, S [Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Holloway, L [Institute of Medical Physics, The University of Sydney, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Laboratory of Radiation Physics, Odense University Hospital, Odense (Denmark); South Western Sydney Clinical School, University of New South Wales, Sydney, NSW (Australia); University of Wollongong, Wollongong, NSW (Australia)

    2016-06-15

    Purpose: To identify the robustness of different treatment techniques with respect to simulated linac errors, in terms of the dose distribution to the target volume and organs at risk, for step and shoot IMRT (ssIMRT), VMAT and Autoplan-generated VMAT nasopharynx plans. Methods: A nasopharynx patient dataset was retrospectively replanned with three different techniques: 7-beam ssIMRT, a one-arc manually generated VMAT, and a one-arc automatically generated VMAT. Simulated treatment uncertainties in gantry angle, collimator angle, MLC field size and MLC shifts were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5 (degrees or mm) and recalculated in Pinnacle. The mean and maximum doses were calculated for the high-dose PTV, parotids, brainstem, and spinal cord and then compared to the original baseline plan. Results: Simulated gantry angle errors have a <1% effect on the PTV; ssIMRT is the most sensitive. The small collimator errors (±1 and ±2 degrees) affected the mean PTV dose by <2% for all techniques; however, for the ±5 degree errors the mean target dose varied by up to 7% for the Autoplan VMAT, and by up to 10% for the maximum dose to the spinal cord and brainstem, seen in all techniques. The simulated MLC shifts introduced the largest errors for the Autoplan VMAT, with the larger MLC modulation presumably being the cause. The most critical error observed was the MLC field size error, where even small errors of 1 mm caused significant changes to both the PTV and the OARs. The ssIMRT is the least sensitive and the Autoplan the most sensitive, with target over- and under-dosage of up to 20% observed. Conclusion: For a nasopharynx patient the observed plan robustness is highest for the ssIMRT plan and lowest for the Autoplan-generated VMAT plan. This could be caused by the more complex MLC modulation seen for the VMAT plans. This project is supported by a grant from NSW Cancer Council.

  12. Fuzzy sliding mode control for maximum power point tracking of a photovoltaic pumping system

    Directory of Open Access Journals (Sweden)

    Sabah Miqoi

    2017-03-01

    Full Text Available In this paper a new maximum power point tracking method based on fuzzy sliding mode control is proposed and employed in a PV water pumping system based on a DC-DC boost converter, to extract maximum power from the solar panel and thereby increase DC motor speed and pumped water quantity. This method combines two different tracking techniques: sliding mode control and fuzzy logic. Our controller is based on sliding mode control; a fuzzy logic stage was then added to give better stability and enhance power production. System modeling, the sliding mode definition and the new control method are presented in this paper. Simulation results, compared with both a pure sliding mode controller and the perturb-and-observe method, demonstrate the effectiveness and robustness of the proposed controller.
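For context, the perturb-and-observe (P&O) baseline that the abstract compares against can be sketched in a few lines; the P-V curve, step size, and peak location below are illustrative, not values from the paper.

```python
# Hedged sketch of perturb-and-observe MPPT: step the operating voltage,
# observe the power, and reverse direction whenever power drops.
# Toy P-V curve with its maximum power point at v = 17.0 V.
def pv_power(v):
    return max(0.0, 100.0 - (v - 17.0) ** 2)  # illustrative, not a panel model

v, dv = 10.0, 0.2          # start below the MPP; fixed perturbation step
p_prev = pv_power(v)
for _ in range(300):
    v += dv
    p = pv_power(v)
    if p < p_prev:         # power dropped: reverse the perturbation direction
        dv = -dv
    p_prev = p
```

P&O converges to the maximum power point but then oscillates around it by the step size, and it can be slow or unstable under rapidly changing irradiance; those weaknesses are what sliding mode and fuzzy refinements aim to fix.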

  13. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion stages. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics: a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method obtained by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost.
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
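The pre-filter idea, suppressing single-pixel scatter noise while preserving extended targets, can be illustrated with a plain 3×3 median filter; the MLAF/AMCF filters in the paper differ in detail, and the image below is synthetic.

```python
import numpy as np

# Hedged sketch: a 3x3 median filter removes isolated speckle (as in SAR
# scatter noise) but keeps an extended target intact. This stands in for
# the spirit of the paper's MLAF pre-filter, not its exact definition.
def median3x3(img):
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)

img = np.zeros((16, 16))
img[8, 8] = 100.0     # isolated speckle spike
img[4:7, 4:7] = 50.0  # extended 3x3 target
out = median3x3(img)
```

The isolated spike is voted out by its eight zero-valued neighbors, while the interior of the extended target survives unchanged; a morphological closing stage (the AMCF role) would then reconnect and solidify target regions before detection.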

  14. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Full Text Available Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion stages. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics: a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method obtained by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic

  15. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor have the engineering profession been able to agree on an interpretation of robustness which facilitates for its uantification. A European COST action TU 601 on ‘Robustness of structures' has started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  16. Analog Fixed Maximum Power Point Control for a PWM Step-downConverter for Water Pumping Installations

    DEFF Research Database (Denmark)

    Beltran, H.; Perez, E.; Chen, Zhe

    2009-01-01

    This paper describes a Fixed Maximum Power Point analog control used in a step-down Pulse Width Modulated power converter. The DC/DC converter drives a DC motor used in small water pumping installations, without any electric storage device. The power supply is provided by PV panels working around their maximum power point, with a fixed operating voltage value. The control circuit implementation is not only simple and cheap, but also robust and reliable. System protections and adjustments are also proposed. Simulations and hardware are reported in the paper for a 150 W water pumping application system. The proposed Optimal Power Point fixed-voltage control system is analyzed in comparison to other, more complex controls.

  17. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    Science.gov (United States)

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at an unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber function and extended the R package 'robustbase' with the re-descending Hampel function to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants.
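The bounded-influence idea can be sketched with a Huber-type weight on Pearson residuals inside a simple gradient fit. This is a toy sketch on simulated data: the cutoff c = 1.345 is the conventional Huber constant, and the code is not the robustbase/Hampel implementation used in the article.

```python
import numpy as np

# Hedged toy sketch of bounded-influence logistic regression: a Huber-type
# weight w_i = min(1, c/|r_i|) on Pearson residuals caps each subject's
# contribution to the estimating equations.
rng = np.random.default_rng(2)
n = 400
x = rng.standard_normal(n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = (rng.uniform(size=n) < p_true).astype(float)
y[:5] = 1.0 - y[:5]  # flip a few labels to act as gross outliers

def fit(c=np.inf, iters=2000, lr=0.05):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):  # gradient ascent on the weighted log-likelihood
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        r = (y - p) / np.sqrt(p * (1.0 - p) + 1e-12)            # Pearson residuals
        wt = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weight
        g = wt * (y - p)
        b0 += lr * g.mean()
        b1 += lr * (g * x).mean()
    return b0, b1

b_std = fit()          # standard ML score equations (all weights = 1)
b_rob = fit(c=1.345)   # bounded-influence fit
```

With c = ∞ every weight is 1 and the update reduces to the ordinary maximum likelihood score; the finite cutoff caps the pull of the flipped-label subjects on the slope estimate, which is the mechanism behind the lower MSE reported for the robust estimators.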

  18. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of a bogie dynamics response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamics response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with a COV of up to 0.1.

  19. Compatibility of detached divertor operation with robust edge pedestal performance

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, A.W., E-mail: leonard@fusion.gat.com [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Makowski, M.A.; McLean, A.G. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Osborne, T.H.; Snyder, P.B. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States)

    2015-08-15

    The compatibility of detached radiative divertor operation with a robust H-mode pedestal is examined in DIII-D. A density scan produced low temperature plasmas at the divertor target, T{sub e} ⩽ 2 eV, with high radiation leading to a factor of ⩾4 drop in peak divertor heat flux. The cold radiative plasma was confined to the divertor and did not extend across the separatrix in the X-point region. A robust H-mode pedestal was maintained with a small degradation in pedestal pressure at the highest densities. The response of the pedestal pressure to increasing density is reproduced by the EPED pedestal model. However, agreement of the EPED model with experiment at high density requires an assumption of reduced diamagnetic stabilization of edge Peeling–Ballooning modes.

  20. Robust design method and thermostatic experiment for multiple piezoelectric vibration absorber system

    International Nuclear Information System (INIS)

    Nambu, Yohsuke; Takashima, Toshihide; Inagaki, Akiya

    2015-01-01

    This paper examines the effects of connecting multiplexing shunt circuits composed of inductors and resistors to piezoelectric transducers so as to improve the robustness of a piezoelectric vibration absorber (PVA). PVAs are well known to be effective at suppressing the vibration of an adaptive structure; their weakness is low robustness to changes in the dynamic parameters of the system, including the main structure and the absorber. In the application to space structures, the temperature-dependency of capacitance of piezoelectric ceramics is the factor that causes performance reduction. To improve robustness to the temperature-dependency of the capacitance, this paper proposes a multiple-PVA system that is composed of distributed piezoelectric transducers and several shunt circuits. The optimization problems that determine both the frequencies and the damping ratios of the PVAs are multi-objective problems, which are solved using a real-coded genetic algorithm in this paper. A clamped aluminum beam with four groups of piezoelectric ceramics attached was considered in simulations and experiments. Numerical simulations revealed that the PVA systems designed using the proposed method had tolerance to changes in the capacitances. Furthermore, experiments using a thermostatic bath were conducted to reveal the effectiveness and robustness of the PVA systems. The maximum peaks of the transfer functions of the beam with the open circuit, the single-PVA system, the double-PVA system, and the quadruple-PVA system at 20 °C were 14.3 dB, −6.91 dB, −7.47 dB, and −8.51 dB, respectively. The experimental results also showed that the multiple-PVA system is more robust than a single PVA in a variable temperature environment from −10 °C to 50 °C. In conclusion, the use of multiple PVAs results in an effective, robust vibration control method for adaptive structures. (paper)

  1. Untapped Therapeutic Targets in the Tumor Microenvironment

    Science.gov (United States)

    2017-08-01

    that harbors the resistant cancer cells is simultaneously targeted. Since activated carcinoma-associated fibroblasts (CAFs) have a prominent role in...epithelial cells (IOSE) or HEYA8 epithelial ovarian cancer cells (EOC) using a Transwell membrane. Inverse -log2 values of the Robust Multi-array Average...barrier for drug transport. Thus, simultaneous targeting of CAFs and cancer cells may be necessary for chemotherapeutic accessibility. To identify

  2. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential....... According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  3. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: (1) constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; (2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it
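The three-stage linear cascade and its rate-limiting-module behavior can be simulated directly; the rates, stimulus, and threshold below are illustrative choices, not fitted values from the paper.

```python
import numpy as np

# Hedged sketch of a three-stage linear signaling cascade (forward Euler).
# Each module's activity x[i] is driven by the module upstream and decays
# at its own "phosphatase" rate d[i]; d[1] is deliberately the slowest
# (rate-limiting), so it dominates the output's response duration.
d = np.array([1.0, 0.2, 1.0])
dt, T = 0.01, 40.0
x = np.zeros(3)
peak = np.zeros(3)
duration_above = 0.0  # time the output x[2] stays above a small threshold
for k in range(int(T / dt)):
    t = k * dt
    u = 1.0 if t < 1.0 else 0.0          # brief stimulus pulse
    inputs = np.array([u, x[0], x[1]])   # linear chain: u -> x0 -> x1 -> x2
    x = x + dt * (inputs - d * x)
    peak = np.maximum(peak, x)
    if x[2] > 0.01:
        duration_above += dt
```

Although the stimulus lasts only one time unit, the output stays elevated for tens of time units because the slow middle module sets the relaxation timescale; perturbing the fast modules' rates changes this duration far less than perturbing the rate-limiting one, which is the masking effect the abstract describes.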

  4. CALiPER Report 20.3: Robustness of LED PAR38 Lamps

    Energy Technology Data Exchange (ETDEWEB)

    Poplawski, Michael E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Royer, Michael P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brown, Charles C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-01

    Three samples each of 40 Series 20 PAR38 lamp models underwent multi-stress testing, whereby samples were subjected to increasing levels of simultaneous thermal, humidity, electrical, and vibrational stress. The results do not explicitly predict expected lifetime or reliability, but they can be compared with one another, as well as with benchmark conventional products, to assess the relative robustness of the product designs. On average, the 32 LED lamp models tested were substantially more robust than the conventional benchmark lamps. As with other performance attributes, however, there was great variability in the robustness and design maturity of the LED lamps. Several LED lamp samples failed within the first one or two levels of the ten-level stress plan, while all three samples of some lamp models completed all ten levels. One potential area of improvement is design maturity, given that more than 25% of the lamp models demonstrated a difference in failure level between their three samples that was greater than or equal to the maximum for the benchmarks. At the same time, the fact that nearly 75% of the lamp models exhibited better design maturity than the benchmarks is noteworthy, given the relative stage of development of the technology.

  5. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  6. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    Full Text Available An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called "robust-BD", for the class of "general linear models". Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

  7. Fine-Grained Targets for Laser Synthesis of Carbon Nanotubes

    Science.gov (United States)

    Smith, Michael W. (Inventor); Park, Cheol (Inventor)

    2017-01-01

    A mechanically robust, binder-free, inexpensive target for laser synthesis of carbon nanotubes and a method for making same, comprising the steps of mixing prismatic edge natural flake graphite with a metal powder catalyst and pressing the graphite and metal powder mixture into a mold having a desired target shape.

  8. SU-C-19A-07: Influence of Immobilization On Plan Robustness in the Treatment of Head and Neck Cancer with IMPT

    International Nuclear Information System (INIS)

    Bues, M; Anand, A; Liu, W; Shen, J; Keole, S; Patel, S; Morse, B; Kruse, J

    2014-01-01

    Purpose: We evaluated the effect of interposing immobilization devices into the beam's path on the robustness of a head and neck plan. Methods: An anthropomorphic head phantom was placed into a preliminary prototype of a specialized head and neck immobilization device for proton beam therapy. The device consists of a hard low-density shell, a custom mold insert, and a thermoplastic mask to immobilize the patient's head in the shell. This device was provided by CIVCO Medical Solutions for the purpose of evaluating its suitability for proton beam therapy. See Figure 1. Two pairs of treatment plans were generated. The first plan in each pair was a reference plan including only the anthropomorphic phantom, and the second plan in each pair included the immobilization device. In all other respects the plans within a pair were identical. Results: In the case of the simple plan, the degradation of plan robustness was found to be clinically insignificant. In this case, target coverage in the worst-case scenario was reduced from 95% of the target volume receiving 96.5% of prescription dose to 95% of the target volume receiving 96.3% of prescription dose by introducing the immobilization device. In the case of the complex plan, target coverage of the boost volume in the worst-case scenario was reduced from 95% of the boost target volume receiving 97% of prescription dose to 95% of the boost target volume receiving 83% of prescription dose by introducing the immobilization device. See Figure 2. Conclusion: Immobilization devices may have a deleterious effect on plan robustness. Evaluation of the preliminary prototype revealed a variable impact on plan robustness depending on the complexity of the case. Brian Morse is an employee of CIVCO Medical Solutions.

  9. Geometrical differences in target volumes based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography and four-dimensional computed tomography maximum intensity projection images of primary thoracic esophageal cancer.

    Science.gov (United States)

    Guo, Y; Li, J; Wang, W; Zhang, Y; Wang, J; Duan, Y; Shang, D; Fu, Z

    2014-01-01

    The objective of the study was to compare geometrical differences of target volumes based on four-dimensional computed tomography (4DCT) maximum intensity projection (MIP) and 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images of primary thoracic esophageal cancer for radiation treatment. Twenty-one patients with thoracic esophageal cancer sequentially underwent contrast-enhanced three-dimensional computed tomography (3DCT), 4DCT, and 18F-FDG PET/CT thoracic simulation scans during normal free breathing. The internal gross target volume, defined as IGTVMIP, was obtained by contouring on MIP images. The gross target volumes based on PET/CT images (GTVPET) were determined with nine different standardized uptake value (SUV) thresholds and manual contouring: SUV ≥ 2.0, 2.5, 3.0, 3.5 (SUVn); ≥ 20%, 25%, 30%, 35%, 40% of the maximum (percentages of SUVmax, SUVn%). The differences in volume ratio (VR), conformity index (CI), and degree of inclusion (DI) between IGTVMIP and GTVPET were investigated. The mean centroid distance between GTVPET and IGTVMIP ranged from 4.98 mm to 6.53 mm. The VR ranged from 0.37 to 1.34, being significantly (P<0.05) closest to 1 at SUV2.5 (0.94), SUV20% (1.07), or manual contouring (1.10). The mean CI ranged from 0.34 to 0.58, being significantly closest to 1 (P<0.05) at SUV2.0 (0.55), SUV2.5 (0.56), SUV20% (0.56), SUV25% (0.53), or manual contouring (0.58). The mean DI of GTVPET in IGTVMIP ranged from 0.61 to 0.91, and the mean DI of IGTVMIP in GTVPET ranged from 0.34 to 0.86. An SUV threshold setting of SUV2.5, SUV20%, or manual contouring yields the best tumor VR and CI with the internal gross target volume contoured on MIP images of the 4DCT dataset, but 3D PET/CT and 4DCT MIP could not replace each other for motion-encompassing target volume delineation for radiation treatment. © 2014 International Society for Diseases of the Esophagus.
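
    The percentage-of-SUVmax thresholding and the conformity index used in the record above can be sketched in a few lines of numpy. The 2-D "SUV map" below is a toy Gaussian hot spot, not PET data, and all sizes and values are invented:

```python
import numpy as np

# Toy 2-D "SUV map": a Gaussian hot spot on a low background.
yy, xx = np.mgrid[0:64, 0:64]
suv = 0.5 + 8.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 6.0 ** 2))

def gtv_mask(suv, frac_of_max):
    """Segment the target with a percentage-of-SUVmax threshold."""
    return suv >= frac_of_max * suv.max()

def conformity_index(a, b):
    """CI = intersection / union of two binary target volumes."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

ref = gtv_mask(suv, 0.40)             # pretend reference delineation
for frac in (0.20, 0.25, 0.30, 0.40):
    m = gtv_mask(suv, frac)
    print(frac, int(m.sum()), round(conformity_index(m, ref), 2))
```

    Lower thresholds produce monotonically larger volumes, which is why the VR and CI in the study vary systematically with the chosen SUV cut-off.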

  10. Target Matching Recognition for Satellite Images Based on the Improved FREAK Algorithm

    Directory of Open Access Journals (Sweden)

    Yantong Chen

    2016-01-01

    Full Text Available Satellite remote sensing image target matching recognition exhibits poor robustness and accuracy because of unsuitable feature extractors and large data quantities. To address this problem, we propose a new feature extraction algorithm for fast target matching recognition that comprises an improved features from accelerated segment test (FAST) feature detector and a binary fast retina key point (FREAK) feature descriptor. To improve robustness, we extend the FAST feature detector by applying scale space theory and then transform the feature vector acquired by the FREAK descriptor from decimal into binary form. Working in the binary space reduces the quantity of data to be processed and improves matching accuracy. Simulation test results show that our algorithm outperforms other relevant methods in terms of robustness and accuracy.
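
    The speed advantage of binary descriptors such as FREAK comes from matching by Hamming distance (XOR plus a bit count) instead of Euclidean distance. A minimal numpy sketch, using random bit vectors rather than real FREAK output:

```python
import numpy as np

# Matching binary (FREAK-style) descriptors by Hamming distance.
rng = np.random.default_rng(1)
ref = rng.integers(0, 2, size=(50, 512), dtype=np.uint8)   # reference image
perm = rng.permutation(50)
scene = ref[perm].copy()                                   # re-observed, shuffled
scene ^= (rng.random(scene.shape) < 0.02).astype(np.uint8) # ~2% of bits corrupted

# brute-force Hamming matching: XOR, then count differing bits
dist = (scene[:, None, :] ^ ref[None, :, :]).sum(axis=2)
matches = dist.argmin(axis=1)
print((matches == perm).mean())       # fraction of correct matches
```

    Because a wrong match differs in roughly half of the 512 bits while the true match differs only in the corrupted few, nearly all correspondences are recovered.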

  11. Long term memory for noise: evidence of robust encoding of very short temporal acoustic patterns.

    Directory of Open Access Journals (Sweden)

    Jayalakshmi Viswanathan

    2016-11-01

    Full Text Available Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs; the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping the sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants’ discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.
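
    The three stimulus types described above (cyclic, looped, scrambled) are straightforward to construct. A toy numpy sketch, with an assumed sample rate; this is an illustration of the stimulus construction, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 44100                               # assumed sample rate
half = rng.normal(size=fs // 2)
cn = np.concatenate([half, half])        # 1-s cyclic noise: two identical halves

# "looped" version: shift the origin without changing the repeating pattern
looped = np.roll(cn, fs // 4)

# "scrambled" version: chop into 10-ms bits and shuffle them
bit = int(0.010 * fs)
bits = cn[: (len(cn) // bit) * bit].reshape(-1, bit)
scrambled = bits[rng.permutation(len(bits))].ravel()

print(np.array_equal(cn[:fs // 2], cn[fs // 2:]))          # True: cyclic
print(np.array_equal(looped[:fs // 2], looped[fs // 2:]))  # still cyclic
```

    Note that looping preserves the repeating structure exactly (only the origin moves), whereas scrambling preserves the 10-ms local patterns but destroys their order, which is what makes the reported memory transfer across both transformations striking.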

  12. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
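
    For context on the MP criterion, the classical Fitch algorithm computes the parsimony score of a *fixed* tree (the easy subproblem inside the NP-hard reconstruction task the paper approximates). A short sketch with an invented five-leaf tree:

```python
def fitch(tree, leaf_states):
    """Fitch small parsimony: minimum number of state changes on a fixed tree.
    tree: nested tuples of leaf names; leaf_states: leaf name -> character state."""
    cost = 0
    def walk(node):
        nonlocal cost
        if isinstance(node, str):
            return {leaf_states[node]}
        left, right = map(walk, node)
        if left & right:
            return left & right      # intersection: no extra change needed
        cost += 1                    # disjoint sets force one state change
        return left | right
    walk(tree)
    return cost

tree = ((("a", "b"), "c"), ("d", "e"))
states = {"a": "G", "b": "G", "c": "T", "d": "T", "e": "T"}
print(fitch(tree, states))   # -> 1: a single G/T change explains the leaves
```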

  13. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to find the optimal threshold. The final segmentation result was processed by morphology operations to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized among 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.

  14. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion.

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-29

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to find the optimal threshold. The final segmentation result was processed by morphology operations to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized among 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.
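
    The adaptive-threshold step in records 13 and 14 can be illustrated with Otsu's method, a standard histogram-based adaptive threshold (the records do not specify which adaptive algorithm the authors used, so this is an assumption). The image below is synthetic, a bright "fruit" patch on a darker background:

```python
import numpy as np

def otsu_threshold(img):
    """Adaptive (Otsu) threshold: maximize between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

rng = np.random.default_rng(2)
img = rng.normal(60, 10, (64, 64))                    # background
img[20:40, 20:40] = rng.normal(180, 10, (20, 20))     # bright "fruit" region
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t
print(t, int(mask.sum()))   # threshold lands between the two intensity modes
```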

  15. Sliding Mode Extremum Seeking Control Scheme Based on PSO for Maximum Power Point Tracking in Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Her-Terng Yau

    2013-01-01

    Full Text Available An extremum seeking control (ESC) scheme is proposed for maximum power point tracking (MPPT) in photovoltaic power generation systems. The robustness of the proposed scheme to irradiance changes is enhanced by implementing the ESC scheme using a sliding mode control (SMC) law. In the proposed approach, the chattering phenomenon caused by high-frequency switching is suppressed by means of a sliding layer concept. Moreover, in implementing the proposed controller, the optimal value of the gain constant is determined using a particle swarm optimization (PSO) algorithm. The experimental and simulation results show that the proposed PSO-based sliding mode ESC (SMESC) control scheme yields a better transient response, steady-state stability, and robustness than traditional MPPT schemes based on gradient detection methods.
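
    In the spirit of sliding-mode extremum seeking, a tracker can climb the P-V curve using only the sign of the observed power change, without a gradient model. The sketch below is a much-simplified sign-based hill climb on an invented single-peak PV curve, not the paper's SMESC controller:

```python
import numpy as np

def pv_power(v):
    """Toy PV P-V curve with a single maximum (stands in for a real panel model)."""
    return v * np.clip(5.0 * (1.0 - np.exp((v - 22.0) / 2.0)), 0.0, None)

# sign-based extremum seeking: keep stepping the voltage in the direction that
# last increased power; reverse when power drops
v, step = 5.0, 0.2
p_prev, direction = pv_power(v), 1.0
for _ in range(300):
    v += direction * step
    p = pv_power(v)
    if p < p_prev:
        direction = -direction
    p_prev = p

v_grid = np.linspace(0.1, 22.0, 2000)
p_max = pv_power(v_grid).max()
print(v, p_prev, p_max)   # the tracked point oscillates near the true MPP
```

    The residual oscillation around the peak is the discrete analogue of the chattering that the paper's sliding-layer concept is designed to suppress.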

  16. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem with a quadratic objective function and a quadratic constraint. The problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
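
    For orientation, the simplest classical robustification of MVDR is diagonal loading of the sample covariance; the paper's convex-optimization approach is more sophisticated, so the numpy sketch below (with an invented array geometry and signal scenario) only illustrates the baseline idea:

```python
import numpy as np

# MVDR beamformer with diagonal loading on a toy uniform linear array.
rng = np.random.default_rng(3)
m = 8                                        # sensors, half-wavelength spacing

def steering(theta):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

theta_s, theta_i = 0.0, np.deg2rad(40)       # look direction and interferer
s = steering(theta_s)
snap = (steering(theta_i)[:, None] * (3.0 * rng.normal(size=200))
        + 0.1 * (rng.normal(size=(m, 200)) + 1j * rng.normal(size=(m, 200))))
R = snap @ snap.conj().T / 200               # sample covariance (interference + noise)

Rl = R + 0.1 * (np.trace(R).real / m) * np.eye(m)   # diagonal loading
w = np.linalg.solve(Rl, s)
w /= s.conj() @ w                            # enforce the distortionless constraint

resp_s = abs(w.conj() @ steering(theta_s))
resp_i = abs(w.conj() @ steering(theta_i))
print(resp_s, resp_i)                        # unit gain at the source, deep null at the interferer
```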

  17. Stochastic analysis and robust optimization for a deck lid inner panel stamping

    International Nuclear Information System (INIS)

    Hou, Bo; Wang, Wurong; Li, Shuhui; Lin, Zhongqin; Xia, Z. Cedric

    2010-01-01

    FE-simulation and optimization are widely used in the stamping process to improve design quality and shorten the development cycle. However, current simulation and optimization may lead to non-robust results because the variation of material and process parameters is not considered. In this study, a novel stochastic analysis and robust optimization approach is proposed to improve stamping robustness, where the uncertainties are included to reflect manufacturing reality. A meta-model-based stochastic analysis method is developed, where FE-simulation, uniform design and response surface methodology (RSM) are used to construct the meta-model, based on which Monte-Carlo simulation is performed to predict the influence of input parameter variation on the final product quality. By applying the stochastic analysis, uniform design and RSM, the mean and the standard deviation (SD) of product quality are calculated as functions of the controllable process parameters. The robust optimization model composed of mean and SD is constructed and solved, and the result is compared with the deterministic one to show its advantages. It is demonstrated that the product quality variations are reduced significantly, and quality targets (reject rate) are achieved under the robust optimal solution. The developed approach offers rapid and reliable results for engineers dealing with potential stamping problems during the early phase of product and tooling design, saving time and resources.
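
    The core loop of meta-model-based robust optimization (propagate parameter variation through a cheap surrogate by Monte Carlo, then optimize mean plus a multiple of SD) can be sketched with a toy quadratic "quality" surface. Everything below is invented for illustration; the paper's meta-model comes from FE-simulation and RSM:

```python
import numpy as np

rng = np.random.default_rng(8)

def quality(x, noise):
    """Toy response surface: quality vs. a controllable parameter x and a
    noisy material parameter."""
    return (x - 2.0) ** 2 + noise * x + 1.0

def mc_mean_sd(x, n=200_000):
    noise = rng.normal(0.0, 0.3, size=n)     # material/process variation
    q = quality(x, noise)
    return q.mean(), q.std()

xs = np.linspace(0.0, 4.0, 81)
stats = [mc_mean_sd(x) for x in xs]
x_det = xs[np.argmin([m for m, s in stats])]          # deterministic optimum
x_rob = xs[np.argmin([m + 3 * s for m, s in stats])]  # robust (mean + 3*SD) optimum
print(x_det, x_rob)    # the robust optimum shifts toward smaller x, where SD is lower
```

    The shift between the two optima is exactly the trade-off the record describes: the robust solution accepts a slightly worse nominal mean in exchange for less variation.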

  18. Enhancing proton acceleration by using composite targets

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B.; Bulanov, S. V.; Esirkepov, T. Zh.; Kando, M.; Pegoraro, F.; Leemans, W. P.

    2015-07-10

    Efficient laser ion acceleration requires high laser intensities, which can only be obtained by tightly focusing laser radiation. In the radiation pressure acceleration regime, tight focusing of the laser driver leads to a fundamental limit on the maximum attainable ion energy, set by the laser pulse group velocity, as well as another limit connected with the transverse expansion of the accelerated foil and the consequent onset of foil transparency. These limits can be relaxed by using composite targets, consisting of a thin foil followed by a near-critical-density slab. Such targets provide guiding of the laser pulse inside a self-generated channel, and background electrons, snowplowed by the pulse, compensate for the transverse expansion. The use of composite targets results in a significant increase in maximum ion energy compared to the single-foil target case.

  19. Robustness analysis of interdependent networks under multiple-attacking strategies

    Science.gov (United States)

    Gao, Yan-Li; Chen, Shi-Ming; Nie, Sen; Ma, Fei; Guan, Jun-Jie

    2018-04-01

    The robustness of complex networks under attacks largely depends on the structure of the network and the nature of the attacks. Previous research on interdependent networks has focused on two types of initial attack: random attack and degree-based targeted attack. In this paper, a deliberate attack function is proposed, from which six kinds of deliberate attacking strategies can be derived by adjusting tunable parameters. Moreover, the robustness of four types of interdependent networks (BA-BA, ER-ER, BA-ER and ER-BA) with different coupling modes (random, positive and negative correlation) is evaluated under the different attacking strategies. It is found that the positive coupling mode makes the vulnerability of the interdependent network depend entirely on the most vulnerable sub-network under deliberate attacks, whereas the random and negative coupling modes make the vulnerability depend mainly on the sub-network under attack. The robustness of the interdependent network is enhanced as the degree-degree correlation coefficient varies from positive to negative. Therefore, the negative coupling mode is relatively more optimal than the others and can substantially improve the robustness of the ER-ER network and the ER-BA network. In terms of attacking strategies on interdependent networks, the degree information of a node is more valuable than its betweenness. In addition, we found a more efficient attacking strategy for each coupled interdependent network and proposed the corresponding protection strategy for suppressing cascading failure. Our results can be very useful for the safety design and protection of interdependent networks.
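
    The contrast between random and degree-targeted attacks is easy to reproduce on a single-layer graph (the paper studies coupled interdependent networks, which this toy stdlib sketch does not model). Giant-component size after removing the same number of nodes serves as the robustness measure:

```python
import random

# Degree-targeted vs. random node removal on an Erdos-Renyi graph.
random.seed(4)
n, p = 300, 0.02
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

k = 60
hubs = set(sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k])
rand = set(random.sample(range(n), k))
gc_hub, gc_rand = giant_component(adj, hubs), giant_component(adj, rand)
print(gc_hub, gc_rand)   # the targeted attack fragments the network more
```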

  20. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of timber structures and discusses the consequences of such robustness issues for the future development of timber structures.

  1. Robust parameter design for integrated circuit fabrication procedure with respect to categorical characteristic

    International Nuclear Information System (INIS)

    Sohn, S.Y.

    1999-01-01

    We consider a robust parameter design of the process for forming contact windows in complementary metal-oxide semiconductor (CMOS) circuits. Robust design is often used to find the optimal levels of process conditions that provide output of consistent quality as close as possible to a target value. In this paper, we analyze the results of a fractional factorial design of nine factors: mask dimension, viscosity, bake temperature, spin speed, bake time, aperture, exposure time, developing time, and etch time, where the outcome of the experiment is measured in terms of a categorized window size with five categories. Random effect analysis is employed to model both the mean and variance of the categorized window size as functions of the controllable factors as well as random errors. Empirical Bayes procedures are then utilized to fit both models, and eventually to find the robust design of the CMOS circuit process by means of a bootstrap resampling approach.
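
    Bootstrap resampling, the final step mentioned in the record, is simple to sketch: resample the observed data with replacement and read off percentile intervals for the quantities of interest. The data below are toy normal draws, not the paper's window-size measurements:

```python
import numpy as np

# Bootstrap percentile intervals for the mean and SD of a quality characteristic.
rng = np.random.default_rng(6)
window = rng.normal(3.0, 0.4, size=30)       # 30 toy "window size" measurements

boots = rng.choice(window, size=(2000, window.size), replace=True)
mean_bs = boots.mean(axis=1)
sd_bs = boots.std(axis=1, ddof=1)

ci_mean = np.percentile(mean_bs, [2.5, 97.5])
ci_sd = np.percentile(sd_bs, [2.5, 97.5])
print(ci_mean, ci_sd)
```

    In a robust design setting, intervals like these would be computed per process-condition level, and the level whose mean is on target with the smallest variance would be selected.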

  2. Adaptive double-integral-sliding-mode-maximum-power-point tracker for a photovoltaic system

    Directory of Open Access Journals (Sweden)

    Bidyadhar Subudhi

    2015-10-01

    Full Text Available This study proposes an adaptive double-integral-sliding-mode-controller maximum-power-point tracker (DISMC-MPPT) for maximum-power-point (MPP) tracking of a photovoltaic (PV) system. The objective is to design a DISMC-MPPT with a new adaptive double-integral sliding surface so that MPP tracking is achieved with reduced chattering and reduced steady-state error in the output voltage or current. The proposed adaptive DISMC-MPPT possesses a simple and efficient PWM-based control structure that keeps the switching frequency constant. The controller is designed considering the reaching and stability conditions to provide robustness and stability. The performance of the proposed adaptive DISMC-MPPT is verified through both MATLAB/Simulink simulation and experiment using a 0.2 kW prototype PV system. From the obtained results, this DISMC-MPPT is found to be more efficient than Tan's and Jiao's DISMC-MPPTs.

  3. Robust Short-Lag Spatial Coherence Imaging.

    Science.gov (United States)

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.

  4. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  5. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political priorities.

  6. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    Full Text Available The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick Case, a Swedish legal scandal where a patient at a mental institution was wrongfully convicted of eight murders.

  7. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as it is closely related to Working Group 3 (Robustness of systems) of the COST E55 project. Finally, an example of a robustness evaluation of a wide-span timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45 m. Reliability analysis of the main members and the system is conducted, and based on this a robustness analysis is performed.

  8. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguished. Second, theoretical analysis is provided to prove under what condition the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  9. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
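
    The least-squares vs. Poisson-MLE comparison in the record above can be illustrated with a tiny unmixing example. The two "standard" component spectra below are toy Gaussians (the real tool uses measured radioxenon beta-gamma spectra), and the Poisson MLE is obtained with multiplicative MLEM-style updates, one standard way to maximize a Poisson likelihood:

```python
import numpy as np

# Unmix two known component spectra from a Poisson-noisy measured spectrum.
rng = np.random.default_rng(5)
chan = np.arange(64)
s1 = np.exp(-0.5 * ((chan - 20) / 4.0) ** 2)
s1 /= s1.sum()
s2 = np.exp(-0.5 * ((chan - 35) / 6.0) ** 2)
s2 /= s2.sum()
a_true = np.array([400.0, 250.0])            # true component activities
y = rng.poisson(a_true[0] * s1 + a_true[1] * s2)

S = np.column_stack([s1, s2])
a_ls = np.linalg.lstsq(S, y, rcond=None)[0]  # least-squares estimate

a_ml = np.array([1.0, 1.0])                  # Poisson MLE via MLEM-style updates
for _ in range(500):
    yhat = S @ a_ml
    a_ml *= (S.T @ (y / np.maximum(yhat, 1e-12))) / S.sum(axis=0)

print(a_ls, a_ml)    # both estimates land near the true activities (400, 250)
```

    At high counts the two estimators agree closely; the paper's point is that in the low-count regime the Poisson likelihood is the statistically correct model, so the MLE degrades more gracefully than least squares.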

  10. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as forwarded by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite and a simulated version of the qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  11. Robust parameter design for integrated circuit fabrication procedure with respect to categorical characteristic

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, S.Y

    1999-12-01

We consider a robust parameter design of the process for forming contact windows in complementary metal-oxide semiconductor (CMOS) circuits. Robust design is often used to find the optimal levels of process conditions that provide output of consistent quality close to a target value. In this paper, we analyze the results of a fractional factorial design of nine factors: mask dimension, viscosity, bake temperature, spin speed, bake time, aperture, exposure time, developing time, and etch time, where the outcome of the experiment is measured in terms of a categorized window size with five categories. Random effect analysis is employed to model both the mean and the variance of the categorized window size as functions of some controllable factors as well as random errors. Empirical Bayes procedures are then utilized to fit both models and, eventually, to find the robust design of the CMOS circuit process by means of a bootstrap resampling approach.

  12. Investigation on changes of modularity and robustness by edge-removal mutations in signaling networks.

    Science.gov (United States)

    Truong, Cong-Doan; Kwon, Yung-Keun

    2017-12-21

Biological networks consisting of molecular components and interactions are represented by graph models. Some studies based on such models have analyzed the relationship between structural characteristics and dynamical behaviors in signaling networks. However, little attention has been paid to changes of modularity and robustness in mutant networks. In this paper, we investigated the changes of modularity and robustness caused by edge-removal mutations in three signaling networks. We first observed that both the modularity and the robustness increased on average in the mutant networks under edge-removal mutations. However, the modularity change was negatively correlated with the robustness change. This implies that it is unlikely that both the modularity and the robustness values simultaneously increase under edge-removal mutations. Another interesting finding is that the modularity change was positively correlated with the degree, the number of feedback loops, and the edge betweenness of the removed edges, whereas the robustness change was negatively correlated with them. We note that these results were consistently observed in randomly structured networks. Additionally, we identified two groups of genes which are incident to the highly-modularity-increasing and the highly-robustness-decreasing edges with respect to the edge-removal mutations, respectively, and observed that they are likely to be central, forming a connected component of considerably large size. The gene-ontology enrichment of each of these gene groups was significantly different from that of the rest of the genes. Finally, we showed that the highly-robustness-decreasing edges can be promising edgetic drug targets, which validates the usefulness of our analysis. Taken together, the analysis of changes of robustness and modularity against edge-removal mutations can be useful for unraveling novel dynamical characteristics underlying signaling networks.

  13. Intelligent and robust optimization frameworks for smart grids

    Science.gov (United States)

    Dhansri, Naren Reddy

A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be maximized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. 
Therefore, the proposed framework offers a new worst-case deterministic

  14. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  15. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  16. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  17. On the validity and robustness of the scale error phenomenon in early childhood.

    Science.gov (United States)

    DeLoache, Judy S; LoBue, Vanessa; Vanderborght, Mieke; Chiong, Cynthia

    2013-02-01

"Scale errors" is a term referring to very young children's serious efforts to perform actions on miniature replica objects that are impossible due to great differences between the size of the child's body and the size of the target objects. We report three studies providing further documentation of scale errors and investigating the validity and robustness of the phenomenon. In the first, we establish that 2-year-olds' behavior in response to prompts to "pretend" with miniature replica objects differs dramatically from scale errors. The second and third studies address the robustness of the phenomenon and its relative imperviousness to attempts to influence the rate of scale errors. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    Science.gov (United States)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures, a low median PDE (and interquartile range, IQR), and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric
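Of the four metrics, gradient correlation (GC) is the simplest to state: the normalized cross-correlation of the gradient images. A minimal sketch on a synthetic 2D image (a simplification — the study evaluates these metrics inside a full 3D-2D projection pipeline) is:

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation of two arrays.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gradient_correlation(fixed, moving):
    # GC: average NCC of the x- and y-gradient images.
    gy_f, gx_f = np.gradient(fixed)
    gy_m, gx_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

# Toy "radiograph": a bright rectangle on a dark background
img = np.zeros((64, 64))
img[20:40, 25:45] = 1.0
shifted = np.roll(img, 2, axis=1)          # 2-pixel misalignment

gc_aligned = gradient_correlation(img, img)
gc_shifted = gradient_correlation(img, shifted)
```

Because GC compares edges rather than intensities, it is insensitive to global intensity offsets, which is part of what makes gradient-based metrics attractive under content mismatch.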

  19. Limited Impact of Setup and Range Uncertainties, Breathing Motion, and Interplay Effects in Robustly Optimized Intensity Modulated Proton Therapy for Stage III Non-small Cell Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

Inoue, Tatsuya [Department of Radiology, Juntendo University Urayasu Hospital, Chiba (Japan); Widder, Joachim; Dijk, Lisanne V. van [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Takegawa, Hideki [Department of Radiation Oncology, Kansai Medical University Hirakata Hospital, Osaka (Japan); Koizumi, Masahiko; Takashina, Masaaki [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru [Department of Radiation Oncology, Juntendo University Graduate School of Medicine, Tokyo (Japan); Saito, Anneyuko I. [Department of Radiology, Juntendo University Urayasu Hospital, Chiba (Japan); Department of Radiation Oncology, Juntendo University Graduate School of Medicine, Tokyo (Japan); Sasai, Keisuke [Department of Radiation Oncology, Juntendo University Graduate School of Medicine, Tokyo (Japan); Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Korevaar, Erik W., E-mail: e.w.korevaar@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2016-11-01

Purpose: To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Methods and Materials: Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of the prescribed dose) and homogeneity index (D2 − D98, where D2 and D98 are the least doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for the lung, esophagus, heart and spinal cord, were compared with those of clinical volumetric modulated arc therapy plans. Results: The TC and homogeneity index for all plans were within clinical limits when considering the breathing motion and interplay effects independently. The setup and range uncertainties had a larger effect when considering their combined effect. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations. However, the TC remained >98% for robust 7-mm evaluations for all patients. The organ-at-risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity, and mean lung and heart dose parameters were reduced by about 40% and 60%, respectively. Conclusions: In robustly optimized IMPT for stage III NSCLC, the setup and range
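The two plan metrics used throughout this record, target coverage and the homogeneity index, are straightforward to compute from a voxel dose array. The Gaussian toy dose distribution and 60 Gy prescription below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

prescription = 60.0                                 # hypothetical prescribed dose (Gy)
dose = rng.normal(prescription, 1.5, size=10_000)   # toy voxel doses in the target volume

# D2 = least dose received by the hottest 2% of the volume (98th percentile);
# D98 = least dose received by 98% of the volume (2nd percentile).
d2, d98 = np.percentile(dose, [98, 2])
homogeneity_index = d2 - d98

# TC = percentage of the volume receiving at least 95% of the prescription.
target_coverage = (dose >= 0.95 * prescription).mean() * 100.0
```

A smaller D2 − D98 means a more homogeneous dose; the study's 98% TC threshold corresponds to requiring `target_coverage >= 98.0`.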

  20. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as of uncertainty sources on potential pumping and observation locations.
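The max-min selection step itself is simple once EED values are available; the EED table, posterior probabilities, and threshold below are made-up numbers solely to show the criterion, not results from the study:

```python
import numpy as np

# Hypothetical expected entropy decrease (EED) for 4 candidate designs
# under 3 competing conceptual models (rows: designs, cols: models).
eed = np.array([
    [2.1, 0.4, 1.9],
    [1.5, 1.3, 1.4],
    [2.5, 0.2, 2.6],
    [1.1, 1.0, 1.2],
])

# Constraint from the abstract: the highest expected posterior model
# probability must reach a desired threshold (values invented).
post = np.array([0.95, 0.92, 0.70, 0.91])
feasible = post >= 0.90

# Max-min criterion: among feasible designs, pick the one whose
# worst-case (minimum over models) information gain is largest --
# this worst-case gain is the "firm" information.
worst_case = np.where(feasible, eed.min(axis=1), -np.inf)
robust_design = int(np.argmax(worst_case))
```

Design 2 has the highest best-case EED but the lowest worst-case EED (and violates the threshold), so the max-min rule discards it in favor of the design whose information gain is guaranteed across all models.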

  1. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  2. Validation approach for a fast and simple targeted screening method for 75 antibiotics in meat and aquaculture products using LC-MS/MS.

    Science.gov (United States)

    Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique

    2017-04-01

An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The validation strategy was applied to a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of qualitative screening methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method is valid for detecting and identifying 73 of the 75 antibiotics studied in meat and aquaculture products at the validation levels.

  3. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

The estimation of gas production rates from hydrate-bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady-state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate-free sediment, the hydrate-bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  4. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
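The core idea behind ML superposition — down-weighting variable regions instead of fitting all atoms equally — can be sketched with a weighted Kabsch alignment. The weights here are hand-picked for illustration; THESEUS derives them from its likelihood model rather than taking them as input:

```python
import numpy as np

def weighted_kabsch(P, Q, w):
    # Optimal rotation R and translation t minimizing
    # sum_i w_i * ||R p_i + t - q_i||^2 (weighted Kabsch/SVD solution).
    w = w / w.sum()
    cP = (w[:, None] * P).sum(axis=0)
    cQ = (w[:, None] * Q).sum(axis=0)
    P0, Q0 = P - cP, Q - cQ
    H = (w[:, None] * P0).T @ Q0
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(2)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])

P = rng.normal(size=(30, 3))                   # toy 30-atom "structure"
Q = P @ R_true.T + t_true                      # rigidly moved copy
Q[:5] += rng.normal(scale=2.0, size=(5, 3))    # 5 "variable" atoms

w = np.ones(30)
w[:5] = 0.05                                   # down-weight the variable region
R, t = weighted_kabsch(P, Q, w)

core = P[5:] @ R.T + t - Q[5:]
rmsd_core = float(np.sqrt((core ** 2).sum(axis=1).mean()))
```

With the variable atoms down-weighted, the fit recovers the underlying rigid motion and the core superposes almost exactly; an unweighted fit would let the five noisy atoms drag the whole alignment.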

  5. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of X-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)

  6. Improved OAM-Based Radar Targets Detection Using Uniform Concentric Circular Arrays

    Directory of Open Access Journals (Sweden)

    Mingtuan Lin

    2016-01-01

Full Text Available Without any relative motion or beam scanning, the novel Orbital-Angular-Momentum- (OAM-) based radar target detection technique using uniform concentric circular arrays (UCCAs) offers azimuthal estimation ability, which provides a new perspective for radar system design. However, the main estimation method under this scheme, the Fast Fourier Transform (FFT), suffers from low resolution. As a solution, this paper rebuilds the OAM-based radar target detection model and introduces the multiple signal classification (MUSIC) algorithm to improve the resolution for detecting targets within the main lobes. The spatial smoothing technique is proposed to tackle the coherence problem introduced by the proposed model. Analytical study and simulation demonstrate the superresolution estimation capacity the MUSIC algorithm can achieve for detecting targets within the main lobes. The performance of the MUSIC algorithm in detecting targets not illuminated by the main lobes is further evaluated. Although the MUSIC algorithm loses its resolution advantage in this case, its estimation is more robust than that of the FFT method. Overall, the proposed MUSIC algorithm for the OAM-based radar system demonstrates superresolution ability for detecting targets within the main lobes and good robustness for targets outside the main lobes.
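The MUSIC step can be illustrated on a plain uniform linear array — a deliberate simplification of the paper's concentric circular arrays, with all array and signal parameters invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

M, d = 8, 0.5                                  # sensors, spacing in wavelengths
angles_true = np.deg2rad([-20.0, 25.0])        # two source directions

def steering(theta):
    # Array response of an M-element ULA for direction(s) theta (radians).
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

# Simulate snapshots: two uncorrelated sources plus white noise.
n_snap = 200
A = steering(angles_true)                      # M x 2 steering matrix
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
noise = 0.1 * (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap)))
X = A @ S + noise

Rxx = X @ X.conj().T / n_snap                  # sample covariance
eigval, eigvec = np.linalg.eigh(Rxx)           # ascending eigenvalues
En = eigvec[:, :M - 2]                         # noise subspace (M - 2 smallest)

# MUSIC pseudo-spectrum: peaks where steering vectors are orthogonal
# to the noise subspace.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
proj = np.linalg.norm(En.conj().T @ steering(grid), axis=0)
spectrum = 1.0 / proj ** 2

peaks = [i for i in range(1, grid.size - 1)
         if spectrum[i] > spectrum[i - 1] and spectrum[i] >= spectrum[i + 1]]
top2 = sorted(peaks, key=lambda i: spectrum[i])[-2:]
est_deg = np.sort(np.rad2deg(grid[top2]))
```

The subspace projection is what buys MUSIC its superresolution relative to an FFT beamformer; the coherence issue the paper addresses with spatial smoothing does not arise here because the two simulated sources are uncorrelated.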

  7. Robust

    DEFF Research Database (Denmark)

    2017-01-01

‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish

  8. Ca-48 targets - Home and abroad!

    Science.gov (United States)

    Greene, John P.; Carpenter, Michael; Janssens, Robert V. F.

    2018-05-01

    Using the method of reduction/distillation, high-purity films of robust and ductile calcium metal were prepared for use as targets in nuclear physics experiments. These targets, however, are extremely air-sensitive and procedures must be developed for their handling and use without exposure to the air. In most instances, the thin 48Ca target is used on a carrier foil (backing) and a thin covering film of similar material is employed to further reduce re-oxidation. Un-backed metallic targets are rarely produced due to these concerns. In addition, the low natural abundance of the isotope 48Ca provided an increased incentive for the best efficiencies available in their preparation. Here, we describe the preparation of 48Ca targets employing a gold backing and thin gold cover for use at home, Argonne National Laboratory (ANL), as well as abroad, at Osaka University. For the overseas shipments, much care and preparation were necessary to ensure good targets and safe arrival to the experimental facilities.

  9. Tracking Target and Spiral Waves

    DEFF Research Database (Denmark)

    Jensen, Flemming G.; Sporring, Jon; Nielsen, Mads

    2002-01-01

A new algorithm for analyzing the evolution of patterns of spiral and target waves in large aspect ratio chemical systems is introduced. The algorithm does not depend on finding the spiral tip but locates the center of the pattern by a new concept, called the spiral focus, which is defined by the evolutes of the actual spiral or target wave. With the use of Gaussian smoothing, a robust method is developed that permits the identification of target and spiral foci independently of the wave profile. Examples of an analysis of long image sequences from experiments with the Belousov–Zhabotinsky reaction catalyzed by ruthenium-tris-bipyridyl are presented. Moving target and spiral foci are found, and the speed and direction of movement of single as well as double spiral foci are investigated. For the experiments analyzed in this paper it is found that the movement of a focus correlates with foci...

  10. Influence of micromachined targets on laser accelerated proton beam profiles

    Science.gov (United States)

    Dalui, Malay; Permogorov, Alexander; Pahl, Hannes; Persson, Anders; Wahlström, Claes-Göran

    2018-03-01

High intensity laser-driven proton acceleration from micromachined targets is studied experimentally in the target-normal-sheath-acceleration regime. Conical pits are created on the front surface of flat aluminium foils of initial thickness 12.5 and 3 μm using a series of low-energy pulses (0.5-2.5 μJ). Proton acceleration from such micromachined targets is compared with that from flat foils of equivalent thickness at a laser intensity of 7 × 10^19 W cm^-2. The maximum proton energy obtained from targets machined from 12.5 μm thick foils is found to be slightly lower than that of flat foils of equivalent remaining thickness, and the angular divergence of the proton beam is observed to increase as the depth of the pit approaches the foil thickness. Targets machined from 3 μm thick foils, on the other hand, show evidence of increased maximum proton energy when the depths of the structures are small. Furthermore, shallow pits on 3 μm thick foils are found to be efficient in reducing the proton beam divergence by a factor of up to three compared to that obtained from flat foils, while maintaining the maximum proton energy.

  11. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

This study considers the problem of enhancing railway timetable robustness without adding slack time and hence increasing travel time. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers

  12. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison.

    Science.gov (United States)

    van de Schoot, A J A J; Visser, J; van Kesteren, Z; Janssen, T M; Rasch, C R N; Bel, A

    2016-02-21

The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configurations. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. 
For all patients, the defined four-beam configuration was found optimal
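Pareto dominance itself is cheap to check once each plan is scored; a brute-force non-dominated filter over hypothetical plan scores (random numbers standing in for the study's coverage and OAR dose values) looks like:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical scores for 200 candidate plans. All objectives are cast
# as minimization: coverage is negated, OAR doses are minimized as-is.
n = 200
pts = np.column_stack([
    -rng.uniform(95.0, 100.0, n),   # -CTV D99% (maximize coverage)
    rng.uniform(0.0, 50.0, n),      # rectum V30Gy (minimize)
    rng.uniform(0.0, 60.0, n),      # bladder V40Gy (minimize)
])

def pareto_front(P):
    # A point is non-dominated if no other point is <= in every
    # objective and strictly < in at least one.
    keep = []
    for i, p in enumerate(P):
        dominated = np.any(np.all(P <= p, axis=1) & np.any(P < p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(pts)
```

The study's observation that every 2-beam plan was dominated by 3- and 4-beam plans corresponds, in these terms, to the 2-beam plans never surviving this filter when all configurations are pooled.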

  13. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison

    International Nuclear Information System (INIS)

    Van de Schoot, A J A J; Visser, J; Van Kesteren, Z; Rasch, C R N; Bel, A; Janssen, T M

    2016-01-01

The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D99%) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D99%, rectum V30Gy and bladder V40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configurations. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D99% on average by 0.2 Gy and decreased the median rectum V30Gy and median bladder V40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. 
For all patients, the defined four-beam configuration was found optimal in

  14. Robustness of third family solutions for hybrid stars against mixed phase effects

    Science.gov (United States)

    Ayriyan, A.; Bastian, N.-U.; Blaschke, D.; Grigorian, H.; Maslov, K.; Voskresensky, D. N.

    2018-04-01

    We investigate the robustness of third family solutions for hybrid compact stars with a quark matter core that correspond to the occurrence of high-mass twin stars against a softening of the phase transition by means of a construction that mimics the effects of pasta structures in the mixed phase. We consider a class of hybrid equations of state that exploits a relativistic mean-field model for the hadronic as well as for the quark matter phase. We present parametrizations that correspond to branches of high-mass twin star pairs with maximum masses between 2.05 M⊙ and 1.48 M⊙ having radius differences between 3.2 and 1.5 km, respectively. When compared to a Maxwell construction with a fixed value of critical pressure Pc, the effect of the mixed phase construction consists in the occurrence of a region of pressures around Pc belonging to the coexistence of hadronic and quark matter phases between the onset pressure at PH and the end of the transition at PQ. The maximum broadening which would still allow mass-twin compact stars is found to be (PQ-PH)max≈Pc for all parametrizations within the present class of models. At least the heavier of the neutron stars of the binary merger GW170817 could have been a member of the third family of hybrid stars. We present the example of another class of hybrid star equations of state for which the appearance of the third family branch is not as robust against mixed phase effects as that of the present work.

  15. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    The outliers in the geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its principle of robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion for geomagnetic information, a mathematical expression of the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and the mathematical expression. Then the geomagnetic matching problem is converted to the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.
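As a concrete illustration of the weighting idea (not the paper's exact weight function or Newton scheme), a Huber-type weight can drive an iteratively reweighted location estimate that is barely moved by a single 40 nT outlier, whereas the plain mean is dragged far off:

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Unit weight for small standardized residuals, downweighted outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def robust_location(x, k=1.345, iters=50):
    """Iteratively reweighted estimate of a location parameter."""
    mu = np.median(x)
    for _ in range(iters):
        s = np.median(np.abs(x - mu)) / 0.6745 + 1e-12  # robust scale (MAD)
        w = huber_weight((x - mu) / s, k)
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu
```

With residuals like `[0, 0.1, -0.1, 0.05, 40.0]` (a 40 nT outlier, in this toy setting), the robust estimate stays near zero while the sample mean exceeds 8.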

  16. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    Science.gov (United States)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective implementation of tracking multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association, which combines fuzzy c-means clustering and the joint probabilistic data association (JPDA) algorithm. The algorithm uses the membership value to express the probability of the target originating from each measurement. The membership value is obtained by optimizing the fuzzy c-means clustering objective function under the maximum entropy principle. When considering the effect of shared measurements, we use a correction factor to adjust the association probability matrix to estimate the state of the target. As this algorithm avoids confirmation matrix splitting, it can solve the high computational load problem of the JPDA algorithm. The results of simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
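Maximum-entropy fuzzy memberships have a closed form resembling a Gibbs distribution. The sketch below uses my own notation (λ as the entropy "temperature") and is only an illustration of how such membership values could be computed, not the paper's full tracker:

```python
import numpy as np

def max_entropy_memberships(points, centers, lam=1.0):
    """Membership of each measurement in each cluster via the maximum
    entropy principle: u_ij ∝ exp(-||x_i - c_j||^2 / lam), rows sum to 1.
    Larger lam spreads membership more evenly (higher entropy)."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    u = np.exp(-d2 / lam)
    return u / u.sum(axis=1, keepdims=True)
```

Measurements close to one predicted target position receive membership near 1 for that target; a measurement equidistant from two targets splits its membership between them.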

  17. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameters, large sample properties, and goodness-of-fit tests. It...

  18. The Bering Autonomous Target Detection

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Denver, Troelz; Betto, Maurizio

    2003-01-01

    An autonomous asteroid target detection and tracking method has been developed. The method features near omnidirectionality and focuses on high-speed operations and completeness of search of the near space rather than the traditional faint object search methods, employed presently at the larger...... telescopes. The method has proven robust in operation and is well suited for use onboard spacecraft. As development target for the method and the associated instrumentation the asteroid research mission Bering has been used. Onboard a spacecraft, the autonomous detection is centered around the fully...... autonomous star tracker the Advanced Stellar Compass (ASC). One feature of this instrument is that potential targets are registered directly in terms of date, right ascension, declination, and intensity, which greatly facilitates both tracking search and registering. Results from ground and inflight tests...

  19. MRI definition of target volumes using fuzzy logic method for three-dimensional conformal radiation therapy

    International Nuclear Information System (INIS)

    Caudrelier, Jean-Michel; Vial, Stephane; Gibon, David; Kulik, Carine; Fournier, Charles; Castelain, Bernard; Coche-Dequeant, Bernard; Rousseau, Jean

    2003-01-01

    Purpose: Three-dimensional (3D) volume determination is one of the most important problems in conformal radiation therapy. Techniques of volume determination from tomographic medical imaging are usually based on two-dimensional (2D) contour definition with the result dependent on the segmentation method used, as well as on the user's manual procedure. The goal of this work is to describe and evaluate a new method that reduces the inaccuracies generally observed in the 2D contour definition and 3D volume reconstruction process. Methods and Materials: This new method has been developed by integrating the fuzziness in the 3D volume definition. It first defines semiautomatically a minimal 2D contour on each slice that definitely contains the volume and a maximal 2D contour that definitely does not contain the volume. The fuzziness region in between is processed using possibility functions in possibility theory. A volume of voxels, including the membership degree to the target volume, is then created on each slice axis, taking into account the slice position and slice profile. A resulting fuzzy volume is obtained after data fusion between multiorientation slices. Different studies have been designed to evaluate and compare this new method of target volume reconstruction and a classical reconstruction method. First, target definition accuracy and robustness were studied on phantom targets. Second, intra- and interobserver variations were studied on radiosurgery clinical cases. Results: The absolute volume errors are less than or equal to 1.5% for phantom volumes calculated by the fuzzy logic method, whereas the values obtained with the classical method are much larger than the actual volumes (absolute volume errors up to 72%). With increasing MRI slice thickness (1 mm to 8 mm), the phantom volumes calculated by the classical method are increasing exponentially with a maximum absolute error up to 300%. In contrast, the absolute volume errors are less than 12% for phantom

  20. Inaugural Maximum Values for Sodium in Processed Food Products in the Americas.

    Science.gov (United States)

    Campbell, Norm; Legowski, Barbara; Legetic, Branka; Nilson, Eduardo; L'Abbé, Mary

    2015-08-01

    Reducing dietary salt/sodium is one of the most cost-effective interventions to improve population health. There are five initiatives in the Americas that independently developed targets for reformulating foods to reduce salt/sodium content. Applying selection criteria, recommended by the Pan American Health Organization (PAHO)/World Health Organization (WHO) Technical Advisory Group on Dietary Salt/Sodium Reduction, a consortium of governments, civil society, and food companies (the Salt Smart Consortium) agreed to an inaugural set of regional maximum targets (upper limits) for salt/sodium levels for 11 food categories, to be achieved by December 2016. Ultimately, to substantively reduce dietary salt across whole populations, targets will be needed for the majority of processed and pre-prepared foods. Cardiovascular and hypertension organizations are encouraged to utilize the regional targets in advocacy and in monitoring and evaluation of progress by the food industry. © 2015 Wiley Periodicals, Inc.

  1. 3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M D; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P

    2016-01-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved
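A gradient-orientation similarity compares only the directions of image gradients, which is why it tolerates intensity mismatch from instrumentation. The few-line variant below is a simplified stand-in for the GO metric studied in the paper (the exact formulation there differs):

```python
import numpy as np

def gradient_orientation_similarity(fixed, moving, eps=1e-6):
    """Simplified gradient-orientation score: mean squared cosine of the
    angle between image gradients, evaluated where both gradient
    magnitudes are non-negligible. Only directions are compared, so the
    score ignores intensity scaling between the two images."""
    g0a, g1a = np.gradient(fixed.astype(float))   # per-axis derivatives
    g0b, g1b = np.gradient(moving.astype(float))
    ma = np.hypot(g0a, g1a)
    mb = np.hypot(g0b, g1b)
    valid = (ma > eps) & (mb > eps)
    cos = (g0a * g0b + g1a * g1b)[valid] / (ma * mb)[valid]
    return float(np.mean(cos ** 2))
```

Two images with aligned edges score near 1 regardless of contrast; orthogonal edge directions score near 0, which is what a registration optimizer would push against.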

  2. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....

  3. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
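The one-step-late MAP coupling of longitudinal reconstructions can be sketched for a toy 1D problem. The function `map_slr_osl`, the quadratic difference penalty, and the value of β are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def map_slr_osl(y1, y2, A, beta=0.05, n_iter=200):
    """One-step-late MAP-EM for two longitudinal Poisson datasets with a
    quadratic penalty on voxel-wise differences between the two images
    (a toy sketch of the MAP-SLR idea). The penalty gradient is
    evaluated at the current estimates ("one-step-late")."""
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    x1 = np.ones(A.shape[1])
    x2 = np.ones(A.shape[1])
    for _ in range(n_iter):
        r1 = A.T @ (y1 / np.maximum(A @ x1, 1e-12))
        r2 = A.T @ (y2 / np.maximum(A @ x2, 1e-12))
        # OSL denominator: sensitivity plus penalty gradient
        x1 = x1 * r1 / np.maximum(sens + beta * (x1 - x2), 1e-12)
        x2 = x2 * r2 / np.maximum(sens + beta * (x2 - x1), 1e-12)
    return x1, x2
```

With beta=0 each update reduces to plain ML-EM; the coupling term pulls the two longitudinal images toward each other only where they disagree, which is the mechanism behind the reported noise reduction.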

  4. Robustness Assessment of Spatial Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2012-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of spatial timber structures and will discuss the consequences of such robustness issues related to the future development of timber structures.

  5. Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System.

    Science.gov (United States)

    Hinas, Ajmal; Roberts, Jonathan M; Gonzalez, Felipe

    2017-12-17

    In this paper, a system that uses an algorithm for target detection and navigation and a multirotor Unmanned Aerial Vehicle (UAV) for finding a ground target and inspecting it closely is presented. The system can also be used for accurate and safe delivery of payloads or spot spraying applications in site-specific crop management. A downward-looking camera attached to a multirotor is used to find the target on the ground. The UAV descends to the target and hovers above the target for a few seconds to inspect the target. A high-level decision algorithm based on an OODA (observe, orient, decide, and act) loop was developed as a solution to address the problem. Navigation of the UAV was achieved by continuously sending local position messages to the autopilot via Mavros. The proposed system performed hovering above the target in three different stages: locate, descend, and hover. The system was tested in multiple trials, in simulations and outdoor tests, from heights of 10 m to 40 m. Results show that the system is highly reliable and robust to sensor errors, drift, and external disturbance.
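The locate/descend/hover staging could be modelled as a minimal state machine; the states, thresholds and the `step` function below are hypothetical simplifications of the paper's OODA-loop logic, shown only to make the staged behaviour concrete:

```python
from enum import Enum, auto

class Stage(Enum):
    LOCATE = auto()   # search for the ground target in the camera image
    DESCEND = auto()  # target found: descend while keeping it centred
    HOVER = auto()    # at inspection altitude: hold position

def step(stage, target_visible, altitude, hover_alt=2.0):
    """One decision tick: return the next stage (hypothetical thresholds)."""
    if stage is Stage.LOCATE:
        return Stage.DESCEND if target_visible else Stage.LOCATE
    if stage is Stage.DESCEND:
        if not target_visible:
            return Stage.LOCATE          # lost the target: re-search
        return Stage.HOVER if altitude <= hover_alt else Stage.DESCEND
    return Stage.HOVER
```

Each tick corresponds to one observe-orient-decide-act cycle; the resulting stage would be translated into local position setpoints sent to the autopilot.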

  6. Prediction of Effective Drug Combinations by Chemical Interaction, Protein Interaction and Target Enrichment of KEGG Pathways

    Directory of Open Access Journals (Sweden)

    Lei Chen

    2013-01-01

    Drug combinatorial therapy could be more effective in treating some complex diseases than single agents due to better efficacy and reduced side effects. Although some drug combinations are being used, their underlying molecular mechanisms are still poorly understood. Therefore, it is of great interest to deduce a novel drug combination by their molecular mechanisms in a robust and rigorous way. This paper attempts to predict effective drug combinations by a combined consideration of: (1) chemical interaction between drugs, (2) protein interactions between drugs’ targets, and (3) target enrichment of KEGG pathways. A benchmark dataset was constructed, consisting of 121 confirmed effective combinations and 605 random combinations. Each drug combination was represented by 465 features derived from the aforementioned three properties. Some feature selection techniques, including Minimum Redundancy Maximum Relevance and Incremental Feature Selection, were adopted to extract the key features. A random forest model was built with its performance evaluated by 5-fold cross-validation. As a result, 55 key features providing the best prediction result were selected. These important features may help to gain insights into the mechanisms of drug combinations, and the proposed prediction model could become a useful tool for screening possible drug combinations.

  7. Automatic picker of P & S first arrivals and robust event locator

    Science.gov (United States)

    Pinsky, V.; Polozov, A.; Hofstetter, A.

    2003-12-01

    We report on further development of an automatic all-distance location procedure designed for a regional network. The procedure generalizes the previous "local" one (a ratio of two STAs, calculated in two consecutive and equal time windows, is used instead of the previously used Akaike Information Criterion). "Teleseismic" location is split into two stages: a preliminary and a final one. The preliminary part estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent velocity criterion is used to decide on the strategy of the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to get a first-approximation distance to the source on the basis of the P, Pn, Pg travel-time tables. The distance estimate together with the preliminary azimuth estimate provides first approximations of the source latitude and longitude via the sine and cosine theorems formulated for the spherical triangle. Final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter. The target function is a sum of bell-shaped characteristic functions, used to emphasize true pickings and eliminate outliers. The final solution is the grid point that provides the maximum of the target function. The procedure was applied to a list of ML > 4 earthquakes recorded by the Israel Seismic Network (ISN) in the 1999-2002 time period. Most of them are badly constrained relative to the network. However, location results with an average normalized error relative to bulletin solutions, e = dr/R, of 5% were obtained in each of the distance ranges. The first version of the procedure was incorporated in the national Early Warning System in 2001. Recently, we started to send automatic Early
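A grid search maximizing a sum of bell-shaped characteristic functions might look like the following toy 2D epicentre locator. The origin time is assumed known (zero), and the velocity, σ and grid extents are made-up values, not the abstract's; the point of the sketch is that an outlier pick contributes ~0 to the target function instead of dominating a least-squares fit:

```python
import numpy as np

def locate(stations, picks, v=6.0, sigma=0.5, grid=np.linspace(-50, 50, 101)):
    """Grid-search epicentre (x, y in km): maximise the sum of
    bell-shaped functions of the travel-time residuals. Picks whose
    residual is large (outliers) score ~exp(-large) ≈ 0 and are
    effectively eliminated rather than dragging the solution."""
    best, best_xy = -np.inf, None
    for x in grid:
        for y in grid:
            t = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
            score = np.exp(-((picks - t) / sigma) ** 2).sum()
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```

A refining pass over a finer grid around the returned point, as the abstract describes, would follow the same scoring.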

  8. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263

  9. Interference-robust Air Interface for 5G Small Cells

    DEFF Research Database (Denmark)

    Tavares, Fernando Menezes Leitão

    the existing wireless network infrastructure to the limit. Mobile network operators must invest in network expansion to deal with this problem, but the predicted network requirements show that a new Radio Access Technology (RAT) standard will be fundamental to reach the future target performance. This new 5th...... to the fundamental role of inter-cell interference in this type of networks, the inter-cell interference problem must be addressed since the beginning of the design of the new standard. This Ph.D. thesis deals with the design of an interference-robust air interface for 5G small cell networks. The interference...

  10. Optics robustness of the ATLAS Tile Calorimeter

    CERN Document Server

    Costa Batalha Pedro, Rute; The ATLAS collaboration

    2018-01-01

    TileCal, the central hadronic calorimeter of the ATLAS detector, is composed of plastic scintillators interleaved with iron plates, and wavelength-shifting optical fibres. The optical properties of these components are known to suffer from natural ageing and to degrade due to exposure to radiation. The calorimeter was designed for 10 years of LHC operation at the design luminosity of $10^{34}$ cm$^{-2}$s$^{-1}$. Irradiation tests of scintillators and fibres showed that their light yield decreases by about 10% for the maximum dose expected after the 10 years of LHC operation. The robustness of the TileCal optics components is evaluated using the calibration systems of the calorimeter: the Cs-137 gamma source, laser light, and integrated photomultiplier signals of particles from collisions. It is observed that the loss of light yield increases with exposure to radiation, as expected. The decrease in the light yield during the years 2015-2017, corresponding to the LHC Run 2, will be reported.

  11. Limited Impact of Setup and Range Uncertainties, Breathing Motion, and Interplay Effects in Robustly Optimized Intensity Modulated Proton Therapy for Stage III Non-small Cell Lung Cancer

    International Nuclear Information System (INIS)

    Inoue, Tatsuya; Widder, Joachim; Dijk, Lisanne V. van; Takegawa, Hideki; Koizumi, Masahiko; Takashina, Masaaki; Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru; Saito, Anneyuko I.; Sasai, Keisuke; Veld, Aart A. van't; Langendijk, Johannes A.; Korevaar, Erik W.

    2016-01-01

    Purpose: To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Methods and Materials: Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of prescribed dose) and homogeneity index (D2 − D98, where D2 and D98 are the minimum doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for lung, esophagus, heart and spinal cord were compared with that of clinical volumetric modulated arc therapy plans. Results: The TC and homogeneity index for all plans were within clinical limits when considering the breathing motion and interplay effects independently. The setup and range uncertainties had a larger effect when considering their combined effect. The TC decreased to 98% for robust 7-mm evaluations for all patients. The organ at risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity and mean lung and heart dose parameters reduced by about 40% and 60%, respectively. Conclusions: In robustly optimized IMPT for stage III NSCLC, the setup and range uncertainties, breathing motion, and interplay effects have limited impact on target coverage, dose homogeneity, and

  12. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-emission (EEM) data was developed. The results show...

  13. New designs of LMJ targets for early ignition experiments

    International Nuclear Information System (INIS)

    Clerouin, C; Bonnefille, M; Dattolo, E; Fremerye, P; Galmiche, D; Gauthier, P; Giorla, J; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Poggi, F; Seytor, P

    2008-01-01

    The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 40 laser quads, delivering up to 1.4 MJ and 380 TW. New targets needing reduced laser energy with only a small decrease in robustness are therefore designed for this purpose. A first strategy is to use scaled-down cylindrical hohlraums and capsules, taking advantage of our better understanding of the problem, based on theoretical modelling, simulations and experiments. Another strategy is to work specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, which, together with parametric instabilities, is a crucial drawback of indirect drive. An alternative design is proposed, made up of the nominal 60-quad capsule, named A1040, in a rugby-shaped hohlraum. Robustness evaluations of these different targets are in progress.

  14. New designs of LMJ targets for early ignition experiments

    Energy Technology Data Exchange (ETDEWEB)

    Clerouin, C; Bonnefille, M; Dattolo, E; Fremerye, P; Galmiche, D; Gauthier, P; Giorla, J; Laffite, S; Liberatore, S; Loiseau, P; Malinie, G; Masse, L; Poggi, F; Seytor, P [Commissariat a l' Energie Atomique, DAM-Ile de France, BP 12 91680 Bruyeres-le-Chatel (France)], E-mail: catherine.cherfils@cea.fr

    2008-05-15

    The LMJ experimental plans include the attempt of ignition and burn of an ICF capsule with 40 laser quads, delivering up to 1.4 MJ and 380 TW. New targets needing reduced laser energy with only a small decrease in robustness are therefore designed for this purpose. A first strategy is to use scaled-down cylindrical hohlraums and capsules, taking advantage of our better understanding of the problem, based on theoretical modelling, simulations and experiments. Another strategy is to work specifically on the coupling efficiency parameter, i.e. the ratio of the energy absorbed by the capsule to the laser energy, which, together with parametric instabilities, is a crucial drawback of indirect drive. An alternative design is proposed, made up of the nominal 60-quad capsule, named A1040, in a rugby-shaped hohlraum. Robustness evaluations of these different targets are in progress.

  15. Using a network-based approach and targeted maximum likelihood estimation to evaluate the effect of adding pre-exposure prophylaxis to an ongoing test-and-treat trial.

    Science.gov (United States)

    Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor

    2017-04-01

    Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. 
The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger
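
The cross-validation step described above (data-adaptively selecting from a pre-specified library the candidate that optimizes the analysis) can be illustrated with a toy selector that picks the candidate learner with the lowest cross-validated squared-error risk. This is only a sketch of the general idea: the paper's targeted maximum likelihood estimator selects candidates to maximize the efficiency of the effect estimate, not simple prediction risk, and the two learners below are invented for illustration.

```python
import numpy as np

def cv_select(candidates, X, y, n_folds=5, seed=0):
    """Return the index of the candidate learner with the lowest
    cross-validated squared-error risk. Each candidate is a function
    (X_train, y_train) -> predict(X_new)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    risks = []
    for fit in candidates:
        fold_risk = []
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            predict = fit(X[train], y[train])
            fold_risk.append(np.mean((y[test] - predict(X[test])) ** 2))
        risks.append(np.mean(fold_risk))
    return int(np.argmin(risks))

def mean_learner(X, y):
    """Toy candidate 1: predict the training mean everywhere."""
    mu = y.mean()
    return lambda Xn: np.full(len(Xn), mu)

def linear_learner(X, y):
    """Toy candidate 2: ordinary least-squares line."""
    A = np.c_[np.ones(len(X)), X]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xn: np.c_[np.ones(len(Xn)), Xn] @ coef
```

On data with a genuine linear trend, `cv_select` will pick the linear learner; on pure noise it falls back to the mean.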

  16. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...
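
The alternating-regressions idea mentioned above can be sketched in a few lines: regress the current canonical variate of one block of variables on the other block, renormalize, and iterate to the first canonical pair. Plain least squares is used here for brevity; the robust variant discussed in the abstract would replace it with a robust regression estimator (e.g. an M-estimator). The data below are invented.

```python
import numpy as np

def cca_alternating(X, Y, n_iter=200):
    """First canonical pair by alternating regressions on centered data:
    regress Y's variate on X, renormalize, regress X's variate on Y,
    and repeat until the weights stabilize."""
    a = np.ones(X.shape[1])
    b = np.ones(Y.shape[1])
    for _ in range(n_iter):
        a, *_ = np.linalg.lstsq(X, Y @ b, rcond=None)
        a /= (X @ a).std()
        b, *_ = np.linalg.lstsq(Y, X @ a, rcond=None)
        b /= (Y @ b).std()
    return a, b, float(np.corrcoef(X @ a, Y @ b)[0, 1])
```

When the two blocks share a strong latent signal, the returned correlation approaches the first canonical correlation.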

  17. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-01-01

The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions-of-interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix has an effect on the final outcome.
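
A minimal one-dimensional sketch of the two ingredients above: the MLE-EM reconstruction step (with the transition matrix `A` mapping voxels to detector bins) followed by the single-parameter Gaussian filtering that sets final resolution. The toy system matrix and phantom are invented; the paper's stopping rule and clinical data handling are not reproduced.

```python
import numpy as np

def mlem(A, counts, n_iter=2000):
    """Maximum-likelihood EM reconstruction for emission data:
    multiplicative update x <- x * A^T(counts / Ax) / sens."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, counts / proj, 0.0)
        x = x * (A.T @ ratio) / np.where(sens > 0, sens, 1.0)
    return x

def gaussian_smooth(x, sigma):
    """Gaussian post-filter; the kernel size is the methodology's
    single adjustable parameter."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return np.convolve(x, k / k.sum(), mode="same")
```

With noiseless, consistent data the EM iteration drives the reprojection toward the measured counts while keeping the image non-negative.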

  18. A robust internal control for high-precision DNA methylation analyses by droplet digital PCR.

    Science.gov (United States)

    Pharo, Heidi D; Andresen, Kim; Berg, Kaja C G; Lothe, Ragnhild A; Jeanmougin, Marine; Lind, Guro E

    2018-01-01

Droplet digital PCR (ddPCR) allows absolute quantification of nucleic acids and has potential for improved non-invasive detection of DNA methylation. For increased precision of the methylation analysis, we aimed to develop a robust internal control for use in methylation-specific ddPCR. Two control design approaches were tested: (a) targeting a genomic region shared across members of a gene family and (b) combining multiple assays targeting different pericentromeric loci on different chromosomes. Through analyses of 34 colorectal cancer cell lines, the performance of the control assay candidates was optimized and evaluated, both individually and in various combinations, using the QX200™ droplet digital PCR platform (Bio-Rad). The best-performing control was tested in combination with assays targeting methylated CDO1, SEPT9, and VIM. A 4Plex panel consisting of EPHA3, KBTBD4, PLEKHF1, and SYT10 was identified as the best-performing control. The use of the 4Plex for normalization reduced the variability in methylation values, corrected for differences in template amount, and diminished the effect of chromosomal aberrations. Positive Droplet Calling (PoDCall), an R-based algorithm for standardized threshold determination, was developed, ensuring consistency of the ddPCR results. Implementation of a robust internal control, i.e., the 4Plex, and an algorithm for automated threshold determination, PoDCall, in methylation-specific ddPCR increase the precision of DNA methylation analysis.
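
The normalization role of the 4Plex control can be illustrated with a toy calculation: the methylation-specific signal is expressed relative to the averaged control panel, which corrects for differences in template input between samples. Averaging the four control assays and the copy numbers below are illustrative assumptions, not the published PoDCall formulas.

```python
def normalized_methylation(target_copies, control_copies):
    """Express a methylation-specific ddPCR signal (copies) as a
    percentage of the mean of the internal control assays, so that
    samples with different template input become comparable."""
    panel_mean = sum(control_copies.values()) / len(control_copies)
    return 100.0 * target_copies / panel_mean
```

A sample with twice the template input doubles both target and control copies, leaving the normalized value unchanged.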

  19. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism.

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

Full Text Available To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. Typically, a common assumption used in visual tracking is that the raw video sequences are clear, while real-world data contain significant noise and irrelevant patterns. Consequently, the learned features may not all be relevant, and may be noisy. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism. Therefore, the proposed method can adaptively focus on the task-relevant patterns (i.e., a target object), while ignoring the task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the tracking-specific objects. Finally, to alleviate the tracker drifting problem, inspired by the observation that a visual target should be an object rather than background, we combine an edge box-based object proposal method to further improve the tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method.
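
The point-wise gating idea can be sketched in a few lines: a sigmoid gate in [0, 1] is computed per feature element and multiplies the raw features, softly keeping task-relevant responses and suppressing the rest. The weights below are invented for illustration and are unrelated to the trained CPGDN.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pointwise_gate(features, W_gate, b_gate=0.0):
    """Point-wise gating: compute a sigmoid gate per feature element
    and multiply it into the raw features, performing soft,
    input-dependent feature selection."""
    gate = sigmoid(features @ W_gate + b_gate)
    return features * gate, gate
```

With a strongly positive gate weight the corresponding feature passes through almost unchanged; with a strongly negative one it is suppressed toward zero.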

  20. Robust second-order scheme for multi-phase flow computations

    Science.gov (United States)

    Shahbazi, Khosro

    2017-06-01

A robust high-order scheme for multi-phase flow computations featuring jumps and discontinuities due to shock waves and phase interfaces is presented. The scheme is based on high-order weighted essentially non-oscillatory (WENO) finite volume schemes and high-order limiters to ensure the maximum principle or positivity of the various field variables, including the density, pressure, and order parameters identifying each phase. The two-phase flow model considered, besides the Euler equations of gas dynamics, consists of advection of two parameters of the stiffened-gas equations of state characterizing each phase. The design of the high-order limiter is guided by the findings of Zhang and Shu (2011) [36], and is based on limiting the quadrature values of the density, pressure and order parameters reconstructed using a high-order WENO scheme. A proof of positivity preservation and accuracy is given, and the convergence and robustness of the scheme are illustrated using the smooth isentropic vortex problem with very small density and pressure. The effectiveness and robustness of the scheme in computing the challenging problem of shock wave interaction with a cluster of tightly packed air or helium bubbles placed in a body of liquid water is also demonstrated, as is the superior performance of the high-order schemes over the first-order Lax-Friedrichs scheme for computations of shock-bubble interaction. The scheme is implemented in two-dimensional space on parallel computers using the message passing interface (MPI). The proposed scheme with the limiter requires approximately 50% more inter-processor message communication than the corresponding scheme without the limiter, but only about 10% more total CPU time. The scheme is provably second-order accurate in regions requiring positivity enforcement and higher order in the rest of the domain.
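
The Zhang–Shu-style limiter referenced above can be sketched for a single cell: the reconstructed quadrature values are linearly scaled toward the (positive) cell average just enough that their minimum stays above a small floor. Because the cell average itself is untouched, conservation is preserved. This is a generic sketch of the technique, not the paper's exact implementation.

```python
import numpy as np

def positivity_limit(point_vals, cell_avg, eps=1e-13):
    """Scale reconstructed quadrature values toward the cell average so
    that min(point_vals) >= eps, assuming cell_avg > eps. The scaling
    is linear, so the cell average (and hence conservation) is kept."""
    m = point_vals.min()
    if m >= eps:
        return point_vals.copy()
    theta = min(1.0, (cell_avg - eps) / (cell_avg - m))
    return cell_avg + theta * (point_vals - cell_avg)
```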

  1. Robustness of non-interdependent and interdependent networks against dependent and adaptive attacks

    Science.gov (United States)

    Tyra, Adam; Li, Jingtao; Shang, Yilun; Jiang, Shuo; Zhao, Yanjun; Xu, Shouhuai

    2017-09-01

Robustness of complex networks has been extensively studied via the notion of site percolation, which typically models independent and non-adaptive attacks (or disruptions). However, real-life attacks are often dependent and/or adaptive. This motivates us to characterize the robustness of complex networks, including non-interdependent and interdependent ones, against dependent and adaptive attacks. For this purpose, dependent attacks are accommodated by L-hop percolation, where the nodes within some L-hop (L ≥ 0) distance of a chosen node are all deleted during one attack (with L = 0 degenerating to site percolation). Adaptive attacks, in contrast, are launched by attackers who can make node-selection decisions based on the network state at the beginning of each attack. The resulting characterization enriches the body of knowledge with new insights, such as: (i) the Achilles' Heel phenomenon is only valid for independent attacks, but not for dependent attacks; (ii) powerful attack strategies (e.g., targeted attacks combined with dependent attacks, or dependent attacks combined with adaptive attacks) are not compatible and cannot help the attacker when used collectively. Our results shed some light on the design of robust complex networks.
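
L-hop percolation as defined above is straightforward to implement: one attack removes the chosen node together with every node within L hops of it, with L = 0 reducing to ordinary site percolation. A minimal sketch on an adjacency-set graph:

```python
from collections import deque

def l_hop_attack(adj, start, L):
    """Return the node set removed by one dependent attack under L-hop
    percolation: the chosen node plus all nodes within L hops of it.
    adj maps node -> set of neighbours."""
    removed = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == L:                      # do not expand past L hops
            continue
        for nb in adj[node]:
            if nb not in removed:
                removed.add(nb)
                frontier.append((nb, d + 1))
    return removed
```

On a path graph 0-1-2-3-4, attacking node 2 with L = 1 removes {1, 2, 3}; with L = 0 only node 2 falls, as in site percolation.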

  2. SeedVicious: Analysis of microRNA target and near-target sites.

    Science.gov (United States)

    Marco, Antonio

    2018-01-01

    Here I describe seedVicious, a versatile microRNA target site prediction software that can be easily fitted into annotation pipelines and run over custom datasets. SeedVicious finds microRNA canonical sites plus other, less efficient, target sites. Among other novel features, seedVicious can compute evolutionary gains/losses of target sites using maximum parsimony, and also detect near-target sites, which have one nucleotide different from a canonical site. Near-target sites are important to study population variation in microRNA regulation. Some analyses suggest that near-target sites may also be functional sites, although there is no conclusive evidence for that, and they may actually be target alleles segregating in a population. SeedVicious does not aim to outperform but to complement existing microRNA prediction tools. For instance, the precision of TargetScan is almost doubled (from 11% to ~20%) when we filter predictions by the distance between target sites using this program. Interestingly, two adjacent canonical target sites are more likely to be present in bona fide target transcripts than pairs of target sites at slightly longer distances. The software is written in Perl and runs on 64-bit Unix computers (Linux and MacOS X). Users with no computing experience can also run the program in a dedicated web-server by uploading custom data, or browse pre-computed predictions. SeedVicious and its associated web-server and database (SeedBank) are distributed under the GPL/GNU license.
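
The distinction between canonical and near-target sites can be sketched with a toy scanner that looks for the reverse complement of miRNA positions 2-8 (a 7-nt match region) and for windows differing from it by exactly one nucleotide. The real seedVicious program handles several site types, parsimony analysis, and more; this sketch covers only the matching idea, and the sequences in the test are illustrative.

```python
def revcomp(seq):
    """Reverse complement for RNA sequences (A/U/G/C)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_sites(utr, mirna):
    """Scan a UTR for canonical 7-nt seed sites (reverse complement of
    miRNA positions 2-8) and near-target sites (one mismatch).
    Returns (canonical_positions, near_target_positions)."""
    seed_site = revcomp(mirna[1:8])
    canonical, near = [], []
    for i in range(len(utr) - 6):
        window = utr[i:i + 7]
        mism = sum(a != b for a, b in zip(window, seed_site))
        if mism == 0:
            canonical.append(i)
        elif mism == 1:
            near.append(i)
    return canonical, near
```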

  3. A method for selection of beam angles robust to intra-fractional motion in proton therapy of lung cancer

    DEFF Research Database (Denmark)

    Casares-Magaz, Oscar; Toftegaard, Jakob; Muren, Ludvig P.

    2014-01-01

    that are robust to patient-specific patterns of intra-fractional motion. Material and methods. Using four-dimensional computed tomography (4DCT) images of three lung cancer patients we evaluated the impact of the WEPL changes on target dose coverage for a series of coplanar single-beam plans. The plans were...... reduction was associated with the mean difference between the WEPL and the phase-averaged WEPL computed for all beam rays across all possible gantry-couch angle combinations. Results. The gantry-couch angle maps showed areas of both high and low WEPL variation, with overall quite similar patterns yet...... presented a 4DCT-based method to quantify WEPL changes during the breathing cycle. The method identified proton field gantry-couch angle combinations that were either sensitive or robust to WEPL changes. WEPL variations along the beam path were associated with target under-dosage....

  4. Multi-MW target station: Beam Window Issues and Transverse Film Target

    CERN Document Server

    Herrera-Martinez, A

The analysis of the precise geometry of the EURISOL-DS Multi-MW target has shown that large fission yields can be achieved with a 4 MW beam, while providing a technically feasible design to evacuate the power deposited in the liquid mercury. Different designs for the mercury flow have been proposed, which maintain its temperature below the boiling point with moderate flow speeds (maximum 4 m/s).

  5. A Novel Loss Recovery and Tracking Scheme for Maneuvering Target in Hybrid WSNs.

    Science.gov (United States)

    Qian, Hanwang; Fu, Pengcheng; Li, Baoqing; Liu, Jianpo; Yuan, Xiaobing

    2018-01-25

Tracking a mobile target, which aims to monitor the intrusion of a specific target in a timely manner, is one of the most prominent applications of wireless sensor networks (WSNs). Traditional tracking methods in WSNs based only on static sensor nodes (SNs) have several critical problems. For example, to avoid losing the mobile target, many SNs must be active to track it in all possible directions, resulting in excessive energy consumption. Additionally, when entering coverage holes in the monitoring area, the mobile target may be missed, and its state is then unknown during this period. To tackle these problems, in this paper, a few mobile sensor nodes (MNs) are introduced to cooperate with the SNs to form a hybrid WSN, owing to their stronger abilities and less constrained energy. Then, we propose a valid target tracking scheme for hybrid WSNs to dynamically schedule the MNs and SNs. Moreover, a novel loss recovery mechanism is proposed to find the lost target and recover the tracking with fewer SNs awakened. Furthermore, to improve the robustness and accuracy of the recovery mechanism, an adaptive unscented Kalman filter (AUKF) algorithm is proposed to dynamically adjust the process noise covariance. Simulation results demonstrate that our tracking scheme for a maneuvering target in hybrid WSNs can not only track the target effectively even if the target is lost but also maintain excellent accuracy and robustness with fewer activated nodes.
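
The AUKF's key ingredient, adapting the process-noise covariance from the innovations, can be illustrated on a scalar Kalman filter. The paper embeds this idea in an unscented filter; the adaptation rule below is a standard innovation-based form (an assumption on our part, not the paper's exact update), shown here with identity dynamics for brevity.

```python
def adaptive_kf_step(x, P, z, Q, R, alpha=0.3):
    """One scalar Kalman step with innovation-based adaptation of the
    process-noise covariance Q: large innovations inflate Q so the
    filter reacts faster to maneuvers; small ones let Q decay."""
    x_pred, P_pred = x, P + Q          # predict (identity dynamics)
    innov = z - x_pred                 # innovation
    S = P_pred + R                     # innovation covariance
    K = P_pred / S                     # Kalman gain
    x_new = x_pred + K * innov
    P_new = (1.0 - K) * P_pred
    Q_new = (1.0 - alpha) * Q + alpha * (K * innov) ** 2
    return x_new, P_new, Q_new
```

After a sudden jump in the measurements, Q grows, the gain stays high, and the state estimate locks onto the new level within a few steps.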

  6. Project Robust Scheduling Based on the Scattered Buffer Technology

    Directory of Open Access Journals (Sweden)

    Nansheng Pang

    2018-04-01

Full Text Available The research object in this paper is the sub-network formed by the predecessors that affect the solution activity. This paper studies three types of influencing factors from the predecessors that delay the starting time of the solution activity on the longest path, and analyzes the degree to which each type of factor delays the solution activity's starting time. On this basis, through a comprehensive analysis of the various factors that influence the solution activity, this paper proposes a metric for evaluating the solution robustness of the project schedule, and this metric is taken as the optimization goal. This paper also adopts an iterative process to design a scattered-buffer heuristic algorithm based on the robust scheduling of the time buffer. At the same time, the resource flow network is introduced in this algorithm, using the tabu search algorithm to solve the baseline schedule. For the generation of the resource flow network in the baseline schedule, this algorithm designs a resource allocation algorithm that makes maximum use of the precedence relations. Finally, the algorithm proposed in this paper and several algorithms from previous literature are compared in a simulation experiment; the experimental results show that the algorithm proposed in this paper is reasonable and feasible.

  7. Energy efficient hotspot-targeted embedded liquid cooling of electronics

    International Nuclear Information System (INIS)

    Sharma, Chander Shekhar; Tiwari, Manish K.; Zimmermann, Severin; Brunschwiler, Thomas; Schlottig, Gerd; Michel, Bruno; Poulikakos, Dimos

    2015-01-01

Highlights: • We present a novel concept for hotspot-targeted, energy efficient ELC for electronic chips. • Microchannel throttling zones distribute flow optimally without any external control. • Design is optimized for highly non-uniform multicore chip heat flux maps. • Optimized design minimizes chip temperature non-uniformity. • This is achieved with pumping power consumption less than 1% of total chip power. - Abstract: Large data centers today already account for nearly 1.31% of total electricity consumption with cooling responsible for roughly 33% of that energy consumption. This energy intensive cooling problem is exacerbated by the presence of hotspots in multicore microprocessors due to excess coolant flow requirement for thermal management. Here we present a novel liquid-cooling concept for targeted, energy efficient cooling of hotspots through passively optimized microchannel structures etched into the backside of a chip (embedded liquid cooling or ELC architecture). We adopt an experimentally validated and computationally efficient modeling approach to predict the performance of our hotspot-targeted ELC design. The design is optimized for exemplar non-uniform chip power maps using Response Surface Methodology (RSM). For industrially acceptable limits of approximately 0.4 bar (40 kPa) on pressure drop and one percent of total chip power on pumping power, the optimized designs are computationally evaluated against a base, standard ELC design with uniform channel widths and uniform flow distribution. For an average steady-state heat flux of 150 W/cm² in core areas (hotspots) and 20 W/cm² over remaining chip area (background), the optimized design reduces the maximum chip temperature non-uniformity by 61% to 3.7 °C. For a higher average, steady-state hotspot heat flux of 300 W/cm², the maximum temperature non-uniformity is reduced by 54% to 8.7 °C. It is shown that the base design requires a prohibitively high level of pumping power (about

  8. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    Science.gov (United States)

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may
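
The stimuli described above are easy to construct: a cyclic noise repeats its first half, a looped version shifts the origin circularly, and a scrambled version chops the sound into short bits and shuffles them. A sketch in samples rather than seconds (the sampling rate below is an arbitrary choice for illustration):

```python
import numpy as np

def make_cyclic_noise(fs=1000, dur=1.0, seed=0):
    """Cyclic noise (CN): the second half of the sample is an exact
    repeat of the first half."""
    rng = np.random.default_rng(seed)
    half = rng.standard_normal(int(fs * dur / 2))
    return np.concatenate([half, half])

def loop(cn, shift):
    """'Looped' version: circularly shift the origin; the repetition
    structure is preserved."""
    return np.roll(cn, shift)

def scramble(cn, chunk, seed=0):
    """'Scrambled' version: chop into chunk-sample bits and shuffle."""
    rng = np.random.default_rng(seed)
    bits = [cn[i:i + chunk] for i in range(0, len(cn), chunk)]
    order = rng.permutation(len(bits))
    return np.concatenate([bits[i] for i in order])
```

Looping keeps the two halves identical (the noise is circularly periodic), while scrambling keeps the sample values but destroys their order above the chunk scale.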

  9. TH-C-BRD-12: Robust Intensity Modulated Proton Therapy Plan Can Eliminate Junction Shifts for Craniospinal Irradiation

    International Nuclear Information System (INIS)

    Liao, L; Jiang, S; Li, Y; Wang, X; Li, H; Zhu, X; Sahoo, N; Gillin, M; Mahajan, A; Grosshans, D; Zhang, X; Lim, G

    2014-01-01

Purpose: The passive scattering proton therapy (PSPT) technique is the commonly used radiotherapy technique for craniospinal irradiation (CSI). However, PSPT involves a number of junction shifts applied over the course of treatment to reduce the cold and hot regions caused by field mismatching. In this work, we introduce a robust planning approach to develop an optimal and clinically efficient technique for CSI using intensity modulated proton therapy (IMPT), so that junction shifts can essentially be eliminated. Methods: The intra-fractional uncertainty, in which two overlapping fields shift in opposite directions along the craniospinal axis, is incorporated into the robust optimization algorithm. Treatment plans with junction sizes of 3, 5, 10, 15, 20 and 25 cm were designed and compared with the plan designed using non-robust optimization. Robustness of the plans was evaluated based on dose profiles along the craniospinal axis for plans with a 3 mm intra-fractional shift applied. The dose intra-fraction variations (DIVs) at the junction are used to evaluate the robustness of the plans. Results: The DIVs are 7.9%, 6.3%, 5.0%, 3.8%, 2.8% and 2.2% for the robustly optimized plans with junction sizes of 3, 5, 10, 15, 20 and 25 cm, respectively. The DIV is 10% for the non-robustly optimized plan with a junction size of 25 cm. The dose profiles along the craniospinal axis exhibit a gradual and tapered dose distribution. Using a DIV of less than 5% as the maximum acceptable intra-fractional variation, the overlapping region can be reduced to 10 cm, potentially reducing the number of fields. The DIVs are less than 5% for 5 mm intra-fractional shifts with a junction size of 25 cm, potentially eliminating junction shifts for CSI using IMPT. Conclusion: This work is the first report of robust optimization for CSI based on IMPT. We demonstrate that robust optimization can lead to much more efficient craniospinal irradiation by eliminating junction shifts.

  10. SU-F-BRD-01: A Novel 4D Robust Optimization Mitigates Interplay Effect in Intensity-Modulated Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W; Shen, J; Stoker, J; Bues, M [Mayo Clinic Arizona, Phoenix, AZ (United States); Schild, S; Wong, W [Mayo Clinic, Phoenix, Arizona (United States); Chang, J; Liao, Z; Wen, Z; Sahoo, N [MD Anderson Cancer Center, Houston, TX (United States); Herman, M [Mayo Clinic, Rochester, MN (United States); Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

Purpose: To compare the impact of interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans to treat lung cancer. Methods: Two IMPT plans were created for 11 non-small-cell-lung-cancer cases with 6–14 mm spots. 3D robust optimization generated plans on average CTs with the internal gross tumor volume density overridden to deliver 66 CGyE in 33 fractions to the internal target volume (ITV). 4D robust optimization generated plans on 4D CTs with the delivery of prescribed dose to the clinical target volume (CTV). In 4D optimization, the CTV of individual 4D CT phases received non-uniform doses to achieve a uniform cumulative dose. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Indices from dose-volume histograms were used to compare target coverage, dose homogeneity, and normal-tissue sparing. DVH indices were compared using the Wilcoxon test. Results: Given the presence of interplay effect, 4D robust optimization produced IMPT plans with better target coverage and homogeneity, but slightly worse normal tissue sparing compared to 3D robust optimization (unit: Gy) [D95% ITV: 63.5 vs 62.0 (p=0.014), D5% - D95% ITV: 6.2 vs 7.3 (p=0.37), D1% spinal cord: 29.0 vs 29.5 (p=0.52), Dmean total lung: 14.8 vs 14.5 (p=0.12), D33% esophagus: 33.6 vs 33.1 (p=0.28)]. The improvement of target coverage (D95%,4D – D95%,3D) was related to the ratio RMA³/(TV×10⁻⁴), with RMA being the respiratory motion amplitude and TV the tumor volume. Peak benefit was observed at ratios between 2 and 10. This corresponds to 125–625 cm³ TV with 0.5-cm RMA. Conclusion: 4D optimization produced more interplay-effect-resistant plans compared to 3D optimization. It is most effective when respiratory motion is modest.
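
The benefit ratio quoted above is a direct calculation, and the quoted example (0.5-cm respiratory motion amplitude with 125–625 cm³ tumor volumes) indeed lands on ratios of 10 down to 2:

```python
def interplay_benefit_ratio(rma_cm, tv_cm3):
    """Ratio RMA^3 / (TV x 10^-4) from the abstract, which indexed the
    target-coverage benefit of 4D over 3D robust optimization
    (peak benefit reported for ratios between 2 and 10)."""
    return rma_cm ** 3 / (tv_cm3 * 1e-4)
```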

  11. SU-F-BRD-01: A Novel 4D Robust Optimization Mitigates Interplay Effect in Intensity-Modulated Proton Therapy for Lung Cancer

    International Nuclear Information System (INIS)

    Liu, W; Shen, J; Stoker, J; Bues, M; Schild, S; Wong, W; Chang, J; Liao, Z; Wen, Z; Sahoo, N; Herman, M; Mohan, R

    2015-01-01

Purpose: To compare the impact of interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans to treat lung cancer. Methods: Two IMPT plans were created for 11 non-small-cell-lung-cancer cases with 6–14 mm spots. 3D robust optimization generated plans on average CTs with the internal gross tumor volume density overridden to deliver 66 CGyE in 33 fractions to the internal target volume (ITV). 4D robust optimization generated plans on 4D CTs with the delivery of prescribed dose to the clinical target volume (CTV). In 4D optimization, the CTV of individual 4D CT phases received non-uniform doses to achieve a uniform cumulative dose. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Indices from dose-volume histograms were used to compare target coverage, dose homogeneity, and normal-tissue sparing. DVH indices were compared using the Wilcoxon test. Results: Given the presence of interplay effect, 4D robust optimization produced IMPT plans with better target coverage and homogeneity, but slightly worse normal tissue sparing compared to 3D robust optimization (unit: Gy) [D95% ITV: 63.5 vs 62.0 (p=0.014), D5% - D95% ITV: 6.2 vs 7.3 (p=0.37), D1% spinal cord: 29.0 vs 29.5 (p=0.52), Dmean total lung: 14.8 vs 14.5 (p=0.12), D33% esophagus: 33.6 vs 33.1 (p=0.28)]. The improvement of target coverage (D95%,4D – D95%,3D) was related to the ratio RMA³/(TV×10⁻⁴), with RMA being the respiratory motion amplitude and TV the tumor volume. Peak benefit was observed at ratios between 2 and 10. This corresponds to 125–625 cm³ TV with 0.5-cm RMA. Conclusion: 4D optimization produced more interplay-effect-resistant plans compared to 3D optimization. It is most effective when respiratory motion is modest.

  12. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic...... substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  13. Resolution and robustness to noise of the sensitivity-based method for microwave imaging with data acquired on cylindrical surfaces

    International Nuclear Information System (INIS)

    Zhang, Yifan; Tu, Sheng; Amineh, Reza K; Nikolova, Natalia K

    2012-01-01

    The spatial resolution limit of a Jacobian-based microwave imaging algorithm and its robustness to noise are evaluated. The focus here is on tomographic systems where the wideband data are acquired with a vertically scanned circular sensor array and at each scanning step a 2D image is reconstructed in the plane of the sensor array. The theoretical resolution is obtained as one-half of the maximum-frequency wavelength with far-zone data and about two-thirds of the array radius with near-zone data. Validation examples are given using analytical electromagnetic models. The algorithm is shown to be robust to noise when the response data are corrupted by Gaussian white noise. (paper)
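
The two resolution limits quoted above reduce to one-line formulas; free-space propagation is assumed here for the far-zone wavelength (the in-medium wavelength would be shorter by the refractive index):

```python
def far_zone_resolution(f_max_hz, c=2.998e8):
    """Far-zone spatial resolution limit: one-half of the wavelength
    at the maximum frequency (c is the propagation speed in m/s)."""
    return 0.5 * c / f_max_hz

def near_zone_resolution(array_radius_m):
    """Near-zone limit quoted above: about two-thirds of the sensor
    array radius."""
    return (2.0 / 3.0) * array_radius_m
```

For a 10 GHz maximum frequency the far-zone limit is about 15 mm; a 9 cm array radius gives a near-zone limit of about 6 cm.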

  14. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse

  15. Mechanical Design for Robustness of the LHC Collimators

    CERN Document Server

    Bertarelli, Alessandro; Assmann, R W; Calatroni, Sergio; Dallocchio, Alessandro; Kurtyka, Tadeusz; Mayer, Manfred; Perret, Roger; Redaelli, Stefano; Robert-Demolaize, Guillaume

    2005-01-01

The functional specification of the LHC Collimators requires, for the start-up of the machine and the initial luminosity runs (Phase 1), a collimation system with maximum robustness against abnormal beam operating conditions. The most severe cases to be considered in the mechanical design are the asynchronous beam dump at 7 TeV and the 450 GeV injection error. To ensure that the collimator jaws survive such accident scenarios, low-Z materials were chosen, driving the design towards Graphite or Carbon/Carbon composites. Furthermore, in-depth thermo-mechanical simulations, both static and dynamic, were necessary. This paper presents the results of the numerical analyses performed for the 450 GeV accident case, along with the experimental results of the tests conducted on a collimator prototype in the CERN TT40 transfer line, impacted by a 450 GeV beam of 3.1·10¹³

  16. Fast cat-eye effect target recognition based on saliency extraction

    Science.gov (United States)

    Li, Li; Ren, Jianlin; Wang, Xingbin

    2015-09-01

Background complexity is a main reason for false detections in cat-eye target recognition. Human vision has a selective attention property that helps it pick out salient targets from complex unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective character of visual attention. Furthermore, parallel processing enables it to achieve fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.

  17. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Full Text Available Abstract Background In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML) and MrBayes reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover, the accuracy is significantly improved as shown by parametric bootstrap.
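
    The distance-based step of PNJ is ordinary neighbor-joining applied to the matrix of profile distances. A minimal sketch of the neighbor-joining selection criterion (the Q-matrix of Saitou and Nei) on a toy 4-taxon distance matrix; the taxa and distances below are illustrative, not from the paper:

```python
from itertools import combinations

# Toy additive distances for the tree ((A,B),(C,D)) with unit branch lengths.
taxa = ["A", "B", "C", "D"]
D = {
    ("A", "B"): 2, ("A", "C"): 3, ("A", "D"): 3,
    ("B", "C"): 3, ("B", "D"): 3, ("C", "D"): 2,
}

def d(i, j):
    return 0 if i == j else D[tuple(sorted((i, j)))]

def q_matrix(taxa):
    """Neighbor-joining criterion: Q(i,j) = (n-2)*d(i,j) - sum_k d(i,k) - sum_k d(j,k)."""
    n = len(taxa)
    r = {i: sum(d(i, k) for k in taxa) for i in taxa}  # row sums
    return {(i, j): (n - 2) * d(i, j) - r[i] - r[j]
            for i, j in combinations(taxa, 2)}

Q = q_matrix(taxa)
# The pair minimizing Q is joined first; here the cherry (A,B) wins
# (tied with (C,D); ties broken lexicographically).
pair = min(Q, key=lambda ij: (Q[ij], ij))
```

    In PNJ the entries of D would be maximum-likelihood distances between subclade profiles rather than between single sequences; the joining step itself is unchanged.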

  18. Internal targets for LEAR

    International Nuclear Information System (INIS)

    Kilian, K.; Gspann, J.; Mohl, D.; Poth, H.

    1984-01-01

    This chapter considers the use of thin internal targets in conjunction with phase-space cooling at the Low-Energy Antiproton Ring (LEAR). Topics considered include the merits of internal target operation; the most efficient use of antiprotons and of proton synchrotron (PS) protons, highest center-of-mass (c.m.) energy resolution; highest angular resolution and access to extreme angles; the transparent environment for all reaction products; a windowless source and pure targets; highest luminosity and count rates; access to lowest energies with increasing resolution; internal target thickness and vacuum requirements; required cooling performance; and modes of operation. It is demonstrated that an internal target in conjunction with phase-space cooling has the potential of better performance in terms of the economic use of antiprotons and consequently of PS protons; energy resolution; angular resolution; maximum reaction rate capability (statistical precision); efficient parasitic operation; transparency of the target for reaction products; access to low energies; and the ease of polarized target experiments. It is concluded that all p̄ experiments which need high statistics and high p̄ flux, such as studies of rare channels or broad, weak resonance structures, would profit from internal targets.

  19. A polarized atomic-beam target for COSY-Juelich

    International Nuclear Information System (INIS)

    Eversheim, P. D.; Altmeier, M.; Felden, O.; Glende, M.; Walker, M.; Hiemer, A.; Gebel, R.

    1998-01-01

    An atomic-beam target (ABT) for the EDDA experiment has been built in Bonn and was tested for the very first time at the cooler synchrotron COSY. The ABT differs from the polarized colliding-beams ion source for COSY in the DC operation of the dissociator and the use of permanent 6-pole magnets. At present the beam optics of the ABT is set up for maximum density in the interaction zone, but for target-cell operation it can be modified to give maximum intensity. The modular concept of this atomic ground-state target allows it to provide all vector (and tensor) polarizations for protons and deuterons, respectively. Up to now the polarization of the atomic beam has been verified by the EDDA experiment to be ≳ 80%, with a density in the interaction zone of ≳ 10¹¹ atoms/cm².

  20. Fibroblast activation protein (FAP as a novel metabolic target

    Directory of Open Access Journals (Sweden)

    Miguel Angel Sánchez-Garrido

    2016-10-01

    Conclusions: We conclude that pharmacological inhibition of FAP enhances levels of FGF21 in obese mice to provide robust metabolic benefits not observed in lean animals, thus validating this enzyme as a novel drug target for the treatment of obesity and diabetes.

  1. The robustness of truncated Airy beam in PT Gaussian potentials media

    Science.gov (United States)

    Wang, Xianni; Fu, Xiquan; Huang, Xianwei; Yang, Yijun; Bai, Yanfeng

    2018-03-01

    The robustness of truncated Airy beams in parity-time (PT) symmetric Gaussian potential media is numerically investigated. A high-peak-power beam sheds from the Airy beam due to the media modulation, while the Airy wavefront retains its self-bending and non-diffraction characteristics under the influence of the modulation parameters. Increasing the modulation factor reduces the maximum power of the center beam, whereas increasing the modulation depth has the opposite effect; the parabolic trajectory of the Airy wavefront, however, is not affected. Owing to these features, the Airy beam can be used as a long-distance transmission source in PT-symmetric Gaussian potential media.

  2. Is the ozone climate penalty robust in Europe?

    International Nuclear Information System (INIS)

    Colette, Augustin; Bessagnet, Bertrand; Meleux, Frédérik; Rouïl, Laurence; Andersson, Camilla; Engardt, Magnuz; Langner, Joakim; Baklanov, Alexander; Brandt, Jørgen; Christensen, Jesper H; Geels, Camilla; Hedegaard, Gitte B; Doherty, Ruth; Giannakopoulos, Christos; Katragkou, Eleni; Lei, Hang; Manders, Astrid; Melas, Dimitris; Sofiev, Mikhail; Soares, Joana

    2015-01-01

    Ozone air pollution is identified as one of the main threats bearing upon human health and ecosystems, with 25 000 deaths in 2005 attributed to surface ozone in Europe (IIASA 2013 TSAP Report #10). In addition, there is a concern that climate change could negate ozone pollution mitigation strategies, making them insufficient over the long run and jeopardising chances to meet the long term objective set by the European Union Directive of 2008 (Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008) (60 ppbv, daily maximum). This effect has been termed the ozone climate penalty. One way of assessing this climate penalty is by driving chemistry-transport models with future climate projections while holding the ozone precursor emissions constant (although the climate penalty may also be influenced by changes in emission of precursors). Here we present an analysis of the robustness of the climate penalty in Europe across time periods and scenarios by analysing the databases underlying 11 articles published on the topic since 2007, i.e. a total of 25 model projections. This substantial body of literature has never been explored to assess the uncertainty and robustness of the climate ozone penalty because of the use of different scenarios, time periods and ozone metrics. Despite the variability of model design and setup in this database of 25 model projections, the present meta-analysis demonstrates the significance and robustness of the impact of climate change on European surface ozone with a latitudinal gradient from a penalty bearing upon large parts of continental Europe and a benefit over the North Atlantic region of the domain. Future climate scenarios present a penalty for summertime (JJA) surface ozone by the end of the century (2071–2100) of at most 5 ppbv. Over European land surfaces, the 95% confidence interval of JJA ozone change is [0.44; 0.64] and [0.99; 1.50] ppbv for the 2041–2070 and 2071–2100 time windows, respectively.

  3. Is the ozone climate penalty robust in Europe?

    Science.gov (United States)

    Colette, Augustin; Andersson, Camilla; Baklanov, Alexander; Bessagnet, Bertrand; Brandt, Jørgen; Christensen, Jesper H.; Doherty, Ruth; Engardt, Magnuz; Geels, Camilla; Giannakopoulos, Christos; Hedegaard, Gitte B.; Katragkou, Eleni; Langner, Joakim; Lei, Hang; Manders, Astrid; Melas, Dimitris; Meleux, Frédérik; Rouïl, Laurence; Sofiev, Mikhail; Soares, Joana; Stevenson, David S.; Tombrou-Tzella, Maria; Varotsos, Konstantinos V.; Young, Paul

    2015-08-01

    Ozone air pollution is identified as one of the main threats bearing upon human health and ecosystems, with 25 000 deaths in 2005 attributed to surface ozone in Europe (IIASA 2013 TSAP Report #10). In addition, there is a concern that climate change could negate ozone pollution mitigation strategies, making them insufficient over the long run and jeopardising chances to meet the long term objective set by the European Union Directive of 2008 (Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008) (60 ppbv, daily maximum). This effect has been termed the ozone climate penalty. One way of assessing this climate penalty is by driving chemistry-transport models with future climate projections while holding the ozone precursor emissions constant (although the climate penalty may also be influenced by changes in emission of precursors). Here we present an analysis of the robustness of the climate penalty in Europe across time periods and scenarios by analysing the databases underlying 11 articles published on the topic since 2007, i.e. a total of 25 model projections. This substantial body of literature has never been explored to assess the uncertainty and robustness of the climate ozone penalty because of the use of different scenarios, time periods and ozone metrics. Despite the variability of model design and setup in this database of 25 model projections, the present meta-analysis demonstrates the significance and robustness of the impact of climate change on European surface ozone with a latitudinal gradient from a penalty bearing upon large parts of continental Europe and a benefit over the North Atlantic region of the domain. Future climate scenarios present a penalty for summertime (JJA) surface ozone by the end of the century (2071-2100) of at most 5 ppbv. Over European land surfaces, the 95% confidence interval of JJA ozone change is [0.44; 0.64] and [0.99; 1.50] ppbv for the 2041-2070 and 2071-2100 time windows, respectively.
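
    The ensemble confidence intervals quoted above follow from standard normal-theory intervals over the model projections. A hedged sketch of that computation; the ozone-change values below are invented for illustration, not the paper's 25-member database:

```python
import math
import statistics

# Hypothetical JJA surface-ozone changes (ppbv) from an ensemble of projections.
changes = [0.2, 0.5, 0.8, 1.1, 0.4, 0.9, 0.6, 1.3, 0.7, 1.0]

n = len(changes)
mean = statistics.mean(changes)
sem = statistics.stdev(changes) / math.sqrt(n)  # standard error of the mean
ci95 = (mean - 1.96 * sem, mean + 1.96 * sem)   # normal approximation
```

    With a real multi-model database one would apply this per grid cell (and a t-quantile rather than 1.96 for small ensembles) to obtain intervals like those reported for the 2041-2070 and 2071-2100 windows.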

  4. Gamma Knife irradiation method based on dosimetric controls to target small areas in rat brains

    International Nuclear Information System (INIS)

    Constanzo, Julie; Paquette, Benoit; Charest, Gabriel; Masson-Côté, Laurence; Guillot, Mathieu

    2015-01-01

    Purpose: Targeted and whole-brain irradiation in humans can result in significant side effects causing decreased patient quality of life. To adequately investigate structural and functional alterations after stereotactic radiosurgery, preclinical studies are needed. The purpose of this work is to establish a robust standardized method of targeted irradiation on small regions of the rat brain. Methods: Euthanized male Fischer rats were imaged in a stereotactic bed, by computed tomography (CT), to estimate positioning variations relative to the bregma skull reference point. Using a rat brain atlas and the stereotactic bregma coordinates obtained from CT images, different regions of the brain were delimited and a treatment plan was generated. A single isocenter treatment plan delivering ≥100 Gy in 100% of the target volume was produced by Leksell GammaPlan using the 4 mm diameter collimator of sectors 4, 5, 7, and 8 of the Gamma Knife unit. Impact of positioning deviations of the rat brain on dose deposition was simulated by GammaPlan and validated with dosimetric measurements. Results: The authors’ results showed that 90% of the target volume received 100 ± 8 Gy and the maximum of deposited dose was 125 ± 0.7 Gy, which corresponds to an excellent relative standard deviation of 0.6%. This dose deposition calculated with GammaPlan was validated with dosimetric films resulting in a dose-profile agreement within 5%, both in X- and Z-axes. Conclusions: The authors’ results demonstrate the feasibility of standardizing the irradiation procedure of a small volume in the rat brain using a Gamma Knife

  5. Maximum Diameter Measurements of Aortic Aneurysms on Axial CT Images After Endovascular Aneurysm Repair: Sufficient for Follow-up?

    International Nuclear Information System (INIS)

    Baumueller, Stephan; Nguyen, Thi Dan Linh; Goetti, Robert Paul; Lachat, Mario; Seifert, Burkhardt; Pfammatter, Thomas; Frauenfelder, Thomas

    2011-01-01

    Purpose: To assess the accuracy of maximum diameter measurements of aortic aneurysms after endovascular aneurysm repair (EVAR) on axial computed tomographic (CT) images in comparison to maximum diameter measurements perpendicular to the intravascular centerline for follow-up by using three-dimensional (3D) volume measurements as the reference standard. Materials and Methods: Forty-nine consecutive patients (73 ± 7.5 years, range 51–88 years), who underwent EVAR of an infrarenal aortic aneurysm were retrospectively included. Two blinded readers twice independently measured the maximum aneurysm diameter on axial CT images performed at discharge, and at 1 and 2 years after intervention. The maximum diameter perpendicular to the centerline was automatically measured. Volumes of the aortic aneurysms were calculated by dedicated semiautomated 3D segmentation software (3surgery, 3mensio, the Netherlands). Changes in diameter of 0.5 cm and in volume of 10% were considered clinically significant. Intra- and interobserver agreements were calculated by intraclass correlations (ICC) in a random effects analysis of variance. The two unidimensional measurement methods were correlated to the reference standard. Results: Intra- and interobserver agreements for maximum aneurysm diameter measurements were excellent (ICC = 0.98 and ICC = 0.96, respectively). There was an excellent correlation between maximum aneurysm diameters measured on axial CT images and 3D volume measurements (r = 0.93, P < 0.001) as well as between maximum diameter measurements perpendicular to the centerline and 3D volume measurements (r = 0.93, P < 0.001). Conclusion: Measurements of maximum aneurysm diameters on axial CT images are an accurate, reliable, and robust method for follow-up after EVAR and can be used in daily routine.

  6. Robustness: confronting lessons from physics and biology.

    Science.gov (United States)

    Lesne, Annick

    2008-11-01

    The term robustness is encountered in very different scientific fields, from engineering and control theory to dynamical systems to biology. The main question addressed herein is whether the notion of robustness and its correlates (stability, resilience, self-organisation) developed in physics are relevant to biology, or whether specific extensions and novel frameworks are required to account for the robustness properties of living systems. To clarify this issue, the different meanings covered by this unique term are discussed; it is argued that they crucially depend on the kind of perturbations that a robust system should by definition withstand. Possible mechanisms underlying robust behaviours are examined, either encountered in all natural systems (symmetries, conservation laws, dynamic stability) or specific to biological systems (feedbacks and regulatory networks). Special attention is devoted to the (sometimes counterintuitive) interrelations between robustness and noise. A distinction between dynamic selection and natural selection in the establishment of a robust behaviour is underlined. It is finally argued that nested notions of robustness, relevant to different time scales and different levels of organisation, allow one to reconcile the seemingly contradictory requirements for robustness and adaptability in living systems.

  7. Application of Robust Regression and Bootstrap in Productivity Analysis of GERD Variable in EU27

    Directory of Open Access Journals (Sweden)

    Dagmar Blatná

    2014-06-01

    Full Text Available The GERD is one of the Europe 2020 headline indicators being tracked within the Europe 2020 strategy. The headline indicator is the 3% target for the GERD to be reached within the EU by 2020. Eurostat defines “GERD” as total gross domestic expenditure on research and experimental development as a percentage of GDP. GERD depends on numerous factors of a general economic background, namely of employment, innovation and research, science and technology. The values of these indicators vary among the European countries, and consequently the occurrence of outliers can be anticipated in corresponding analyses. In such a case, a classical statistical approach – the least squares method – can be highly unreliable, the robust regression methods representing an acceptable and useful tool. The aim of the present paper is to demonstrate the advantages of robust regression and the applicability of the bootstrap approach in regression based on both classical and robust methods.
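
    The advantage of robust regression in the presence of outliers can be sketched in a few lines. The following toy comparison fits a line by ordinary least squares and by Huber-type iteratively reweighted least squares (one common robust method, not necessarily the estimator used in the paper); the data are invented, not GERD figures. A bootstrap would simply resample (x, y) pairs and repeat either fit:

```python
import statistics

# Toy data: y = 2x with one gross outlier at x = 9.
xs = list(range(10))
ys = [float(2 * x) for x in xs]
ys[9] = 100.0  # outlier

def wls(xs, ys, w):
    """Weighted least-squares line fit; returns (intercept, slope)."""
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Ordinary least squares = unit weights; the outlier drags the slope up.
a_ls, b_ls = wls(xs, ys, [1.0] * len(xs))

# Huber IRLS: re-weight by w = min(1, c*s/|r|) with a MAD-based scale s.
a, b = a_ls, b_ls
for _ in range(100):
    r = [y - (a + b * x) for x, y in zip(xs, ys)]
    med = statistics.median(r)
    s = max(statistics.median(abs(ri - med) for ri in r) / 0.6745, 1e-9)
    w = [min(1.0, 1.345 * s / abs(ri)) if ri else 1.0 for ri in r]
    a, b = wls(xs, ys, w)
```

    The robust fit recovers the slope of the clean points while least squares does not, which is exactly the failure mode the abstract warns about.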

  8. Robust Adaptive LCMV Beamformer Based On An Iterative Suboptimal Solution

    Directory of Open Access Journals (Sweden)

    Xiansheng Guo

    2015-06-01

    Full Text Available The main drawback of closed-form solution of linearly constrained minimum variance (CF-LCMV beamformer is the dilemma of acquiring long observation time for stable covariance matrix estimates and short observation time to track dynamic behavior of targets, leading to poor performance including low signal-noise-ratio (SNR, low jammer-to-noise ratios (JNRs and small number of snapshots. Additionally, CF-LCMV suffers from heavy computational burden which mainly comes from two matrix inverse operations for computing the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV using conjugate gradient (CG optimization method. The merit of our proposed method is threefold. Firstly, RAIS-LCMV beamformer can reduce the complexity of CF-LCMV remarkably. Secondly, RAIS-LCMV beamformer can adjust output adaptively based on measurement and its convergence speed is comparable. Finally, RAIS-LCMV algorithm has robust performance against low SNR, JNRs, and small number of snapshots. Simulation results demonstrate the superiority of our proposed algorithms.
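
    One way to avoid the explicit matrix inverses in the LCMV weight computation is to solve the linear system R w̃ = c by conjugate gradients and then rescale to satisfy the unit-gain constraint. A minimal single-constraint (MVDR-style) sketch; the 3×3 covariance and steering vector are made up, and the actual RAIS-LCMV iteration in the paper is more elaborate:

```python
# Single-constraint LCMV/MVDR weights: w = R^{-1} c / (c^T R^{-1} c).
# Solve R x = c by conjugate gradients instead of inverting R.

R = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]   # toy symmetric positive-definite covariance
c = [1.0, 1.0, 1.0]     # toy steering vector

def matvec(A, v):
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg(A, b, iters=50, tol=1e-12):
    """Conjugate-gradient solve of A x = b for symmetric positive-definite A."""
    x = [0.0] * len(b)
    r = b[:]             # residual b - A x, with x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

x = cg(R, c)
w = [xi / dot(c, x) for xi in x]  # rescale so that c^T w = 1 (distortionless constraint)
```

    For an n×n system, CG needs only matrix-vector products and converges in at most n steps in exact arithmetic, which is the complexity saving the abstract refers to.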

  9. AUTOMATIC SHAPE-BASED TARGET EXTRACTION FOR CLOSE-RANGE PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    X. Guo

    2016-06-01

    Full Text Available In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of circle-based coded target and the corresponding coarse-to-fine extraction algorithm are presented. The designed target completely separates the target box and the coding box and has the advantage of rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input of shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.

  10. Directional support value of Gaussian transformation for infrared small target detection.

    Science.gov (United States)

    Yang, Changcai; Ma, Jiayi; Qi, Shengxiang; Tian, Jinwen; Zheng, Sheng; Tian, Xin

    2015-03-20

    Robust small target detection is one of the key techniques in IR search and tracking systems for self-defense or attacks. In this paper we present a robust solution for small target detection in a single IR image. The key ideas of the proposed method are to use the directional support value of Gaussian transform (DSVoGT) to enhance the targets, and use the multiscale representation provided by DSVoGT to reduce the false alarm rate. The original image is decomposed into sub-bands in different orientations by convolving the image with the directional support value filters, which are deduced from the weighted mapped least-squares-support vector machines (LS-SVMs). Based on the sub-band images, a support value of Gaussian matrix is constructed, and the trace of this matrix is then defined as the target measure. The corresponding multiscale correlations of the target measures are computed for enhancing target signal while suppressing the background clutter. We demonstrate the advantages of the proposed method on real IR images and compare the results against those obtained from standard detection approaches, including the top-hat filter, max-mean filter, max-median filter, min-local-Laplacian of Gaussian (LoG) filter, as well as LS-SVM. The experimental results on various cluttered background images show that the proposed method outperforms other detectors.
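
    The top-hat baseline mentioned above is simple to state: subtract a grayscale morphological opening from the image, which suppresses smooth background while preserving structures smaller than the structuring element. A toy sketch on a synthetic ramp background with a one-pixel "target" (image contents and window size are illustrative):

```python
# White top-hat: img - opening(img), with opening = dilate(erode(img)), 3x3 window.

H = W = 9
img = [[float(x) for x in range(W)] for _ in range(H)]  # smooth ramp background
img[4][4] += 100.0                                      # small bright target

def morph(src, op):
    """3x3 grayscale erosion (op=min) or dilation (op=max), clamped at borders."""
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            vals = [src[yy][xx]
                    for yy in range(max(0, y - 1), min(H, y + 2))
                    for xx in range(max(0, x - 1), min(W, x + 2))]
            out[y][x] = op(vals)
    return out

opening = morph(morph(img, min), max)
tophat = [[img[y][x] - opening[y][x] for x in range(W)] for y in range(H)]

# The strongest top-hat response marks the small-target candidate.
peak = max((tophat[y][x], (y, x)) for y in range(H) for x in range(W))
```

    The erosion removes the one-pixel target, the opening reconstructs only the background, and the difference isolates the target; DSVoGT plays an analogous enhancement role with learned directional filters instead of a fixed structuring element.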

  11. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    Science.gov (United States)

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-based cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-based and head and neck cancer patients. 
Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in the more difficult cases, in which most uncertainty scenarios were hard to satisfy.
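
    The "minmax" idea can be made concrete in a few lines: choose spot weights so that the worst objective over all uncertainty scenarios is minimized. A deliberately tiny sketch with two spots, three dose-influence scenarios, and a coarse grid search; real IMPT plans use LP/NLP solvers over thousands of variables, and all numbers below are invented:

```python
# Minmax robust planning toy: choose weights minimizing the worst-case
# squared deviation from the prescribed target dose over all scenarios.

prescription = 10.0
# Dose per unit weight delivered to the target by each of 2 spots,
# under a nominal scenario and two range-shifted scenarios.
scenarios = [
    [5.0, 5.0],   # nominal
    [4.5, 5.5],   # range undershoot on spot 1
    [5.5, 4.5],   # range overshoot on spot 1
]

def worst_case(w):
    return max((sum(a * wi for a, wi in zip(s, w)) - prescription) ** 2
               for s in scenarios)

# Coarse grid search over weights (a stand-in for the LP/NLP solvers).
grid = [i / 10 for i in range(31)]
best_w = min(([w1, w2] for w1 in grid for w2 in grid), key=worst_case)

# For comparison: optimize against the nominal scenario only (PTV-style).
nominal_only = min(([w1, w2] for w1 in grid for w2 in grid),
                   key=lambda w: (sum(a * wi for a, wi in zip(scenarios[0], w))
                                  - prescription) ** 2)
```

    The robust solution spreads weight across both spots so that range errors cancel, whereas a nominal-only optimum can be perfect in the nominal scenario yet degrade under uncertainty, which is the trade-off the study quantifies.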

  12. Process for the fabrication of aluminum metallized pyrolytic graphite sputtering targets

    Science.gov (United States)

    Makowiecki, Daniel M.; Ramsey, Philip B.; Juntz, Robert S.

    1995-01-01

    An improved method is presented for fabricating pyrolytic graphite sputtering targets with superior heat-transfer ability, longer life, and maximum energy transmission. Anisotropic pyrolytic graphite is contoured and/or segmented to match the erosion profile of the sputter target and then oriented such that the graphite's high thermal conductivity planes are in maximum contact with a thermally conductive metal backing. The graphite contact surface is metallized, using high rate physical vapor deposition (HRPVD), with an aluminum coating, and the thermally conductive metal backing is joined to the metallized graphite target by one of four low-temperature bonding methods: liquid-metal casting, powder metallurgy compaction, eutectic brazing, and laser welding.

  13. Robust AIC with High Breakdown Scale Estimate

    Directory of Open Access Journals (Sweden)

    Shokrya Saleh

    2014-01-01

    Full Text Available Akaike Information Criterion (AIC) based on least squares (LS) regression minimizes the sum of squared residuals; LS is sensitive to outlier observations. Alternative criteria that are less sensitive to outlying observations have been proposed; examples are the robust AIC (RAIC), robust Mallows Cp (RCp), and robust Bayesian information criterion (RBIC). In this paper, we propose a robust AIC by replacing the scale estimate with a high-breakdown-point estimate of scale. The robustness of the proposed method is studied through its influence function. We show that the proposed robust AIC is effective in selecting accurate models in the presence of outliers and high-leverage points, through simulated and real data examples.
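
    The core idea, swapping the classical variance estimate inside AIC for a high-breakdown scale, can be sketched in a toy comparison. Here the robust scale is the normalized MAD, one possible high-breakdown choice; the paper's estimator and its influence-function analysis are more involved:

```python
import math
import statistics

# Residuals from some fitted model: well-behaved except one gross outlier.
resid = [-1.2, -0.8, -0.4, -0.1, 0.0, 0.1, 0.3, 0.7, 1.1, 50.0]
n, k = len(resid), 2  # k = number of fitted parameters

def aic(scale):
    """AIC-style criterion n*log(scale^2) + 2k for a given scale estimate."""
    return n * math.log(scale ** 2) + 2 * k

sd = statistics.pstdev(resid)  # classical scale: blown up by the outlier
med = statistics.median(resid)
mad = statistics.median(abs(r - med) for r in resid)
robust_scale = mad / 0.6745    # MAD rescaled to be consistent for normal data

aic_classical = aic(sd)
aic_robust = aic(robust_scale)
```

    Because a single outlier inflates the standard deviation but barely moves the MAD, the robust criterion keeps ranking candidate models by how well they fit the bulk of the data.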

  14. Robustness of observation-based decadal sea level variability in the Indo-Pacific Ocean

    Science.gov (United States)

    Nidheesh, A. G.; Lengaigne, M.; Vialard, J.; Izumo, T.; Unnikrishnan, A. S.; Meyssignac, B.; Hamlington, B.; de Boyer Montegut, C.

    2017-07-01

    We examine the consistency of Indo-Pacific decadal sea level variability in 10 gridded, observation-based sea level products for the 1960-2010 period. Decadal sea level variations are robust in the Pacific, with more than 50% of variance explained by decadal modulation of two flavors of El Niño-Southern Oscillation (classical ENSO and Modoki). Amplitude of decadal sea level variability is weaker in the Indian Ocean than in the Pacific. All data sets indicate a transmission of decadal sea level signals from the western Pacific to the northwest Australian coast through the Indonesian throughflow. The southern tropical Indian Ocean sea level variability is associated with decadal modulations of ENSO in reconstructions but not in reanalyses or in situ data set. The Pacific-independent Indian Ocean decadal sea level variability is not robust but tends to be maximum in the southwestern tropical Indian Ocean. The inconsistency of Indian Ocean decadal variability across the sea level products calls for caution in making definitive conclusions on decadal sea level variability in this basin.

  15. Methodology in robust and nonparametric statistics

    CERN Document Server

    Jurecková, Jana; Picek, Jan

    2012-01-01

    Introduction and Synopsis: Introduction; Synopsis. Preliminaries: Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems. Robust Estimation of Location and Regression: Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems. Asymptotic Representations for L-Estimators

  16. Saccadic interception of a moving visual target after a spatiotemporal perturbation.

    Science.gov (United States)

    Fleuriet, Jérome; Goffart, Laurent

    2012-01-11

    Animals can make saccadic eye movements to intercept a moving object at the right place and time. Such interceptive saccades indicate that, despite variable sensorimotor delays, the brain is able to estimate the current spatiotemporal (hic et nunc) coordinates of a target at saccade end. The present work further tests the robustness of this estimate in the monkey when a change in eye position and a delay are experimentally added before the onset of the saccade and in the absence of visual feedback. These perturbations are induced by brief microstimulation in the deep superior colliculus (dSC). When the microstimulation moves the eyes in the direction opposite to the target motion, a correction saccade brings gaze back on the target path or very near. When it moves the eye in the same direction, the performance is more variable and depends on the stimulated sites. Saccades fall ahead of the target with an error that increases when the stimulation is applied more caudally in the dSC. The numerous cases of compensation indicate that the brain is able to maintain an accurate and robust estimate of the location of the moving target. The inaccuracies observed when stimulating the dSC that encodes the visual field traversed by the target indicate that dSC microstimulation can interfere with signals encoding the target motion path. The results are discussed within the framework of the dual-drive and the remapping hypotheses.

  17. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  18. A robust method for estimating motorbike count based on visual information learning

    Science.gov (United States)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. In such conditions, separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted toward drawing statistical inferences about target-object density from shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and achieves high accuracy in the presence of occlusions. Firstly, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from these local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy than others.
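
    The pipeline above (local descriptors, codebook quantization, a global histogram, and a learned mapping to a count) can be caricatured in a few lines. Everything here is a toy stand-in: two-dimensional "descriptors" replace SURF, a fixed three-word codebook replaces the learned vocabulary, and a one-parameter least-squares fit replaces the full regression model:

```python
# Bag-of-Words count estimation, caricature version.

codebook = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]  # fixed "visual words"

def quantize(desc):
    """Index of the nearest codeword (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], desc)))

def histogram(descs):
    h = [0] * len(codebook)
    for d in descs:
        h[quantize(d)] += 1
    return h

# Training "images": each motorbike contributes roughly two local features.
train = [
    ([(0.1, 0.2), (4.9, 5.1), (9.8, 0.1), (0.3, 0.0)], 2),  # 4 features, 2 bikes
    ([(0.0, 0.1), (5.2, 4.8), (10.1, 0.2), (4.8, 5.3),
      (0.2, 0.3), (9.9, 0.0), (5.0, 5.1), (0.1, 0.1)], 4),  # 8 features, 4 bikes
]

# One-parameter model: count ~ a * (total histogram mass), fit by least squares.
num = sum(sum(histogram(d)) * c for d, c in train)
den = sum(sum(histogram(d)) ** 2 for d, c in train)
a = num / den

test_feats = [(0.1, 0.0), (5.1, 5.0), (9.7, 0.3),
              (0.0, 0.2), (4.9, 4.9), (10.0, 0.1)]
predicted = a * sum(histogram(test_feats))
```

    The real method learns a richer mapping from full (per-word) histograms to counts, but the structure, descriptors to histogram to regression, is the same.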

  19. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for the evaluation of robustness of structural systems, including bridges and buildings. Modern structural design codes typically require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although the importance of robustness for structural design is widely recognized, the code requirements are not specified in detail, which makes their practical use difficult. This paper describes a theoretical, risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines...

  20. Proposed industrial recovered materials utilization targets for the textile mill products industry

    Energy Technology Data Exchange (ETDEWEB)

    1979-05-01

    Materials recovery targets were established to represent the maximum technically and economically feasible increase in the use of energy-saving materials by January 1, 1987. This report describes targets for the textile industry and describes how those targets were determined. (MCW)

  1. Robust portfolio selection under norm uncertainty

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2016-06-01

    Full Text Available Abstract In this paper, we consider the robust portfolio selection problem with data uncertainty described by the (p, w)-norm in the objective function. We show that the robust formulation of this problem is equivalent to a linear optimization problem. Moreover, we present some numerical results concerning our robust portfolio selection problem.

  2. Is countershading camouflage robust to lighting change due to weather?

    Science.gov (United States)

    Penacchio, Olivier; Lovell, P George; Harris, Julie M

    2018-02-01

    Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by a particular pattern delivering 'optimal' camouflage only for very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a 'generic' predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target 'prey'. We set these items in two light environments: strongly directional 'sunny' and more diffuse 'cloudy'. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in light environment, such as caused by differences in weather, do change the effectiveness of countershading camouflage.

  3. Spatio-Temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    Science.gov (United States)

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; Wu, Jin; Wu, Xiaocui; Gioli, Beniamino; Wohlfahrt, Georg; Cescatti, Alessandro; van der Tol, Christiaan; Zhou, Sha; Gough, Christopher M.; Gentine, Pierre; Zhang, Yongguang; Steinbrecher, Rainer; Ardö, Jonas

    2018-04-01

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production estimation. Here we use satellite-based solar-induced chlorophyll fluorescence as a proxy for photosynthetically active radiation absorbed by chlorophyll (APAR_chl) and derive an estimate of the fraction of PAR absorbed by chlorophyll (fPAR_chl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ε_max,chl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPAR_chl, suggesting that the corresponding ε_max,chl has less seasonal variation. This spatio-temporal convergence of LUE derived from fPAR_chl can be used to build simple but robust gross primary production models and to better constrain process-based models.

  4. Variable-structure approaches analysis, simulation, robust control and estimation of uncertain dynamic processes

    CERN Document Server

    Senkel, Luise

    2016-01-01

    This edited book aims at presenting current research activities in the field of robust variable-structure systems. The scope equally comprises highlighting novel methodological aspects as well as presenting the use of variable-structure techniques in industrial applications including their efficient implementation on hardware for real-time control. The target audience primarily comprises research experts in the field of control theory and nonlinear dynamics but the book may also be beneficial for graduate students.

  5. Adaptive Waveform Design for Cognitive Radar in Multiple Targets Situations

    Directory of Open Access Journals (Sweden)

    Xiaowen Zhang

    2018-02-01

    Full Text Available In this paper, the problem of cognitive radar (CR) waveform optimization design for target detection and estimation in multiple-extended-target situations is investigated. This problem is analyzed under signal-dependent interference, as well as additive channel noise, for extended targets with unknown target impulse response (TIR). To address this problem, an improved algorithm is employed for target detection that maximizes the detection probability of the received echo on the premise of ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the estimate of the TIR and the transmit waveform can be updated at each step based on the previous step. Under the same constraints on waveform energy and bandwidth, an information-theoretic approach is also considered, and the relationship between the waveforms designed under the two criteria is discussed. Unlike most existing works, which consider only a single target with temporally correlated characteristics, this method designs waveforms for multiple extended targets. Simulation results demonstrate that, compared with a linear frequency modulated (LFM) signal, waveforms designed under the maximum detection probability and maximum mutual information (MI) criteria make the radar echoes carry more information about the multiple targets and thereby improve radar performance.
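
    The maximum-MI criterion mentioned here has a well-known structure when the target response is modeled as a Gaussian random process: the optimal transmit energy spectrum water-fills over the target-to-noise ratio per frequency bin. The sketch below is a generic water-filling allocation under an energy constraint, not the paper's exact algorithm; the channel values, unit noise level, and bisection solver are all assumptions.

```python
import numpy as np

def waterfill(h2, noise, energy):
    """Water-filling energy allocation: maximize sum(log(1 + e*h2/noise))
    subject to sum(e) = energy, via bisection on the water level mu."""
    lo, hi = 0.0, energy + noise / h2.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        e = np.maximum(0.0, mu - noise / h2)
        if e.sum() > energy:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise / h2)

# Hypothetical squared target-frequency-response samples and unit noise level.
h2 = np.array([2.0, 1.0, 0.25, 0.1])
e = waterfill(h2, noise=1.0, energy=4.0)
print(e.round(3), round(float(e.sum()), 3))
```

Strong spectral modes of the target receive the most energy, while bins whose inverse response exceeds the water level get none — the qualitative behavior the MI-based designs in this record exploit.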

  6. Automatic target detection using binary template matching

    Science.gov (United States)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to the varying light conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
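
    Binary template matching after binarization reduces each candidate position to a Hamming-distance comparison, which is the source of the speed-up claimed here. The sketch below is a minimal illustration under assumptions: a min/max-midpoint threshold stands in for the paper's (unspecified) adaptive binarization, and the scene, template, and sizes are made up.

```python
import numpy as np

def binarize(img):
    """Crude adaptive binarization: threshold halfway between the image's
    min and max (a stand-in for the paper's unspecified adaptive scheme)."""
    thr = 0.5 * (img.min() + img.max())
    return (img > thr).astype(np.uint8)

def match(binary_img, template):
    """Slide a binary template over the binary image; score each position by
    the fraction of agreeing pixels (1 - normalized Hamming distance)."""
    H, W = binary_img.shape
    h, w = template.shape
    best_score, best_pos = -1.0, None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = binary_img[i:i + h, j:j + w]
            score = 1.0 - (patch ^ template).mean()
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 0.4, size=(20, 20))   # dim background
target = np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]], dtype=np.uint8)
scene[5:8, 9:12] += target                      # embed a bright target
pos, score = match(binarize(scene), target)
print(pos, score)
```

In practice the XOR-and-popcount step can be done on packed bit words, which is why binary matching is so much cheaper than grayscale correlation.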

  7. Common pitfalls in preclinical cancer target validation.

    Science.gov (United States)

    Kaelin, William G

    2017-07-01

    An alarming number of papers from laboratories nominating new cancer drug targets contain findings that cannot be reproduced by others or are simply not robust enough to justify drug discovery efforts. This problem probably has many causes, including an underappreciation of the danger of being misled by off-target effects when using pharmacological or genetic perturbants in complex biological assays. This danger is particularly acute when, as is often the case in cancer pharmacology, the biological phenotype being measured is a 'down' readout (such as decreased proliferation, decreased viability or decreased tumour growth) that could simply reflect a nonspecific loss of cellular fitness. These problems are compounded by multiple hypothesis testing, such as when candidate targets emerge from high-throughput screens that interrogate multiple targets in parallel, and by a publication and promotion system that preferentially rewards positive findings. In this Perspective, I outline some of the common pitfalls in preclinical cancer target identification and some potential approaches to mitigate them.

  8. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical model of how decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
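
    The median, maximum and trimmed combiners analyzed in this record are easy to state concretely. The sketch below fuses hypothetical per-class posterior outputs of three classifiers and picks the arg-max class; the numbers and the outlier setup are assumptions, not from the article.

```python
import numpy as np

def os_combine(outputs, kind="med", trim=1):
    """Fuse an (n_classifiers, n_classes) array of posterior estimates with an
    order-statistic rule and return the index of the winning class."""
    s = np.sort(outputs, axis=0)               # order statistics, per class
    if kind == "med":
        fused = np.median(outputs, axis=0)
    elif kind == "max":
        fused = s[-1]                          # largest order statistic
    else:                                      # "trim": drop extremes, average the rest
        fused = s[trim:len(s) - trim].mean(axis=0)
    return int(fused.argmax())

# Three classifiers score three classes; the third classifier is an outlier.
outs = np.array([[0.7, 0.2, 0.1],
                 [0.6, 0.3, 0.1],
                 [0.1, 0.1, 0.8]])
print(os_combine(outs, "med"), os_combine(outs, "max"), os_combine(outs, "trim"))
```

The median and trimmed rules ignore the outlying third classifier and keep class 0, while the max rule follows it — the sensitivity to uneven classifier quality the article quantifies.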

  9. UAV Robust Strategy Control Based on MAS

    Directory of Open Access Journals (Sweden)

    Jian Han

    2014-01-01

    Full Text Available A novel multiagent system (MAS) is proposed to integrate individual UAVs (unmanned aerial vehicles) into a UAV team that can accomplish complex missions with better efficiency and effect. MAS-based UAV team control copes better with dynamic situations and enhances performance beyond that of any single UAV. The MAS proposed and established here combines reactive and deliberative abilities into an initiative and autonomous hybrid system that can solve missions involving coordinated flight and cooperative operation. The MAS uses the BDI model to support its logical perception and to classify the different missions; the missions are then allocated by an auction mechanism after analyzing dynamic parameters. A Prim potential algorithm, a particle swarm algorithm, and a reallocation mechanism are proposed to realize rational decomposition and optimal allocation in order to reach the maximum profit. Simulations show that the MAS improves the mission success ratio and raises robustness, while confirming the feasibility of coordinated flight and the optimality of cooperative missions.

  10. Does a crouched leg posture enhance running stability and robustness?

    Science.gov (United States)

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height

  11. Expression robust 3D face recognition via mesh-based histograms of multiple order surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-09-01

    This paper presents a mesh-based approach for 3D face recognition using a novel local shape descriptor and a SIFT-like matching process. Both maximum and minimum curvatures estimated in the 3D Gaussian scale space are employed to detect salient points. To comprehensively characterize 3D facial surfaces and their variations, we calculate weighted statistical distributions of multiple-order surface differential quantities, including the histogram of mesh gradient (HoG), histogram of shape index (HoS) and histogram of gradient of shape index (HoGS), within a local neighborhood of each salient point. The subsequent matching step then robustly associates corresponding points of two facial surfaces, leading to many more matched points between different scans of the same person than between scans of different persons. Experimental results on the Bosphorus dataset highlight the effectiveness of the proposed method and its robustness to facial expression variations. © 2011 IEEE.

  12. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and the corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects intensify further if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
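
    Variance targeting, as discussed in this record, replaces the GARCH(1,1) intercept ω by σ̂²(1 − α − β), where σ̂² is the sample variance of returns, so the likelihood search runs over (α, β) only. A minimal sketch with simulated returns; the parameter values are assumptions, and the true (α, β) stand in for what QML would estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def garch_filter(r, omega, alpha, beta):
    """GARCH(1,1) variance recursion: s2[t] = omega + alpha*r[t-1]**2 + beta*s2[t-1]."""
    s2 = np.empty(len(r))
    s2[0] = r.var()  # common initialization: the sample variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2

# Simulate a GARCH(1,1) return path with assumed "true" parameters.
alpha, beta, omega = 0.08, 0.90, 0.02
n = 20000
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)          # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

# Variance targeting: tie the intercept to the sample variance,
# omega_vt = var(r) * (1 - alpha - beta), so QML only searches over (alpha, beta).
omega_vt = r.var() * (1.0 - alpha - beta)
s2_path = garch_filter(r, omega_vt, alpha, beta)
print(round(float(omega_vt), 4), round(float(s2_path.mean()), 3))
```

The targeted intercept recovers the true ω up to the sampling error of σ̂², which is exactly the extra noise source the article finds can degrade precision relative to full QML.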

  13. The "Dolphin" power laser installation for spherical thermonuclear target heating

    International Nuclear Information System (INIS)

    Basov, N.G.; Bykovskij, N.E.; Danilov, A.E.

    1978-01-01

    The 12-channel laser installation "Dolphin" for thermonuclear target heating in spherical irradiation geometry has been developed to carry out a series of physical investigations of the laser-thermonuclear plasma system, to optimize target heating conditions and to obtain a comparatively large thermonuclear output relative to the energy of the light radiation absorbed in the target. The main elements of the installation are described: 1) a neodymium laser with a maximum permissible radiation energy of 10 kJ, a light pulse duration of 10^-10 to 10^-9 s and a radiation divergence of approximately 5×10^-4 rad; 2) a vacuum chamber, where the interaction of laser radiation with plasma takes place; 3) diagnostics for the laser and plasma parameters; and 4) the focusing system. The focusing system provides a high degree of spherical irradiation symmetry of the target at a maximum flux density on its surface of approximately 10^15 W/cm^2.

  14. A theoretical and practical contribution to supply chain robustness:developing a schema for robustness in dyads

    OpenAIRE

    Durach, Christian F.

    2016-01-01

    Published in print by Universitätsverlag der TU Berlin, ISBN 978-3-7983-2812-9 (ISSN 1865-3170) This doctoral thesis develops four individual research studies on supply chain robustness. The overall goal of these studies is to develop a conceptual framework of supply chain robustness by consolidating current literature in the field, and, drawing on that framework, to construct a schema of determinants that facilitate robustness in buyer-supplier relationships. This research is motivated by...

  15. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is only partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used to develop a hierarchical Bayesian methodology to estimate the model, involving a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model, in terms of sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated in simulations that attest to its good finite-sample performance.
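
    The GPR special case mentioned here (the model the RSGPR generalizes) has a closed-form posterior mean. The sketch below is textbook GP regression with a squared-exponential kernel, ignoring the sample-selection machinery entirely; the kernel settings, noise level, and data are assumptions.

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two 1-D input sets."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(4)
x = np.linspace(0.0, 5.0, 25)                  # training inputs
y = np.sin(x) + 0.1 * rng.standard_normal(25)  # noisy observations

noise = 0.1 ** 2
K = rbf(x, x) + noise * np.eye(len(x))         # K(X, X) + sigma^2 I
xs = np.array([1.5, 3.0])                      # test inputs

# GP posterior mean: m(x*) = K(x*, X) [K(X, X) + sigma^2 I]^{-1} y
mean = rbf(xs, x) @ np.linalg.solve(K, y)
print(mean.round(2))
```

The RSGPR of this record replaces the Gaussian observation assumptions with a robust, selection-aware hierarchy, but the conditioning step above is the common core.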

  16. High gain direct drive target designs and supporting experiments with KrF

    International Nuclear Information System (INIS)

    Karasik, Max; Bates, Jason W.; Aglitskiy, Yefim

    2013-01-01

    Krypton-fluoride laser is an attractive inertial fusion energy driver from the standpoint of target physics. Target designs taking advantage of zooming, shock ignition, and favorable physics with KrF reach energy gains of 200 with sub-MJ laser energy. The designs are robust under 2D simulations. Experiments on the Nike KrF laser support the physics basis. (author)

  17. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods have become popular. However, these methods, such as Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and it is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
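
    The core idea — ascend a simulated gradient of a summary-statistic objective — can be illustrated on a toy model. The sketch below is a Kiefer-Wolfowitz-style finite-difference ascent that drives a simulated mean towards an observed one; the model N(θ, 1), the gain sequences, and the squared-distance objective are assumptions standing in for the paper's setup, not its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def sim_summary(theta, n=200):
    """Simulate the (assumed) model N(theta, 1) and return its summary statistic."""
    return rng.normal(theta, 1.0, size=n).mean()

s_obs = 2.0    # observed summary statistic (made up)
theta = -1.0   # starting value

for k in range(1, 2001):
    a = 0.5 / k ** 0.7   # decreasing step size
    c = 0.5 / k ** 0.2   # decreasing finite-difference width
    # Kiefer-Wolfowitz: estimate the ascent direction of the objective
    # -(s(theta) - s_obs)^2 from two noisy simulations.
    lp = -(sim_summary(theta + c) - s_obs) ** 2
    lm = -(sim_summary(theta - c) - s_obs) ** 2
    theta += a * (lp - lm) / (2.0 * c)

print(round(theta, 2))
```

Every objective evaluation is itself a simulation, so no likelihood is ever computed — only the noisy difference quotient steers the iterates, which is the defining trait of the stochastic-approximation approach described above.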

  18. Seismic Target Classification Using a Wavelet Packet Manifold in Unattended Ground Sensors Systems

    Directory of Open Access Journals (Sweden)

    Enliang Song

    2013-07-01

    Full Text Available One of the most challenging problems in target classification is the extraction of a robust feature that can effectively represent a specific type of target. The use of seismic signals in unattended ground sensor (UGS) systems makes this problem more complicated, because the seismic target signal is non-stationary, geology-dependent and has a high-dimensional feature space. This paper proposes a new feature extraction algorithm, called the wavelet packet manifold (WPM), which applies the neighborhood preserving embedding (NPE) algorithm of manifold learning to the wavelet packet node energy (WPNE) of seismic signals. By combining non-stationary information and low-dimensional manifold information, WPM provides a more robust representation for seismic target classification. Using a K-nearest-neighbors classifier on the WPM signature, the algorithm of wavelet packet manifold classification (WPMC) is proposed. Experimental results show that the proposed WPMC not only reduces feature dimensionality, but also improves the classification accuracy up to 95.03%. Moreover, compared with state-of-the-art methods, WPMC is more suitable for UGS in terms of recognition ratio and computational complexity.

  19. Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System

    Directory of Open Access Journals (Sweden)

    Raulefs Ronald

    2004-01-01

    Full Text Available We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements depend on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading instead of a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate at high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes, and the overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently than pure OFDMA. Applying rotated spreading in an MC-CDMA system allows diversity to be exploited even further. We demonstrate that the rotated spreading gain is still present with a robust pilot-aided channel estimator. In a well-designed system, rotated spreading improves performance by about 1 dB when a maximum likelihood detector with robust channel estimation is used at the receiver.

  20. EURISOL-DS MULTI-MW TARGET ISSUES: BEAM WINDOW AND TRANSVERSE FILM TARGET

    CERN Document Server

    Adonai Herrera-Martínez, Yacine Kadi

    Analysis of the precise geometry of the EURISOL-DS Multi-MW target (Fig. 1) has shown that large fission yields can be achieved with a 4 MW beam, while providing a technically feasible design for evacuating the power deposited in the liquid mercury. Different designs for the mercury flow have been proposed that maintain its temperature below the boiling point at moderate flow speeds (maximum 4 m/s).

  1. Robust control design with MATLAB

    CERN Document Server

    Gu, Da-Wei; Konstantinov, Mihail M

    2013-01-01

    Robust Control Design with MATLAB® (second edition) helps the student learn how to use well-developed advanced robust control design methods in practical cases. To this end, several realistic control design examples, from teaching-laboratory experiments such as a two-wheeled self-balancing robot to complex systems like a flexible-link manipulator, are given detailed presentation. All of these exercises are conducted using MATLAB® Robust Control Toolbox 3, Control System Toolbox and Simulink®. By sharing their experiences in industrial cases with minimum recourse to complicated theories and formulae, the authors convey essential ideas and useful insights into robust industrial control systems design using the major H-infinity optimization and related methods, allowing readers to quickly move on with their own challenges. The hands-on tutorial style of this text rests on an abundance of examples and features for the second edition: ·        rewritten and simplified presentation of theoretical and meth...

  2. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  3. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.

  4. On evaluating the robustness of spatial-proximity-based regionalization methods

    Science.gov (United States)

    Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles

    2016-08-01

    In absence of streamflow data to calibrate a hydrological model, its parameters are to be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfers hydrological information (typically calibrated parameter sets) from neighbor gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method will depend on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view on regionalization performance.

  5. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and costs less, and it simplifies analysis of the data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  6. Robust Sliding Mode Control of Air Handling Unit for Energy Efficiency Enhancement

    Directory of Open Access Journals (Sweden)

    Awais Shah

    2017-11-01

    Full Text Available In order to achieve a feasible and comfortable low-energy building, a robust and efficient air conditioning system is necessary. Since heating, ventilation and air conditioning systems are nonlinear and temperature and humidity are coupled, application of conventional control is inappropriate. A multi-input multi-output nonlinear model is presented. The temperature and humidity of the thermal zone are regulated by manipulating the water and air flow rates. A sliding mode controller (SMC) is designed to ensure robust performance of the air handling unit in the presence of uncertainties. A simple proportional-integral-derivative (PID) controller is used as a comparison template to highlight the efficiency of the proposed controller. To accomplish the tracking targets, a variety of desired temperature and relative humidity commands (including ramps and sequences of steps) are investigated. According to the simulation results, the SMC outperforms the PID controller in terms of settling time, steady-state error and rise time, which makes the SMC more energy efficient.
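
    A sliding mode law of the kind described can be illustrated on a toy model. The sketch below is not the paper's MIMO air-handling-unit model: the first-order plant, gains, and disturbance are all assumptions; it only shows the mechanism — a sliding surface s = x − x_ref and a switching term whose gain dominates the disturbance bound.

```python
import numpy as np

def simulate(k=5.0, dt=1e-3, T=5.0):
    """Euler simulation of a toy first-order zone model dx/dt = -a*x + u + d(t)
    under the sliding mode law u = a*x_ref - k*sign(s), s = x - x_ref."""
    a, x, x_ref = 1.0, 20.0, 24.0   # plant pole, initial and target temperature
    n = int(T / dt)
    xs = np.empty(n)
    for i in range(n):
        d = 1.5 * np.sin(2.0 * np.pi * 0.2 * i * dt)  # bounded disturbance, |d| <= 1.5
        s = x - x_ref                                  # sliding surface
        u = a * x_ref - k * np.sign(s)                 # equivalent + switching control
        x += dt * (-a * x + u + d)
        xs[i] = x
    return xs

xs = simulate()
# Steady-state tracking error over the final second of the run.
print(round(float(np.abs(xs[-1000:] - 24.0).max()), 3))
```

Because the switching gain k exceeds the disturbance bound, the state reaches the surface and stays in a thin chattering band around the setpoint regardless of the disturbance — the robustness property the abstract credits to SMC.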

  7. Assessing the Stability and Robustness of Semantic Web Services Recommendation Algorithms Under Profile Injection Attacks

    Directory of Open Access Journals (Sweden)

    GRANDIN, P. H.

    2014-06-01

    Full Text Available Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web services recommendation when submitted to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all the algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.
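
    The two baseline predictors compared in this record, IMEAN and UMEAN, are simple enough to state directly. A minimal sketch on a hypothetical user-by-item rating matrix (the ratings and sizes are made up); it also makes clear why UMEAN ignores injected profiles for an existing user while IMEAN averages them in.

```python
import numpy as np

# Toy user x item rating matrix (NaN = service not yet rated).
R = np.array([[5.0,    3.0,    np.nan],
              [4.0,    np.nan, 1.0],
              [np.nan, 4.0,    2.0]])

def imean(R, u, i):
    """IMEAN: predict with the average of the ratings received by the target item."""
    return float(np.nanmean(R[:, i]))

def umean(R, u, i):
    """UMEAN: predict with the average of the ratings made by the target user."""
    return float(np.nanmean(R[u, :]))

print(imean(R, 0, 2), umean(R, 0, 2))
```

Injected profiles add rows to R: they shift the column means used by IMEAN but leave an existing user's row, and hence UMEAN, untouched — consistent with the vulnerability ranking the experiments report.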

  8. How Robust is Your System Resilience?

    Science.gov (United States)

    Homayounfar, M.; Muneepeerakul, R.

    2017-12-01

    Robustness and resilience are concepts in systems thinking that have grown in importance and popularity. For many complex social-ecological systems, however, robustness and resilience are difficult to quantify, and the connections and trade-offs between them are difficult to study. Most studies have either focused on qualitative approaches to discuss their connections or considered only one of them under particular classes of disturbances. In this study, we present an analytical framework to address the linkage between robustness and resilience more systematically. Our analysis is based on a stylized dynamical model that operationalizes a widely used conceptual framework for social-ecological systems. The model enables us to rigorously define robustness and resilience and consequently investigate their connections. The results reveal the trade-offs among performance, robustness, and resilience. They also show how the nature of such trade-offs varies with the choices of certain policies (e.g., taxation and investment in public infrastructure), internal stresses and external disturbances.

  9. International Conference on Robust Statistics 2015

    CERN Document Server

    Basu, Ayanendranath; Filzmoser, Peter; Mukherjee, Diganta

    2016-01-01

    This book offers a collection of recent contributions and emerging ideas in the areas of robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015), held in Kolkata during 12–16 January 2015. The book explores the applicability of robust methods in non-traditional areas, including the use of new techniques such as skew and mixtures of skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and the prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of the statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statis...

  10. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  11. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by maximizing the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power are each plotted as a function of the time of day.
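    The calculation can be sketched with an assumed single-diode panel model (all parameter values below are illustrative, not from the article); the maximum-power voltage is located numerically where dP/dV vanishes:

    ```python
    import numpy as np

    # Single-diode panel model; parameters are illustrative assumptions.
    I_L, I_0 = 3.0, 1e-9          # photocurrent and diode saturation current [A]
    V_t = 0.026 * 36              # lumped thermal voltage for a 36-cell panel [V]

    def power(v):
        return v * (I_L - I_0 * np.expm1(v / V_t))   # P(V) = V * I(V)

    # Locate the root of dP/dV = 0 numerically on a fine voltage grid.
    v = np.linspace(0.0, 25.0, 200001)
    p = power(v)
    k = int(np.argmax(p))
    v_mp, p_max = v[k], p[k]
    print(f"V_mp = {v_mp:.2f} V, P_max = {p_max:.1f} W")
    ```

    Repeating this for the irradiance and temperature at each time of day yields the plotted curves the abstract describes.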

  12. Robustness of IPTV business models

    NARCIS (Netherlands)

    Bouwman, H.; Zhengjia, M.; Duin, P. van der; Limonard, S.

    2008-01-01

    The final stage in the STOF method is an evaluation of the robustness of the design, for which the method provides some guidelines. For many innovative services, the future holds numerous uncertainties, which makes evaluating the robustness of a business model a difficult task. In this chapter, we

  13. Robust Design Impact Metrics: Measuring the effect of implementing and using Robust Design

    DEFF Research Database (Denmark)

    Ebro, Martin; Olesen, Jesper; Howard, Thomas J.

    2014-01-01

    Measuring the performance of an organisation’s product development process can be challenging due to the limited use of metrics in R&D. An organisation considering whether to use Robust Design as an integrated part of their development process may find it difficult to define whether it is relevant, and afterwards to measure the effect of having implemented it. This publication identifies and evaluates Robust Design-related metrics and finds that two metrics are especially useful: 1) the relative amount of R&D resources spent after Design Verification and 2) the number of ‘change notes’ after Design Verification. The metrics have been applied in a case company to test the assumptions made during the evaluation. It is concluded that the metrics are useful and relevant, but further work is necessary to create a proper overview and categorisation of different types of robustness-related metrics.

  14. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    Directory of Open Access Journals (Sweden)

    Annette Mossel

    2015-12-01

    Full Text Available In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments: (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m.

  15. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, Adaboost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.

  16. Robustness of airline route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and for its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.
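    The robustness contrast can be illustrated with toy topologies (assumed shapes, not real airline data): a hub-and-spoke (FSC-like) network fragments after its best-connected airports are removed far more than a point-to-point (LCC-like) one:

    ```python
    from collections import deque

    def giant_after_hub_removal(adj, n_remove=2):
        """Remove the n_remove best-connected nodes, then return the size of
        the largest remaining connected component (found via BFS)."""
        adj = {u: set(vs) for u, vs in adj.items()}
        for _ in range(n_remove):
            hub = max(adj, key=lambda u: len(adj[u]))
            for v in adj.pop(hub):
                adj[v].discard(hub)
        seen, best = set(), 0
        for s in adj:
            if s in seen:
                continue
            seen.add(s)
            q, size = deque([s]), 0
            while q:
                u = q.popleft()
                size += 1
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        q.append(v)
            best = max(best, size)
        return best

    # FSC-like hub-and-spoke (star) vs. LCC-like point-to-point (ring), 10 airports each.
    star = {0: set(range(1, 10)), **{i: {0} for i in range(1, 10)}}
    ring = {i: {(i - 1) % 10, (i + 1) % 10} for i in range(10)}
    g_fsc = giant_after_hub_removal(star)
    g_lcc = giant_after_hub_removal(ring)
    print(g_fsc, g_lcc)   # the star collapses; the ring keeps a large component
    ```

    The star loses all connectivity once its hub goes, while the ring retains a large connected component, mirroring the paper's finding that centralized FSC networks are less robust to targeted failures.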

  17. SU-E-J-137: Incorporating Tumor Regression Into Robust Plan Optimization for Head and Neck Radiotherapy

    International Nuclear Information System (INIS)

    Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M

    2014-01-01

    Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated with an affine transformation, where morphological shrinkage and positional changes are modeled by a scaling factor and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over the patients other than the one being planned. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as for other organs at risk (OAR). The resulting robust plans were evaluated by examining the weekly and total accumulated dose to the actual weekly PTV and parotid structures. The robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly changes in tumor volume and position were −4% and 1.2 mm laterally-posteriorly. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than that of the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and robust plan optimization is feasible and may yield improved tumor coverage. Part of this research is supported

  18. Research on the robust optimization of the enterprise's decision on the investment to the collaborative innovation: Under the risk constraints

    International Nuclear Information System (INIS)

    Zhou, Qing; Fang, Gang; Wang, Dong-peng; Yang, Wei

    2016-01-01

    Abstract: The robust optimization model is applied to analyze the enterprise's decision on the investment portfolio for collaborative innovation under risk constraints. Through mathematical model deduction and simulation analysis, the research shows that the enterprise's investment in collaborative innovation has a fairly clear robust effect. In collaborative innovation, the return from the investment coexists with its risk. Under the risk constraints, the robust optimization method can solve for the minimum risk as well as the proportion of each investment scheme in the portfolio for different target returns on the investment. On the basis of this result, the enterprise can balance investment return against risk and make an optimal decision on the investment scheme.

  19. Efficient and robust gradient enhanced Kriging emulators.

    Energy Technology Data Exchange (ETDEWEB)

    Dalbey, Keith R.

    2013-08-01

    “Naive” or straightforward Kriging implementations can often perform poorly in practice. The relevant features of the robustly accurate and efficient Kriging and Gradient Enhanced Kriging (GEK) implementations in the DAKOTA software package are detailed herein. The principal contribution is a novel, effective, and efficient approach to handle ill-conditioning of GEK's “correlation” matrix, RÑ, based on a pivoted Cholesky factorization of Kriging's (not GEK's) correlation matrix, R, which is a small sub-matrix within GEK's RÑ matrix. The approach discards sample points/equations that contribute the least “new” information to RÑ. Since these points contain the least new information, they are the ones which, when discarded, are both the easiest to predict and provide the maximum improvement of RÑ's conditioning. Prior to this work, handling ill-conditioned correlation matrices was a major, perhaps the principal, unsolved challenge necessary for robust and efficient GEK emulators. Numerical results demonstrate that GEK predictions can be significantly more accurate when GEK is allowed to discard points by the presented method. Numerical results also indicate that GEK can be used to break the curse of dimensionality by exploiting inexpensive derivatives (such as those provided by automatic differentiation or adjoint techniques), smoothness in the response being modeled, and adaptive sampling. Development of a suitable adaptive sampling algorithm was beyond the scope of this work; instead, adaptive sampling was approximated by omitting the cost of samples discarded by the presented pivoted Cholesky approach.
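    The pivot idea can be sketched as follows; this is a generic greedy pivoted Cholesky (not DAKOTA's implementation), which orders sample points by the "new" information each contributes, so that trailing, tiny-pivot points are the natural candidates to discard:

    ```python
    import numpy as np

    def pivoted_cholesky(R):
        """Greedy pivoted Cholesky of an SPD correlation matrix R.
        Returns the pivot order (most to least 'new' information), the pivot
        values, and the Cholesky factor columns in pivot order."""
        n = R.shape[0]
        d = np.diag(R).astype(float).copy()     # residual variances per point
        L = np.zeros((n, n))
        chosen = np.zeros(n, dtype=bool)
        order, pivots = [], []
        for k in range(n):
            i = int(np.argmax(np.where(chosen, -np.inf, d)))
            piv = d[i]
            order.append(i); pivots.append(piv); chosen[i] = True
            if piv <= 1e-12:                    # nothing new left to add
                break
            col = (R[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(piv)
            L[:, k] = col
            d -= col ** 2                       # Schur-complement update
        return order, pivots, L

    # Three sample points; the first two nearly coincide, so one of them
    # carries almost no new information and gets the last (tiny) pivot.
    x = np.array([0.0, 0.001, 1.0])
    R = np.exp(-(x[:, None] - x[None, :]) ** 2)
    order, pivots, L = pivoted_cholesky(R)
    print(order)   # the near-duplicate point is selected last
    ```

    Discarding the trailing pivots drops exactly the equations that are easiest to predict from the rest, which is the conditioning argument the abstract makes.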

  20. Multimodel Robust Control for Hydraulic Turbine

    OpenAIRE

    Osuský, Jakub; Števo, Stanislav

    2014-01-01

    The paper deals with the multimodel and robust control system design and their combination based on M-Δ structure. Controller design will be done in the frequency domain with nominal performance specified by phase margin. Hydraulic turbine model is analyzed as system with unstructured uncertainty, and robust stability condition is included in controller design. Multimodel and robust control approaches are presented in detail on hydraulic turbine model. Control design approaches are compared a...

  1. Robustness of Linear Systems towards Multi-Dissipative Pertubations

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Poulsen, Niels Kjølstad

    1997-01-01

    We consider the question of robust stability of a linear time invariant plant subject to dynamic perturbations, which are dissipative in the sense of Willems with respect to several quadratic supply rates. For instance, parasitic dynamics are often both small gain and passive. We reduce several robustness analysis questions to linear matrix inequalities: robust stability, robust H2 performance and robust performance in the presence of disturbances with finite signal-to-noise ratios.

  2. Robust performance results for discrete-time systems

    Directory of Open Access Journals (Sweden)

    Mahmoud Magdi S.

    1997-01-01

    Full Text Available The problems of robust performance and feedback control synthesis for a class of linear discrete-time systems with time-varying parametric uncertainties are addressed in this paper. The uncertainties are bounded and have a linear matrix fractional form. Based on the concept of a strongly robust H∞-performance criterion, results on robust stability and performance are developed and expressed as easily computable linear matrix inequalities. Synthesis of robust feedback controllers is carried out for several system models of interest.

  3. Computer models and the evidence of anthropogenic climate change: An epistemology of variety-of-evidence inferences and robustness analysis.

    Science.gov (United States)

    Vezér, Martin A

    2016-04-01

    To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley's (2004) distinction between evidential strength and security, and Lloyd's (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models, and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and its primary causes are anthropogenic. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Robust adaptive synchronization of general dynamical networks ...

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 86, Issue 6. A robust adaptive synchronization scheme for these general complex networks with multiple delays and uncertainties is established by employing the robust adaptive control principle and the Lyapunov stability theory. We choose ...

  5. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  6. Robust synthesis for real-time systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Legay, Axel; Traonouez, Luois-Marie

    2014-01-01

    Specification theories for real-time systems allow reasoning about interfaces and their implementation models, using a set of operators that includes satisfaction, refinement, logical and parallel composition. To make such theories applicable throughout the entire design process from an abstract specification to an implementation, we need to reason about the possibility to effectively implement the theoretical specifications on physical systems, despite their limited precision. In the literature, this implementation problem has been linked to the robustness problem that analyzes the consequences of introducing small perturbations into formal models. We address this problem of robust implementations in timed specification theories. We first consider a fixed perturbation and study the robustness of timed specifications with respect to the operators of the theory. To this end we synthesize robust...

  7. SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy

    International Nuclear Information System (INIS)

    Montero, A Barragan; Sterpin, E; Lee, J

    2015-01-01

    Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with RayStation treatment planning system and for the support provided

  8. JAERI/KEK target material program overview

    International Nuclear Information System (INIS)

    Kikuchi, Kenji; Kogawa, Hiroyuki; Sasa, Toshinobu

    2001-01-01

    A mercury target was designed for the megawatt neutron scattering facility of the JAERI/KEK spallation neutron source. The incident proton energy and current are 3 GeV and 333 μA, respectively; the total beam power is 1 MW, delivered in short pulses at a frequency of 25 Hz. The mercury target was designed under the guide rule that the maximum temperature of the target window is 170 °C and the induced stresses for type 316 stainless steel remain within the limits of the design guide. In order to demonstrate ADS (Accelerator Driven Systems) transmutation, critical and engineering facilities have been designed conceptually. A lead-bismuth spallation target station is planned in the engineering facility. The objective of building the facility is to demonstrate material irradiation. According to neutronics calculations, the irradiation damage of the target vessel window will be 5 dpa per year. (author)

  9. ISAC target operation with high proton currents

    CERN Document Server

    Dombsky, M; Schmor, P; Lane, M

    2003-01-01

    The TRIUMF-ISAC facility target stations were designed for ISOL target irradiations with proton beam currents of up to 100 μA. Since beginning operation in 1998, ISAC irradiation currents have progressively increased from initial values of approximately 1 μA to present levels of up to 40 μA on refractory metal foil targets. In addition, refractory carbide targets have operated at currents of up to 15 μA for extended periods. The 1–40 μA operational regime is achieved by tailoring each target to the thermal requirements dictated by material properties such as beam power deposition, thermal conductivity and maximum operating temperature of the target material. The number of heat shields on each target can be varied in order to match the effective emissivity of the target surface for the required radiative power dissipation. Targets of different thickness, surface area and volume have been investigated to study the effect of diffusion and effusion delays on the yield of radioisotopes. For yields of short-lived p...

  10. Active Multimodal Sensor System for Target Recognition and Tracking.

    Science.gov (United States)

    Qu, Yufu; Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-06-28

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

  11. Analysis to determine the maximum dimensions of flexible apertures in sensored security netting products.

    Energy Technology Data Exchange (ETDEWEB)

    Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.; Mack, Thomas Kimball; Cutler, Robert P; Ross, Michael P.

    2013-08-01

    Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for the maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed, but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that is conformable to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that conform to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.
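    A quick computation makes the argument concrete: a fixed area does not bound an opening's circumference, since shapes of equal area can have arbitrarily large perimeters (numbers below are illustrative geometry, not from the report):

    ```python
    import math

    area = 96.0  # square inches, the traditional opening-size standard
    # A circle is the minimum-perimeter shape for a given area.
    circle_perimeter = 2 * math.sqrt(math.pi * area)  # about 34.7 in
    # A 48 in x 2 in slit has the same area but a much larger perimeter,
    # and a flexible net conforms to whichever shape the intruder makes.
    slit_perimeter = 2 * (48 + 2)                     # 100 in
    print(round(circle_perimeter, 1), slit_perimeter)
    ```

    This is why an area criterion fails for conformable apertures and a circumference limit is the better-posed requirement.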

  12. Robust and flexible mapping for real-time distributed applications during the early design phases

    DEFF Research Database (Denmark)

    Gan, Junhe; Pop, Paul; Gruian, Flavius

    2012-01-01

    We are interested in mapping hard real-time applications on distributed heterogeneous architectures. An application is modeled as a set of tasks, and we consider a fixed-priority preemptive scheduling policy. We target the early design phases, when decisions have a high impact on the subsequent... in the functionality requirements are captured using “future scenarios”, which are task sets that model functionality likely to be added in the future. In this context, we derive a mapping of tasks in the application, such that the resulting implementation is both robust and flexible. Robust means that the application has a high chance of being schedulable, considering the wcet uncertainties, whereas a flexible mapping has a high chance to successfully accommodate the future scenarios. We propose a Genetic Algorithm-based approach to solve this optimization problem. Extensive experiments show the importance...

  13. Infrared target recognition based on improved joint local ternary pattern

    Science.gov (United States)

    Sun, Junding; Wu, Xiaosheng

    2016-05-01

    This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary pattern, for automatic forward-looking infrared target recognition. By fusing a variety of scales, it describes macroscopic and microscopic textures better than traditional LBP-based methods. In addition, it can effectively reduce the feature dimensionality. Further, the rotation-invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method achieves competitive results compared with state-of-the-art methods.
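    For reference, a textbook local ternary pattern on a single 3×3 patch (not the paper's joint orthogonal-combination variant) looks like this; the threshold and pixel values are illustrative:

    ```python
    import numpy as np

    def ltp(patch, t=5):
        """Local ternary pattern of one 3x3 patch: each neighbor is coded +1
        if it exceeds the center by at least t, -1 if it falls below it by at
        least t, and 0 otherwise; the ternary code then splits into 'upper'
        and 'lower' binary patterns."""
        c = patch[1, 1]
        nb = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]  # clockwise
        tern = np.where(nb >= c + t, 1, np.where(nb <= c - t, -1, 0))
        return tern, (tern == 1).astype(int), (tern == -1).astype(int)

    patch = np.array([[60, 52, 48],
                      [70, 55, 40],
                      [45, 58, 66]])
    tern, upper, lower = ltp(patch)
    print(tern)
    ```

    The tolerance band around the center is what makes LTP less noise-sensitive than plain LBP; histograms of the upper and lower binary patterns then serve as the texture descriptor.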

  14. A Novel Evolutionary Algorithm for Designing Robust Analog Filters

    Directory of Open Access Journals (Sweden)

    Shaobo Li

    2018-03-01

    Full Text Available Designing robust circuits that withstand environmental perturbation and device degradation is critical for many applications. Traditional robust circuit design is mainly done by tuning parameters to improve system robustness. However, the topological structure of a system may set a limit on the robustness achievable through parameter tuning. This paper proposes a new evolutionary algorithm for robust design that exploits the open-ended topological search capability of genetic programming (GP) coupled with bond graph modeling. We applied our GP-based robust design (GPRD) algorithm to evolve robust lowpass and highpass analog filters. Compared with a traditional robust design approach based on a state-of-the-art real-parameter genetic algorithm (GA), our GPRD algorithm, with a fitness criterion rewarding robustness with respect to parameter perturbations, can evolve more robust filters than are achieved through parameter tuning alone. We also find that inappropriate GA tuning may mislead the search process and that the multiple-simulation and perturbed fitness evaluation methods for evolving robustness have complementary behaviors, with no absolute advantage of one over the other.

  15. REINA at CLEF 2007 Robust Task

    OpenAIRE

    Zazo Rodríguez, Ángel Francisco; Figuerola, Carlos G.; Alonso Berrocal, José Luis

    2007-01-01

    This paper describes our work at the CLEF 2007 Robust Task. We have participated in the monolingual (English, French and Portuguese) and the bilingual (English to French) subtasks. At CLEF 2006 our research group obtained very good results applying local query expansion using windows of terms in the robust task. This year we have used the same expansion technique, but taking into account some criteria of robustness: MAP, GMAP, MMR, GS@10, P@10, number of failed topics, number of topics below 0.1 ...

  16. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. The Variation Management Framework (VMF) for Robust Design

    DEFF Research Database (Denmark)

    Howard, Thomas J.; Ebro, Martin; Eifler, Tobias

    2014-01-01

    Robust Design is an approach to reducing the effects of variation. There are numerous tools, methods and models associated with robust design; however, there is both a lack of a process model formalising the steps of a robust design process and a framework tying the models together. In this paper we pr... in the market place and identifies areas where action can be taken against variation. An additional benefit of the framework is that it makes the link between visual/sensory/perceptual robustness, product robustness, and production variation (Six Sigma)....

  18. Measure of robustness for complex networks

    Science.gov (United States)

    Youssef, Mina Nabil

    Critical infrastructures are repeatedly attacked by external triggers causing a tremendous amount of damage. Any infrastructure can be studied using the powerful theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities, providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade the network performance. These attacks and failures are considered disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and the cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damage and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the Viral Conductance (VCSIS) to assess the robustness of networks with respect to the spread of epidemics that are modeled through the susceptible/infected/susceptible (SIS) epidemic approach. In contrast to assessing the robustness of networks based on a classical metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state for all possible effective infection strengths. Through examples, VCSIS provides more insights about the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabasi-Albert preferential attachment networks and the effect of the topology on the steady-state infection are studied, to show the importance of quantifying the robustness of networks. Second, a new metric, VCSIR, is introduced to assess the robustness of networks with respect ...
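A rough numerical sketch of the viral-conductance idea: compute the mean-field (NIMFA) SIS steady state over a grid of effective infection rates and integrate the infected fraction. The dissertation's exact definition integrates over the effective curing rate and may normalize differently; the graphs and grid below are illustrative only:

```python
import numpy as np

def sis_steady_state(A, tau, iters=2000):
    """NIMFA mean-field fixed point for SIS: p[i] is the steady-state
    infection probability of node i at effective infection rate tau."""
    p = np.full(A.shape[0], 0.5)
    for _ in range(iters):
        f = tau * A.dot(p)
        p = f / (1.0 + f)
    return p

def viral_conductance(A, taus):
    """Integrate the steady-state infected fraction over a grid of
    effective infection rates (trapezoid rule): larger values mean the
    network is less robust to SIS epidemics."""
    y = [sis_steady_state(A, t).mean() for t in taus]
    return sum(0.5 * (y[i] + y[i + 1]) * (taus[i + 1] - taus[i])
               for i in range(len(taus) - 1))

K5 = np.ones((5, 5)) - np.eye(5)                                     # complete graph
C5 = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)  # ring
taus = np.linspace(0.0, 1.0, 21)
vc_complete = viral_conductance(K5, taus)
vc_ring = viral_conductance(C5, taus)
```

On the same grid the complete graph scores higher than the ring, reflecting its lower epidemic threshold and larger steady-state infection, i.e. less robustness.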

  19. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize the pumping power consumption at optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter derived from the thermal analytical model of the solar heating system was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. A PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods on five different days is between 18.1 and 22.9 kg/min with average pumping power between 77 and 140 W, which is greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, based on the flow rate of 0.02 kg/(s·m²) defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW depending on weather conditions. The MPPT control of the solar heating system has been verified to minimize pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
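A toy perturb-and-observe tracker in the spirit of the MPPT loop described above. The plant model for Q_net (saturating solar collection minus cubic pumping power over motor efficiency) is an invented stand-in, not the paper's experimentally identified model, and the paper uses a PI controller with a tracking filter rather than this simple hill climb:

```python
def q_net(m_dot):
    """Toy net-gain model (invented stand-in for the paper's plant):
    Q_net = Q_s - W_p / eta_e, in kW, with flow rate m_dot in kg/min.
    Collection saturates with flow while pumping power grows ~ m_dot**3."""
    q_s = 10.0 * m_dot / (m_dot + 5.0)
    w_p = 0.004 * m_dot ** 3
    return q_s - w_p / 0.5

def mppt_perturb_observe(m0=5.0, step=0.5, iters=200):
    """Perturb-and-observe hill climb: nudge the flow rate, keep the
    direction while Q_net improves, reverse when it drops."""
    m, direction = m0, 1.0
    q_prev = q_net(m)
    for _ in range(iters):
        m_new = max(0.1, m + direction * step)
        q_new = q_net(m_new)
        if q_new < q_prev:
            direction = -direction      # overshot the optimum: reverse
        m, q_prev = m_new, q_new
    return m

m_opt = mppt_perturb_observe()          # oscillates near the optimum flow
```

The tracker settles into a small oscillation around the flow rate that maximizes the toy Q_net, which is the qualitative behavior MPPT exploits.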

  20. Scaling laws for simple heavy ion targets

    International Nuclear Information System (INIS)

    Gula, W.P.; Magelssen, G.R.

    1981-01-01

    We have examined the behavior of single shell DT gas filled spherical targets irradiated by a constant power heavy ion beam pulse. For targets in which the ion range is less than the shell thickness, our computational results suggest that the target can be divided into three regions: (1) the absorber (100 to 400 eV for the energies we have considered), (2) the cold pusher (a few eV), and (3) the DT gas fuel. We have examined the pusher collapse time, velocity, and maximum kinetic energy variations as functions of the various target parameters and ion beam energy. The results are expressed in analytic terms and verified by computer simulation

  1. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    Science.gov (United States)

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
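To illustrate why an amplitude multiplier changes a computed fractal dimension at all, here is Katz's estimator (one standard method, not the paper's robust algorithm): its value depends on the ratio of curve length to planar extent, so rescaling the amplitude of a normalised signal shifts the result:

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal sampled at unit intervals:
    D = log10(n) / (log10(n) + log10(d / L)), where L is the total curve
    length and d the maximum planar distance from the first sample.
    Amplitude scaling changes d/L and hence the dimension."""
    n = len(x) - 1
    length = sum(math.hypot(1.0, x[i + 1] - x[i]) for i in range(n))
    diameter = max(math.hypot(i, x[i] - x[0]) for i in range(1, len(x)))
    return math.log10(n) / (math.log10(n) + math.log10(diameter / length))

wave = [math.sin(0.2 * i) for i in range(200)]
fd_small = katz_fd([0.2 * v for v in wave])   # compressed amplitude
fd_large = katz_fd([5.0 * v for v in wave])   # stretched amplitude
```

The stretched signal yields a larger dimension than the compressed one, which is exactly the sensitivity the paper's multiplier optimisation exploits.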

  2. Cell Density Affects the Detection of Chk1 Target Engagement by the Selective Inhibitor V158411.

    Science.gov (United States)

    Geneste, Clara C; Massey, Andrew J

    2018-02-01

    Understanding drug target engagement and the relationship to downstream pharmacology is critical for drug discovery. Here we have evaluated target engagement of Chk1 by the small-molecule inhibitor V158411 using two different target engagement methods (autophosphorylation and cellular thermal shift assay [CETSA]). Target engagement measured by these methods was subsequently related to Chk1 inhibitor-dependent pharmacology. Inhibition of autophosphorylation was a robust method for measuring V158411 Chk1 target engagement. In comparison, while target engagement determined using CETSA appeared robust, the V158411 CETSA target engagement EC50 values were 43- and 19-fold greater than the autophosphorylation IC50 values. This difference was attributed to the higher cell density in the CETSA assay configuration. pChk1 (S296) IC50 values determined using the CETSA assay conditions were 54- and 33-fold greater than those determined under standard conditions and were equivalent to the CETSA EC50 values. Cellular conditions, especially cell density, influenced the target engagement of V158411 for Chk1. The effects of high cell density on apparent compound target engagement potency should be evaluated when using target engagement assays that necessitate high cell densities (such as the CETSA conditions used in this study). In such cases, the subsequent relation of these data to downstream pharmacological changes should therefore be interpreted with care.

  3. Resolving Multi-Stakeholder Robustness Asymmetries in Coupled Agricultural and Urban Systems

    Science.gov (United States)

    Li, Yu; Giuliani, Matteo; Castelletti, Andrea; Reed, Patrick

    2016-04-01

    The evolving pressures from a changing climate and society are increasingly motivating decision support frameworks that consider the robustness of management actions across many possible futures. Focusing on robustness is helpful for investigating key vulnerabilities within current water systems and for identifying potential tradeoffs across candidate adaptation responses. To date, most robustness studies assume a social planner perspective by evaluating highly aggregated measures of system performance. This aggregate treatment of stakeholders does not explore the equity or intrinsic multi-stakeholder conflicts implicit to the system-wide measures of performance benefits and costs. The commonly present heterogeneity across complex management interests, however, may produce strong asymmetries for alternative adaptation options, designed to satisfy system-level targets. In this work, we advance traditional robustness decision frameworks by replacing the centralized social planner with a bottom-up, agent-based approach, where stakeholders are modeled as individuals, and represented as potentially self-interested agents. This agent-based model enables a more explicit exploration of the potential inequities and asymmetries in the distribution of the system-wide benefit. The approach is demonstrated by exploring the potential conflicts between urban flooding and agricultural production in the Lake Como system (Italy). Lake Como is a regulated lake that is operated to supply water to the downstream agricultural district (Muzza as the pilot study area in this work) composed of a set of farmers with heterogeneous characteristics in terms of water allocation, cropping patterns, and land properties. Supplying water to farmers increases the risk of floods along the lakeshore and therefore the system is operated based on the tradeoff between these two objectives. We generated an ensemble of co-varying climate and socio-economic conditions and evaluated the robustness of the

  4. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

    This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip...

  5. Robust methods for multivariate data analysis A1

    DEFF Research Database (Denmark)

    Frosch, Stina; Von Frese, J.; Bro, Rasmus

    2005-01-01

    Outliers may hamper proper classical multivariate analysis, and lead to incorrect conclusions. To remedy the problem of outliers, robust methods are developed in statistics and chemometrics. Robust methods reduce or remove the effect of outlying data points and allow the 'good' data to primarily... determine the result. This article reviews the most commonly used robust multivariate regression and exploratory methods that have appeared since 1996 in the field of chemometrics. Special emphasis is put on the robust versions of chemometric standard tools like PCA and PLS and the corresponding robust...

  6. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2009-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure.

  7. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet, and present some properties that characterize words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...

  8. On the role of secondary pions in spallation targets

    Energy Technology Data Exchange (ETDEWEB)

    Mancusi, Davide [Paris-Saclay Univ., Gif-sur-Yvette (France). Den-Service d'Etude des Reacteurs et de Mathematiques Appliquees (SERMA); Lo Meo, Sergio [ENEA, Research Centre 'Ezio Clementel', Bologna (Italy); INFN, Bologna (Italy); Colonna, Nicola [INFN, Bari (Italy); Boudard, Alain; David, Jean-Christophe; Leray, Sylvie [Paris-Saclay Univ., Gif-sur-Yvette (France). IRFU, CEA; Cortes-Giraldo, Miguel Antonio; Lerendegui-Marco, Jorge [Sevilla Univ. (Spain). Facultad de Fisica; Cugnon, Joseph [Liege Univ. (Belgium). AGO Dept.; Massimi, Cristian [INFN, Bologna (Italy); Bologna Univ. (Italy). Physics and Astronomy Dept.; Vlachoudis, Vasilis [European Organization for Nuclear Research (CERN), Geneva (Switzerland)]

    2017-05-15

    We use particle-transport simulations to show that secondary pions play a crucial role for the development of the hadronic cascade and therefore for the production of neutrons and photons from thick spallation targets. In particular, for the nTOF lead spallation target, irradiated with 20 GeV/c protons, neutral pions are involved in the production of ≈90% of the high-energy photons; charged pions participate in ≈40% of the integral neutron yield. Nevertheless, photon and neutron yields are shown to be relatively insensitive to large changes of the average pion multiplicity in the individual spallation reactions. We characterize this robustness as a peculiar property of hadronic cascades in thick targets. (orig.)

  9. On the role of secondary pions in spallation targets

    CERN Document Server

    Mancusi, Davide; Colonna, Nicola; Boudard, Alain; Cortés-Giraldo, Miguel Antonio; Cugnon, Joseph; David, Jean-Christophe; Leray, Sylvie; Lerendegui-Marco, Jorge; Massimi, Cristian; Vlachoudis, Vasilis

    2017-01-01

    We use particle-transport simulations to show that secondary pions play a crucial role for the development of the hadronic cascade and therefore for the production of neutrons and photons from thick spallation targets. In particular, for the n_TOF lead spallation target, irradiated with 20-GeV/c protons, neutral pions are involved in the production of ~90% of the high-energy photons; charged pions participate in ~40% of the integral neutron yield. Nevertheless, photon and neutron yields are shown to be relatively insensitive to large changes of the average pion multiplicity in the individual spallation reactions. We characterize this robustness as a peculiar property of hadronic cascades in thick targets.

  10. SU-E-T-287: Robustness Study of Passive-Scattering Proton Therapy in Lung: Is Range and Setup Uncertainty Calculation On the Initial CT Enough to Predict the Plan Robustness?

    Energy Technology Data Exchange (ETDEWEB)

    Ding, X; Dormer, J; Kenton, O; Liu, H; Simone, C; Solberg, T; Lin, L [University of Pennsylvania, Philadelphia, PA (United States)

    2014-06-01

    Purpose: Plan robustness of passive-scattering proton therapy treatment of lung tumors has been studied previously using combined uncertainties of 3.5% in CT number and 3 mm geometric shifts. In this study, we investigate whether this method is sufficient to predict proton plan robustness by comparing to plans performed on weekly verification CT scans. Methods: Ten lung cancer patients treated with passive-scattering proton therapy were randomly selected. All plans were prescribed 6660 cGy in 37 fractions. Each initial plan was calculated using ±3.5% range and ±0.3 cm setup uncertainty in the x, y and z directions in the Eclipse TPS (Method A). Throughout the treatment course, patients received weekly verification CT scans to assess the daily treatment variation (Method B). After contours and imaging registrations were verified by the physician, the initial plan with the same beamline and compensator was mapped onto the verification CT. Dose volume histograms (DVH) were evaluated for the robustness study. Results: Differences are observed between Methods A and B in terms of iCTV coverage and lung dose. Method A shows all iCTV D95 values within ±1% difference, while 20% of cases fall outside the ±1% range in Method B. In the worst-case scenario (WCS), the iCTV D95 is reduced by 2.5%. All lung V5 and V20 values are within ±5% in Method A, while 15% of V5 and 10% of V20 values fall outside ±5% in Method B. In the WCS, lung V5 increased by 15% and V20 increased by 9%. Methods A and B show good agreement with regard to cord maximum and esophagus mean dose. Conclusion: This study suggests that using a range and setup uncertainty calculation (±3.5% and ±3 mm) may not be sufficient to predict the WCS. In the absence of regular verification scans, expanding the conventional uncertainty parameters (e.g., to ±3.5% and ±4 mm) may be needed to better reflect actual plan robustness.

  11. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  12. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    Science.gov (United States)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    Conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions may be invalid. To solve this problem, a novel algorithm is proposed in the following four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, removing the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. The application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
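The Huber technique at the core of the proposed filter can be sketched in isolation: an M-estimator that down-weights large residuals instead of squaring them. The snippet applies it to a simple location estimate via iteratively reweighted least squares, not to the full cubature filter; the tuning constant 1.345 is the common default, and the data are invented:

```python
def huber_weight(r, delta=1.345):
    """Huber influence weight: 1 inside the quadratic zone, delta/|r| in
    the linear tails, so large (glint-like) residuals are damped."""
    a = abs(r)
    return 1.0 if a <= delta else delta / a

def huber_mean(data, delta=1.345, iters=50):
    """Location M-estimate via iteratively reweighted least squares."""
    mu = sum(data) / len(data)
    for _ in range(iters):
        w = [huber_weight(x - mu, delta) for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

samples = [1.0, 1.2, 0.9, 1.1, 1.05, 25.0]    # one glint-like outlier
robust_loc = huber_mean(samples)
```

The plain mean of the samples is pulled above 5 by the outlier, while the Huber estimate stays near the inlier cluster, which is the behavior the Huber-based filter inherits.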

  13. Some objective measures indicative of perceived voice robustness in student teachers.

    Science.gov (United States)

    Orr, Rosemary; de Jong, Felix; Cranen, Bert

    2002-01-01

    One of the problems confronted in the teaching profession is the maintenance of a healthy voice. This basic pedagogical tool is subjected to extensive use, and frequently suffers from overload, with some teachers having to give up their profession altogether. In some teacher training schools, it is the current practice to examine the student's voice, and to refer any perceived susceptibility to strain to voice specialists. For this study, a group of vocally healthy students were examined first at the teacher training schools, and then at the ENT clinic at the University Hospital of Nijmegen. The aim was to predict whether the subject's voice might be at risk for occupational dysphonia as a result of the vocal load of the teaching profession. We tried to find objective measures of voice quality in student teachers, used in current clinical practice, which reflect the judgements of the therapists and phoniatricians. We tried to explain such measures physiologically in terms of robustness of, and control over voicing. Objective measures used included video-laryngostroboscopy, phonetography and spectrography. Maximum phonation time, melodic range in conjunction with maximum intensity range, and the production of soft voice are suggested as possible predictive parameters for the risk of occupational voice strain.

  14. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...
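For the quoted bore and stroke, the swept volume works out as follows. This is a sketch of the arithmetic only; the regulation's own rounding convention and per-engine summation rules are not reproduced here:

```python
import math

def swept_volume_cm3(bore_cm, stroke_cm):
    """Swept (displacement) volume of one cylinder: pi * (bore/2)**2 * stroke."""
    return math.pi * (bore_cm / 2.0) ** 2 * stroke_cm

v_cyl = swept_volume_cm3(13.0, 15.5)     # one 13.0 cm bore x 15.5 cm stroke cylinder
v_litres = round(v_cyl / 1000.0, 1)      # rounding to one decimal is illustrative
```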

  15. Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter

    KAUST Repository

    Luo, Xiaodong

    2011-12-01

    A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
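The covariance-inflation step that the record identifies as a special case of the EnTLHF can be sketched directly: scaling ensemble anomalies by a factor λ preserves the ensemble mean and multiplies the sample covariance by λ². The H∞ derivation itself is not reproduced here; the ensemble below is synthetic:

```python
import numpy as np

def inflate(ensemble, lam):
    """Multiplicative covariance inflation: scale the ensemble anomalies by
    lam, preserving the mean and scaling the sample covariance by lam**2."""
    mean = ensemble.mean(axis=0)
    return mean + lam * (ensemble - mean)

rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 3))          # 100 members, 3 state variables
ens_infl = inflate(ens, 1.1)
```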

  16. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    International Nuclear Information System (INIS)

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Siewerdsen, Jeffrey H; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L

    2013-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust

  17. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    Energy Technology Data Exchange (ETDEWEB)

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Siewerdsen, Jeffrey H [Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD (United States); Uneri, Ali [Department of Computer Science, Johns Hopkins University, Baltimore MD (United States); Kleinszig, Gerhard; Vogt, Sebastian [Siemens Healthcare, Erlangen (Germany); Khanna, A Jay [Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore MD (United States); Gokaslan, Ziya L, E-mail: jeff.siewerdsen@jhu.edu [Department of Neurosurgery, Johns Hopkins University, Baltimore MD (United States)

    2013-12-07

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust

  18. IMRT delivery to a moving target by dynamic MLC tracking: delivery for targets moving in two dimensions in the beam's eye view

    International Nuclear Information System (INIS)

    McQuaid, D; Webb, S

    2006-01-01

    A new modification of the dynamic multileaf collimator (dMLC) delivery technique for intensity-modulated therapy (IMRT) is outlined. This technique enables the tracking of a target moving through rigid-body translations in a 2D trajectory in the beam's eye view. The accuracy of the delivery versus that of deliveries with no tracking and of 1D tracking techniques is quantified with clinically derived intensity-modulated beams (IMBs). Leaf trajectories calculated in the target-reference frame were iteratively synchronized assuming regular target motion. This allowed the leaves defined in the lab-reference frame to simultaneously follow the target motion and to deliver the required IMB without violation of the leaf maximum-velocity constraint. The leaves are synchronized until the gradient of the leaf position at every instant is less than a calculated maximum. The delivered fluence in the target-reference frame was calculated with a simple primary-fluence model. The new 2D tracking technique was compared with the delivered fluence produced by no-tracking deliveries and by 1D tracking deliveries for 33 clinical IMBs. For the clinical IMBs normalized to a maximum fluence of 200 MUs, the rms difference between the desired and the delivered IMB was 15.6 ± 3.3 MU for the case of a no-tracking delivery, 7.9 ± 1.6 MU for the case where only the primary component of motion was corrected and 5.1 ± 1.1 MU for the 2D tracking delivery. The residual error is due to interpolation and sampling effects. The 2D tracking delivery technique requires an increase in the delivery time evaluated as between 0 and 50% of the unsynchronized delivery time for each beam with a mean increase of 13% for the IMBs tested. The 2D tracking dMLC delivery technique allows an optimized IMB to be delivered to moving targets with increased accuracy and with acceptable increases in delivery time. When combined with real-time knowledge of the target motion at delivery time, this technique facilitates
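A heavily simplified sketch of the synchronization idea: the lab-frame leaf trajectory combines the planned (target-frame) trajectory with the target motion, and the delivery is slowed until the leaf maximum-velocity constraint holds. The paper's synchronization is iterative and gradient-based per control point, and slowing delivery would in practice also re-sample the target motion; both refinements are glossed over here, and the trajectories below are invented:

```python
def synchronize(leaf_target_frame, target_motion, dt, v_max):
    """Stretch the delivery time until the lab-frame leaf speed, combining
    the planned leaf trajectory with the target motion, stays within the
    leaf maximum-velocity constraint. Returns (lab positions, stretch)."""
    stretch = 1.0
    lab = [p + m for p, m in zip(leaf_target_frame, target_motion)]
    while True:
        speeds = [abs(lab[i + 1] - lab[i]) / (dt * stretch)
                  for i in range(len(lab) - 1)]
        if max(speeds) <= v_max:
            return lab, stretch
        stretch *= 1.1                   # slow delivery by 10% and re-check

lab, stretch = synchronize([0.0, 0.5, 1.0, 1.5],
                           [0.0, 0.3, -0.3, 0.0], dt=0.1, v_max=2.0)
```

The stretch factor plays the role of the delivery-time increase (0–50% per beam in the paper) paid for respecting the velocity constraint while tracking.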

  19. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  20. A Survey on Robustness in Railway Planning

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Bull, Simon Henry

    2018-01-01

    Planning problems in passenger railway range from long-term strategic decision making to the detailed planning of operations. Operations research methods have played an increasing role in this planning process. However, recently more attention has been given to considerations of robustness in the quality of solutions to individual planning problems, and of operations in general. Robustness in general is the capacity for some system to absorb or resist changes. In the context of railways, robustness is often taken to be the capacity for operations to continue at some level when faced with a disruption such as delay or failure. This has resulted in more attention given to the inclusion of robustness measures and objectives in individual planning problems, and to providing tools to ensure operations continue under disrupted situations. In this paper we survey the literature on robustness...

  1. Enhanced target normal sheath acceleration of protons from intense laser interaction with a cone-tube target

    Directory of Open Access Journals (Sweden)

    K. D. Xiao

    2016-01-01

    Laser-driven proton acceleration is proposed to be greatly enhanced by using a cone-tube target, which can be easily manufactured by current 3D-printing technology. It is observed that energetic electron bunches are generated along the tube and accelerated to a much higher temperature by the combination of the ponderomotive force and a longitudinal electric field induced by the optical confinement of the laser field. As a result, a localized and enhanced sheath field is produced at the rear of the target, and the maximum proton energy is increased about three-fold according to two-dimensional particle-in-cell simulation results. It is demonstrated that by employing this advanced target scheme, the scaling of proton energy versus laser intensity is well beyond the normal target normal sheath acceleration (TNSA) case.

  2. A robust interpretation of duration calculus

    DEFF Research Database (Denmark)

    Franzle, M.; Hansen, Michael Reichhardt

    2005-01-01

    We transfer the concept of robust interpretation from arithmetic first-order theories to metric-time temporal logics. The idea is that the interpretation of a formula is robust iff its truth value does not change under small variation of the constants in the formula. Exemplifying this on Duration Calculus (DC), our findings are that the robust interpretation of DC is equivalent to a multi-valued interpretation that uses the real numbers as semantic domain and assigns Lipschitz-continuous interpretations to all operators of DC. Furthermore, this continuity permits approximation between discrete...

  3. Design Robust Controller for Rotary Kiln

    Directory of Open Access Journals (Sweden)

    Omar D. Hernández-Arboleda

    2013-11-01

    This paper presents the design of a robust controller for a rotary kiln. The designed controller combines a fractional PID and a linear quadratic regulator (LQR), which have not previously been used to control the kiln. In addition, robustness criteria (gain margin, phase margin, gain strength, high-frequency noise rejection, and sensitivity) are evaluated for the entire controller-plant model, obtaining good results over a frequency range of 0.020 to 90 rad/s, which contributes to the robustness of the system.

  4. Danish Requirements for Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Christensen, H. H.

    2006-01-01

    This paper describes the background of the revised robustness requirements implemented in the Danish Code of Practice for Safety of Structures in 2003 [1, 2, 3]. According to the Danish design rules, robustness shall be documented for all structures where the consequences of failure are serious. This paper describes the background of the design procedure in the Danish codes, which shall be followed in order to document sufficient robustness in the following steps: Step 1: review of loads and possible failure modes/scenarios and determination of acceptable collapse extent. Step 2: review of the structural...

  5. Robust lyapunov controller for uncertain systems

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Elmetennani, Shahrazed

    2017-01-01

    Various examples of systems and methods are provided for Lyapunov control for uncertain systems. In one example, a system includes a process plant and a robust Lyapunov controller configured to control an input of the process plant. The robust

  6. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
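    To make the "least bias" idea concrete: on a finite support with a fixed mean, the maximum-entropy distribution takes the exponential (Gibbs) form p_i ∝ exp(λ·x_i). The following stdlib-only sketch finds λ by bisection; the support, target mean, and bisection bracket are illustrative choices, not values from the article.

    ```python
    import math

    def maxent_with_mean(support, target_mean, lo=-50.0, hi=50.0, iters=200):
        """Maximum-entropy distribution on a finite support with a fixed mean.

        The solution has the Gibbs form p_i proportional to exp(lam * x_i);
        lam is found by bisection, since the implied mean is monotone
        increasing in lam.
        """
        def mean_for(lam):
            w = [math.exp(lam * x) for x in support]
            z = sum(w)
            return sum(x * wi for x, wi in zip(support, w)) / z

        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if mean_for(mid) < target_mean:
                lo = mid
            else:
                hi = mid
        lam = 0.5 * (lo + hi)
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return [wi / z for wi in w]

    # With the mean pinned at the unconstrained average, maxent returns the
    # uniform distribution: anything else would inject unjustified information.
    p_uniform = maxent_with_mean([1, 2, 3, 4], 2.5)
    ```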

  7. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  8. Robustness Analysis of Typologies of Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Parigi, Dario

    2013-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness in structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper outlines robustness issues related to the future development of typologies of reciprocal timber structures. The paper concludes that these kinds of structures can have a potential as long-span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  9. Robust Moving Horizon H∞ Control of Discrete Time-Delayed Systems with Interval Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    F. Yıldız Tascikaraoglu

    2014-01-01

    In this study, the design of a delay-dependent moving horizon H∞ state-feedback control (MHHC) is considered for a class of linear discrete-time systems subject to time-varying state delays, norm-bounded uncertainties, and disturbances with bounded energies. The closed-loop robust stability and robust performance problems are considered in order to overcome the instability and poor disturbance rejection caused by the parametric uncertainties and time delay in the system dynamics. Utilizing a discrete-time Lyapunov-Krasovskii functional, some delay-dependent linear matrix inequality (LMI) based conditions are provided. It is shown that if a feasible solution set for these LMI conditions can be found iteratively at each step of run-time, then a control law can be constructed which guarantees closed-loop asymptotic stability, maximum disturbance rejection performance, and closed-loop dissipativity in view of the actuator limitations. Two numerical examples with simulations on nominal and uncertain discrete-time, time-delayed systems are presented in order to demonstrate the efficiency of the proposed method.

  10. Maximum principles and sharp constants for solutions of elliptic and parabolic systems

    CERN Document Server

    Kresin, Gershon

    2012-01-01

    The main goal of this book is to present results pertaining to various versions of the maximum principle for elliptic and parabolic systems of arbitrary order. In particular, the authors present necessary and sufficient conditions for validity of the classical maximum modulus principles for systems of second order and obtain sharp constants in inequalities of Miranda-Agmon type and in many other inequalities of a similar nature. Somewhat related to this topic are explicit formulas for the norms and the essential norms of boundary integral operators. The proofs are based on a unified approach using, on one hand, representations of the norms of matrix-valued integral operators whose target spaces are linear and finite dimensional, and, on the other hand, on solving certain finite dimensional optimization problems. This book reflects results obtained by the authors, and can be useful to research mathematicians and graduate students interested in partial differential equations.

  11. Robust Tracking with Discriminative Ranking Middle-Level Patches

    Directory of Open Access Journals (Sweden)

    Hong Liu

    2014-04-01

    The appearance model has been shown to be essential for robust visual tracking, since it is the basic criterion for locating targets in video sequences. Though existing tracking-by-detection algorithms have shown great promise, they still suffer from the drift problem, which is caused by updating appearance models. In this paper, we propose a new appearance model composed of ranking middle-level patches to capture more object distinctiveness than traditional tracking-by-detection models. Targets and backgrounds are represented by both low-level bottom-up features and high-level top-down patches, which can complement each other. Bottom-up features are defined at the pixel level, and each feature gets its discrimination score through a selective feature attention mechanism. In top-down feature extraction, rectangular patches are ranked according to their bottom-up discrimination scores, by which all of them are clustered into irregular patches, named ranking middle-level patches. In addition, at the stage of classifier training, the online random forests algorithm is specially refined to reduce drift. Experiments on challenging public datasets and our test videos demonstrate that our approach can effectively prevent tracker drift and obtain competitive performance in visual tracking.

  12. RobOKoD: microbial strain design for (over)production of target compounds.

    Science.gov (United States)

    Stanford, Natalie J; Millard, Pierre; Swainston, Neil

    2015-01-01

    Sustainable production of target compounds such as biofuels and high-value chemicals for the pharmaceutical, agrochemical, and chemical industries is becoming an increasing priority given their current dependency upon diminishing petrochemical resources. Designing these strains is difficult, with current methods focusing primarily on knocking out genes, dismissing other vital steps of strain design including the overexpression and dampening of genes. The design predictions from current methods also do not translate well into successful strains in the laboratory. Here, we introduce RobOKoD (Robust, Overexpression, Knockout and Dampening), a method for predicting strain designs for overproduction of targets. The method uses flux variability analysis to profile each reaction within the system under differing production percentages of target compound and biomass. Using these profiles, reactions are identified as potential knockout, overexpression, or dampening targets. The identified reactions are ranked according to their suitability, providing flexibility in strain design for users. The software was tested by designing a butanol-producing Escherichia coli strain, and was compared against the popular OptKnock and RobustKnock methods. RobOKoD shows favorable design predictions when compared against a successful, experimentally validated butanol-producing strain. Overall, RobOKoD provides users with rankings of predicted beneficial genetic interventions with which to support optimized strain design.

  13. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and correlation in the subpixel range... The interactive software is also part of a computer-assisted learning program on digital photogrammetry.
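    The first of the two methods, the maximum correlation coefficient, can be sketched in 1D. The réseau imagery in the paper is 2D and adds subpixel refinement; this simplified sketch only shows the matching criterion itself, with a toy signal of my own choosing.

    ```python
    import math

    def ncc(patch, template):
        """Normalized cross-correlation coefficient of two equal-length windows."""
        n = len(template)
        mp = sum(patch) / n
        mt = sum(template) / n
        num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
        den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                        sum((t - mt) ** 2 for t in template))
        return num / den if den else 0.0

    def best_match(signal, template):
        """Slide the template across the signal; return the offset of maximum ncc."""
        scores = [ncc(signal[i:i + len(template)], template)
                  for i in range(len(signal) - len(template) + 1)]
        return max(range(len(scores)), key=scores.__getitem__)

    # The target pattern [1, 5, 9, 5, 1] sits at offset 2 of this toy scan line.
    offset = best_match([0, 0, 1, 5, 9, 5, 1, 0, 0], [1, 5, 9, 5, 1])
    ```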

  14. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
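    A brute-force sketch of the unweighted problem on a tiny graph, written for clarity rather than efficiency (exponential time; the example graph and helper names are illustrative, not from the paper):

    ```python
    from itertools import combinations

    def max_clique_transversal(vertices, edges):
        """Minimum vertex set intersecting every maximum clique (brute force).

        Exponential time; intended only for the tiny graphs of this sketch.
        """
        adj = {v: set() for v in vertices}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def is_clique(c):
            return all(v in adj[u] for u, v in combinations(c, 2))

        # Collect all maximum cliques by searching from the largest size down.
        cliques = []
        for k in range(len(vertices), 0, -1):
            cliques = [set(c) for c in combinations(vertices, k) if is_clique(c)]
            if cliques:
                break

        # Smallest vertex subset hitting every maximum clique.
        for k in range(1, len(vertices) + 1):
            for cand in combinations(vertices, k):
                s = set(cand)
                if all(s & c for c in cliques):
                    return s

    # Two triangles sharing vertex 0: both maximum cliques are hit by {0} alone.
    hitting_set = max_clique_transversal(
        [0, 1, 2, 3, 4],
        [(0, 1), (1, 2), (0, 2), (0, 3), (3, 4), (0, 4)])
    ```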

  15. Robust identification of transcriptional regulatory networks using a Gibbs sampler on outlier sum statistic.

    Science.gov (United States)

    Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert

    2012-08-01

    Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. xuan@vt.edu Supplementary data are available at Bioinformatics online.
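    For reference, one common form of the outlier-sum statistic (robust standardization by median and IQR, then summing the standardized values beyond the upper fence) can be sketched as follows. The paper's Gibbs sampler for the statistic's marginal distribution is not reproduced here, and the fence choice below is one conventional option rather than the authors' exact definition.

    ```python
    def quantile(xs, q):
        """Linear-interpolation quantile of a sorted copy of xs (0 <= q <= 1)."""
        s = sorted(xs)
        pos = q * (len(s) - 1)
        i = int(pos)
        frac = pos - i
        return s[i] if frac == 0 else s[i] * (1 - frac) + s[i + 1] * frac

    def outlier_sum(values):
        """Outlier-sum statistic: total of robustly standardized values that
        exceed the upper fence q75 + IQR.

        Standardizing by median and IQR lets a handful of strongly shifted
        samples dominate the statistic while the bulk contributes nothing.
        """
        med = quantile(values, 0.5)
        q1, q3 = quantile(values, 0.25), quantile(values, 0.75)
        iqr = (q3 - q1) or 1.0          # guard against constant data
        std = [(v - med) / iqr for v in values]
        cut = quantile(std, 0.75) + (quantile(std, 0.75) - quantile(std, 0.25))
        return sum(v for v in std if v > cut)
    ```

    On constant data the statistic is zero; a couple of strongly over-expressed samples push it far above the background.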

  16. Robustness of Long Span Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Balfroid, Nathalie; Kirkegaard, Poul Henning

    2011-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. The interest has also been facilitated by recent severe structural failures... A structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper discusses such robustness issues related to the future development of reciprocal timber structures. The paper concludes that these kinds of structures can have a potential as long-span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  17. Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices

    Science.gov (United States)

    Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie

    2016-09-01

    Soft actuators made from elastomeric active materials can find widespread implementation in a variety of applications, ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, to bioinspired and biomimetic systems, gripping and manipulation of fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design, and a design tool, that produce actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models, developed using the finite element method, can predict the actuator behavior at large mechanical strains, allowing efficient design iterations for system optimization. Based on experimental results from two distinctive actuator prototypes (linear and bending actuators), including free displacement and blocked forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.

  18. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Abstract Background: Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results: We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions: MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree.
    Although the pseudo-likelihood is derived from coalescent theory and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  19. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charge for a service is the supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
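    The two-part tariff under discussion is simple to state. Under illustrative (made-up) rates, two consumers with identical energy use but different peaks receive very different bills, which is the source of the discrimination the article questions:

    ```python
    def electricity_bill(kwh_used, peak_kw, energy_rate=0.10, demand_rate=12.0):
        """Monthly bill under a two-part tariff: an energy charge on total
        consumption plus a charge on the maximum (peak) demand.

        The rates are hypothetical placeholders, not values from the article.
        """
        return kwh_used * energy_rate + peak_kw * demand_rate

    # Same consumption, different peaks: the flat-profile consumer pays less.
    steady = electricity_bill(1000, peak_kw=2.0)
    peaky = electricity_bill(1000, peak_kw=10.0)
    ```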

  20. Nonlinear robust hierarchical control for nonlinear uncertain systems

    Directory of Open Access Journals (Sweden)

    Leonessa Alexander

    1999-01-01

    A nonlinear robust control-system design framework predicated on a hierarchical switching controller architecture parameterized over a set of moving nominal system equilibria is developed. Specifically, using equilibria-dependent Lyapunov functions, a hierarchical nonlinear robust control strategy is developed that robustly stabilizes a given nonlinear system over a prescribed range of system uncertainty by robustly stabilizing a collection of nonlinear controlled uncertain subsystems. The robust switching nonlinear controller architecture is designed based on a generalized (lower semicontinuous) Lyapunov function obtained by minimizing a potential function over a given switching set induced by the parameterized nominal system equilibria. The proposed framework robustly stabilizes a compact positively invariant set of a given nonlinear uncertain dynamical system with structured parametric uncertainty. Finally, the efficacy of the proposed approach is demonstrated on a jet engine propulsion control problem with uncertain pressure-flow map data.

  1. The importance of robust design methodology

    DEFF Research Database (Denmark)

    Eifler, Tobias; Howard, Thomas J.

    2018-01-01

    infamous recalls in automotive history, that of the GM ignition switch, from the perspective of Robust Design. It is investigated if available Robust Design methods such as sensitivity analysis, tolerance stack-ups, design clarity, etc. would have been suitable to account for the performance variation...

  2. Security and robustness for collaborative monitors

    NARCIS (Netherlands)

    Testerink, Bas; Bulling, Nils; Dastani, Mehdi

    2016-01-01

    Decentralized monitors can be subject to robustness and security risks. Robustness risks include attacks on the monitor’s infrastructure in order to disable parts of its functionality. Security risks include attacks that try to extract information from the monitor and thereby possibly leak sensitive

  3. The "Robustness" of Vocabulary Intervention in the Public Schools: Targets and Techniques Employed in Speech-Language Therapy

    Science.gov (United States)

    Justice, Laura M.; Schmitt, Mary Beth; Murphy, Kimberly A.; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention--in terms of targets and techniques--for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of…

  4. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.

  5. Towards distortion-free robust image authentication

    International Nuclear Information System (INIS)

    Coltuc, D

    2007-01-01

    This paper investigates a general framework for distortion-free robust image authentication by multiple marking. First, a subsampled version of the image edges is embedded by robust watermarking. Then, the information needed to recover the original image is inserted by reversible watermarking. The hiding capacity of the reversible watermarking is the essential requirement for this approach. Thus, in the case of no attacks, not only is the image authenticated but the original is also exactly recovered. In the case of attacks, reversibility is lost, but the image can still be authenticated. Preliminary results showing very good robustness against JPEG compression are presented.

  6. A Very Robust AlGaN/GaN HEMT Technology to High Forward Gate Bias and Current

    Directory of Open Access Journals (Sweden)

    Bradley D. Christiansen

    2012-01-01

    Reports to date of GaN HEMTs subjected to forward gate bias stress include varied extents of degradation. We report an extremely robust GaN HEMT technology that, contrary to conventional wisdom, survived high forward gate bias (+6 V) and current (>1.8 A/mm) for more than 17.5 hours, exhibiting only a slight change in the gate diode characteristic, little decrease in maximum drain current, only a 0.1 V positive threshold voltage shift, and, remarkably, a persisting breakdown voltage exceeding 200 V.

  7. Robust coordinated control of a dual-arm space robot

    Science.gov (United States)

    Shi, Lingling; Kayastha, Sharmila; Katupitiya, Jay

    2017-09-01

    Dual-arm space robots are more capable of implementing complex space tasks than single-arm space robots. However, the dynamic coupling between the arms and the base has a serious impact on the spacecraft attitude and the hand motion of each arm. Instead of considering one arm as the mission arm and the other as the balance arm, in this work both arms of the space robot perform as mission arms aimed at accomplishing the secure capture of a floating target. The paper investigates coordinated control of the base's attitude and the arms' motion in the task space in the presence of system uncertainties. Two types of controllers, i.e., a Sliding Mode Controller (SMC) and a nonlinear Model Predictive Controller (MPC), are verified and compared with a conventional Computed-Torque Controller (CTC) through numerical simulations in terms of control accuracy and system robustness. Both controllers eliminate the need to linearly parameterize the dynamic equations. The MPC has been shown to achieve higher accuracy than the CTC and SMC in the absence of system uncertainties, under the condition that they consume comparable energy. When system uncertainties are included, SMC and CTC show better robustness than MPC. Specifically, in a case where the system inertia increases, SMC delivers higher accuracy than CTC and costs the least energy.
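    The robustness mechanism that favors SMC under uncertainty can be shown on a scalar toy system rather than the dual-arm dynamics: a switching term whose gain exceeds the disturbance bound drives the state to the sliding surface regardless of what the disturbance does. All gains and the disturbance below are illustrative choices, not from the paper.

    ```python
    import math

    def simulate_smc(x0=1.0, dt=1e-3, steps=5000, K=2.0, lam=5.0):
        """First-order sliding-mode regulation of x' = u + d toward x = 0.

        The matched disturbance d is unknown to the controller; the switching
        gain K only needs to exceed its bound (|d| <= 1 here) for robustness.
        Euler integration; chattering keeps |x| within a few K*dt of zero.
        """
        x = x0
        for i in range(steps):
            d = math.sin(10 * i * dt)                    # unknown disturbance
            sgn = 1 if x > 0 else -1 if x < 0 else 0
            u = -lam * x - K * sgn                       # SMC law
            x += (u + d) * dt
        return x

    x_final = simulate_smc()
    ```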

  8. On the robustness of Herlihy's hierarchy

    Science.gov (United States)

    Jayanti, Prasad

    1993-01-01

    A wait-free hierarchy maps object types to levels in Z⁺ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  9. Replication and robustness in developmental research.

    Science.gov (United States)

    Duncan, Greg J; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J

    2014-11-01

    Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key results are robust across estimation methods, data sets, and demographic subgroups. This article makes the case for prioritizing both explicit replications and, especially, within-study robustness checks in developmental psychology. It provides evidence on variation in effect sizes in developmental studies and documents strikingly different replication and robustness-checking practices in a sample of journals in developmental psychology and a sister behavioral science, applied economics. Our goal is not to show that any one behavioral science has a monopoly on best practices, but rather to show how journals from a related discipline address vital concerns of replication and generalizability shared by all social and behavioral sciences. We provide recommendations for promoting graduate training in replication and robustness-checking methods and for editorial policies that encourage these practices. Although some of our recommendations may shift the form and substance of developmental research articles, we argue that they would generate considerable scientific benefits for the field. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  10. Maximum power point tracking-based control algorithm for PMSG wind generation system without mechanical sensors

    International Nuclear Information System (INIS)

    Hong, Chih-Ming; Chen, Chiung-Hsing; Tu, Chia-Sheng

    2013-01-01

    Highlights: ► This paper presents MPPT-based control for optimal wind energy capture using an RBFN. ► MPSO is adopted to adjust the learning rates to improve the learning capability. ► This technique can maintain system stability and reach the desired performance. ► The EMF in the rotating reference frame is utilized to estimate speed. - Abstract: This paper presents maximum-power-point-tracking (MPPT) based control algorithms for optimal wind energy capture using a radial basis function network (RBFN) and a proposed torque-observer MPPT algorithm. A high-performance on-line training RBFN, using the back-propagation learning algorithm with a modified particle swarm optimization (MPSO) regulating controller, is designed for the sensorless control of a permanent magnet synchronous generator (PMSG). The MPSO is adopted in this study to adapt the learning rates in the back-propagation process of the RBFN to improve the learning capability. The PMSG is controlled by loss-minimization control with MPPT below the base speed, which corresponds to low and high wind speeds, so that maximum energy can be captured from the wind. The observed disturbance torque is then fed forward to increase the robustness of the PMSG system
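
    The below-base-speed MPPT idea can be illustrated with the classic optimal-torque law, a generic wind-turbine MPPT scheme rather than the paper's RBFN/MPSO controller. Commanding generator torque T* = k_opt·ω² keeps the rotor at its optimal tip-speed ratio; all turbine parameters here are hypothetical example values.

```python
import math

# Hypothetical turbine parameters (illustrative only, not from the paper)
RHO = 1.225       # air density [kg/m^3]
R = 1.5           # rotor radius [m]
CP_MAX = 0.45     # maximum power coefficient
LAMBDA_OPT = 7.0  # optimal tip-speed ratio

def k_opt():
    """Optimal-torque gain from P = 0.5*rho*pi*R^2*Cp*v^3 with v = omega*R/lambda."""
    return 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def mppt_torque(omega):
    """Reference generator torque T* = k_opt * omega^2 tracking the maximum power point."""
    return k_opt() * omega**2

def optimal_speed(v_wind):
    """Rotor speed [rad/s] that keeps the tip-speed ratio at its optimum."""
    return LAMBDA_OPT * v_wind / R
```

    At the optimal operating point the commanded torque exactly balances the aerodynamic torque P/ω, which is why this simple law captures maximum wind energy without a wind-speed sensor.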

  11. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  12. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    DEFF Research Database (Denmark)

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system. A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness is expressed and evaluated by a robustness index. Next, the robustness is assessed using system reliability indices where the probabilistic failure model is modelled by a series system of parallel systems.

  13. Realistic, achievable and effective targets and timetables

    International Nuclear Information System (INIS)

    Hambley, M.G.

    1997-01-01

    The current status of U.S. policy regarding climate change, and the U.S. perspective on targets and timetables were discussed. U.S. policy is based on four particular points: (1) legally binding, multi-year emissions budgets, (2) focus on medium, not short-term targets, (3) maximum flexibility offered to parties to reach whatever targets are agreed upon, and (4) a proposal concerning developing countries. It was strongly suggested that if the December 1997 conference in Kyoto is to succeed, developing countries would have to have a role in negotiations. Greenhouse gas emissions and climate change are global issues, and can only be solved by global action

  14. Identification of a robust subpathway-based signature for acute myeloid leukemia prognosis using an miRNA integrated strategy.

    Science.gov (United States)

    Chang, Huijuan; Gao, Qiuying; Ding, Wei; Qing, Xueqin

    2018-01-01

    Acute myeloid leukemia (AML) is a heterogeneous disease, and survival signatures are urgently needed to better monitor treatment. MiRNAs play vital regulatory roles on their target genes and are necessarily involved in such complex diseases. We therefore examined the expression levels of miRNAs and genes to identify robust signatures for survival benefit analyses. First, we reconstructed subpathway graphs by embedding miRNA components that were derived from low-throughput miRNA-gene interactions. Then, we randomly divided the data sets from The Cancer Genome Atlas (TCGA) into training and testing sets, and further formed 100 subsets based on the training set. Using each subset, we identified survival-related miRNAs and genes, and identified survival subpathways based on the reconstructed subpathway graphs. After statistical analyses of these survival subpathways, the most robust subpathways with the top three ranks were identified, and risk scores were calculated based on these robust subpathways for AML patient prognoses. Among these robust subpathways, three representative subpathways, path: 05200_10 from Pathways in cancer, path: 04110_20 from Cell cycle, and path: 04510_8 from Focal adhesion, were significantly associated with patient survival in the TCGA training and testing sets based on subpathway risk scores. In conclusion, we performed integrated analyses of miRNAs and genes to identify robust prognostic subpathways, and calculated subpathway risk scores to characterize AML patient survival.

  15. Analysis of the role of homology arms in gene-targeting vectors in human cells.

    Directory of Open Access Journals (Sweden)

    Ayako Ishii

    Full Text Available Random integration of targeting vectors into the genome is the primary obstacle in human somatic cell gene targeting. Non-homologous end-joining (NHEJ, a major pathway for repairing DNA double-strand breaks, is thought to be responsible for most random integration events; however, absence of DNA ligase IV (LIG4, the critical NHEJ ligase, does not significantly reduce the random integration frequency of targeting vectors in human cells, indicating robust integration events occurring via a LIG4-independent mechanism. To gain insights into the mechanism and robustness of LIG4-independent random integration, we employed various types of targeting vectors to examine their integration frequencies in LIG4-proficient and deficient human cell lines. We find that the integration frequency of the targeting vector correlates well with the length of homology arms and with the amount of repetitive DNA sequences, especially SINEs, present in the arms. This correlation was prominent in LIG4-deficient cells, but was also seen in LIG4-proficient cells, thus providing evidence that LIG4-independent random integration occurs frequently even when NHEJ is functionally normal. Our results collectively suggest that random integration frequency of conventional targeting vectors is substantially influenced by homology arms, which typically harbor repetitive DNA sequences that serve to facilitate LIG4-independent random integration in human cells, regardless of the presence or absence of functional NHEJ.

  16. Design principles for robust oscillatory behavior.

    Science.gov (United States)

    Castillo-Hair, Sebastian M; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Oscillatory responses are ubiquitous in regulatory networks of living organisms, a fact that has led to extensive efforts to study and replicate the circuits involved. However, to date, design principles that underlie the robustness of natural oscillators are not completely known. Here we study a three-component enzymatic network model in order to determine the topological requirements for robust oscillation. First, by simulating every possible topological arrangement and varying their parameter values, we demonstrate that robust oscillators can be obtained by augmenting the number of both negative feedback loops and positive autoregulations while maintaining an appropriate balance of positive and negative interactions. We then identify network motifs, whose presence in more complex topologies is a necessary condition for obtaining oscillatory responses. Finally, we pinpoint a series of simple architectural patterns that progressively render more robust oscillators. Together, these findings can help in the design of more reliable synthetic biomolecular networks and may also have implications in the understanding of other oscillatory systems.

  17. Robustness of Distance-to-Default

    DEFF Research Database (Denmark)

    Jessen, Cathrine; Lando, David

    2013-01-01

    Distance-to-default is a remarkably robust measure for ranking firms according to their risk of default. The ranking seems to work despite the fact that the Merton model from which the measure is derived produces default probabilities that are far too small when applied to real data. We use simulations to investigate the robustness of the distance-to-default measure to different model specifications. Overall we find distance-to-default to be robust to a number of deviations from the simple Merton model that involve different asset value dynamics and different default triggering mechanisms. A notable exception is a model with stochastic volatility of assets. In this case both the ranking of firms and the estimated default probabilities using distance-to-default perform significantly worse. We therefore propose a volatility adjustment of the distance-to-default measure, that significantly...
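
    For reference, a minimal sketch of the textbook Merton-model distance-to-default underlying the measure discussed above (the standard formula; the parameter values used below are made up for illustration):

```python
import math

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Merton-model DD: (ln(V/D) + (mu - 0.5*sigma^2)*T) / (sigma*sqrt(T)),
    with asset value V, default barrier (debt face value) D, asset drift mu,
    asset volatility sigma and horizon T."""
    return (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

def default_probability(dd):
    """P(default) = N(-DD) under the model's lognormal asset assumption."""
    return 0.5 * math.erfc(dd / math.sqrt(2))
```

    Firms are ranked by DD, higher meaning safer; as the abstract notes, the ranking is far more reliable than the probabilities N(-DD) themselves, which come out too small on real data.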

  18. Robustness of digital artist authentication

    DEFF Research Database (Denmark)

    Jacobsen, Robert; Nielsen, Morten

    In many cases it is possible to determine the authenticity of a painting from digital reproductions of the paintings; this has been demonstrated for a variety of artists and with different approaches. Common to all these methods in digital artist authentication is that the potential of the method is in focus, while the robustness has not been considered, i.e. the degree to which the data collection process influences the decision of the method. However, in order for an authentication method to be successful in practice, it needs to be robust to plausible error sources from the data collection. In this paper we investigate the robustness of the newly proposed authenticity method introduced by the authors based on second generation multiresolution analysis. This is done by modelling a number of realistic factors that can occur in the data collection.

  19. A water-cooled target of a 14 MeV neutron source

    International Nuclear Information System (INIS)

    Ogawa, Masuro; Seki, Masahiro; Kawamura, Hiroshi; Sanokawa, Konomo

    1979-09-01

    For the cooling system of a stationary target for the fusion neutronics source (FNS), designed to meet the structural, thermal and hydraulic requirements, thermohydraulic experiments were made. In the heat transfer experiment, in place of an accelerator, electric-heater assemblies were used. The relation of head loss and heat transfer was obtained as a function of Reynolds number. The head loss was not large for flow rates up to 1.3 l/s. Neither vibration of the apparatus nor cavitation of water was observed even at the maximum flow rate. The heat load of 1 kW for the beam diameter of 15 mm, i.e. the requirement of FNS, could be removed by 0.2 l/s water flow, with the target-surface maximum temperature kept below 200 °C. Extrapolation of the experimental results showed that with the target system, the maximum heat load is 2.3 kW for the beam of diameter 15 mm. The value is sufficiently large compared with the heat load of FNS; with finned cooling surfaces, heat loads up to 3.7 kW may be removed. (author)
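
    A back-of-the-envelope check of the reported figures (assuming ordinary water properties, cp ≈ 4186 J/(kg·K) and ~1 kg per litre): the bulk coolant temperature rise follows from the heat balance Q = ṁ·cp·ΔT.

```python
CP_WATER = 4186.0  # specific heat of water [J/(kg K)], assumed
RHO_WATER = 1.0    # density [kg/l], assumed

def coolant_temp_rise(q_watts, flow_l_per_s):
    """Bulk temperature rise [K] of the coolant for heat load Q and flow rate."""
    m_dot = flow_l_per_s * RHO_WATER  # mass flow [kg/s]
    return q_watts / (m_dot * CP_WATER)
```

    At 1 kW and 0.2 l/s the bulk rise is only about 1.2 K, which is consistent with the abstract: the 200 °C surface limit is governed by the local heat flux at the small beam spot, not by bulk coolant heating.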

  20. The 'robustness' of vocabulary intervention in the public schools: targets and techniques employed in speech-language therapy.

    Science.gov (United States)

    Justice, Laura M; Schmitt, Mary Beth; Murphy, Kimberly A; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention, in terms of targets and techniques, for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of rich vocabulary intervention. Participants were forty-eight 5-7-year-old children in kindergarten or the first-grade year of school, all of whom had vocabulary-specific goals on their individualized education programmes. Two therapy sessions per child were coded to determine which vocabulary words were being directly targeted and which techniques were used for each. Study findings showed that the majority of words directly targeted during therapy were lower-level basic vocabulary words (87%) and very few (1%) were academically relevant. On average, three techniques were used per word to promote deep understanding. Interpreting the findings against empirical descriptions of rich vocabulary intervention indicates that children were exposed to some but not all aspects of this empirically supported practice.

  1. DAF-12 Regulates a Connected Network of Genes to Ensure Robust Developmental Decisions

    Science.gov (United States)

    Stuckenholz, Carsten; Labhart, Paul; Alexiadis, Vassili; Martin, René; Knölker, Hans-Joachim; Fisher, Alfred L.

    2011-01-01

    The nuclear receptor DAF-12 has roles in normal development, the decision to pursue dauer development in unfavorable conditions, and the modulation of adult aging. Despite the biologic importance of DAF-12, target genes for this receptor are largely unknown. To identify DAF-12 targets, we performed chromatin immunoprecipitation followed by hybridization to whole-genome tiling arrays. We identified 1,175 genomic regions to be bound in vivo by DAF-12, and these regions are enriched in known DAF-12 binding motifs and act as DAF-12 response elements in transfected cells and in transgenic worms. The DAF-12 target genes near these binding sites include an extensive network of interconnected heterochronic and microRNA genes. We also identify the genes encoding components of the miRISC, which is required for the control of target genes by microRNA, as a target of DAF-12 regulation. During reproductive development, many of these target genes are misregulated in daf-12(0) mutants, but this only infrequently results in developmental phenotypes. In contrast, we and others have found that null daf-12 mutations enhance the phenotypes of many miRISC and heterochronic target genes. We also find that environmental fluctuations significantly strengthen the weak heterochronic phenotypes of null daf-12 alleles. During diapause, DAF-12 represses the expression of many heterochronic and miRISC target genes, and prior work has demonstrated that dauer formation can suppress the heterochronic phenotypes of many of these target genes in post-dauer development. Together these data are consistent with daf-12 acting to ensure developmental robustness by committing the animal to adult or dauer developmental programs despite variable internal or external conditions. PMID:21814518

  2. Is MoS2 a robust material for 2D electronics?

    International Nuclear Information System (INIS)

    Lorenz, Tommy; Joswig, Jan-Ole; Seifert, Gotthard; Ghorbani-Asl, Mahdi; Heine, Thomas

    2014-01-01

    A nanoindentation computer experiment has been carried out by means of Born–Oppenheimer molecular-dynamics simulations employing the density-functional based tight-binding method. A free-standing MoS2 sheet, fixed at a circular support, was indented by a stiff, sharp tip. During this process, the strain on the nanolayer is locally different, with maximum values in the vicinity of the tip. All studied electronic properties—the band gap, the projected density of states, the atomic charges and the quantum conductance through the layer—vary only slightly before they change significantly when the MoS2 sheet finally is pierced. After strong local deformation due to the indentation process, the electronic conductance in our model still is 80% of its original value. Thus, the electronic structure of single-layer MoS2 is rather robust upon local deformation. (paper)

  3. Adaptive Critic Nonlinear Robust Control: A Survey.

    Science.gov (United States)

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    Adaptive dynamic programming (ADP) and reinforcement learning are closely related approaches to intelligent optimization. Both are regarded as promising methods built on the key components of evaluation and improvement, against the background of information technology such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies under uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design for continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey is intended to promote the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.

  4. A Robust Controller Structure for Pico-Satellite Applications

    DEFF Research Database (Denmark)

    Kragelund, Martin Nygaard; Green, Martin; Kristensen, Mads

    This paper describes the development of a robust controller structure for use in pico-satellite missions. The structure relies on unknown disturbance estimation and use of robust control theory to implement a system that is robust to both unmodeled disturbances and parameter uncertainties. As one...

  5. Multi-Objective Evaluation of Target Sets for Logistics Networks

    National Research Council Canada - National Science Library

    Emslie, Paul

    2000-01-01

    In the presence of many objectives, such as reducing maximum flow, lengthening routes, and avoiding collateral damage, all at minimal risk to our pilots, the problem of determining the best target set is complex...

  6. Estimating open population site occupancy from presence-absence data lacking the robust design.

    Science.gov (United States)

    Dail, D; Madsen, L

    2013-03-01

    Many animal monitoring studies seek to estimate the proportion of a study area occupied by a target population. The study area is divided into spatially distinct sites where the detected presence or absence of the population is recorded, and this is repeated in time for multiple seasons. However, occupied sites are detected with probability p < 1. MacKenzie et al. (2003, Ecology 84, 2200-2207) developed a multiseason model for estimating seasonal site occupancy (ψt) while accounting for unknown p. Their model performs well when observations are collected according to the robust design, where multiple sampling occasions occur during each season; the repeated sampling aids in the estimation of p. However, their model does not perform as well when the robust design is lacking. In this paper, we propose an alternative likelihood model that yields improved seasonal estimates of p and ψt in the absence of the robust design. We construct the marginal likelihood of the observed data by conditioning on, and summing out, the latent number of occupied sites during each season. A simulation study shows that in cases without the robust design, the proposed model estimates p with less bias than the MacKenzie et al. model and hence improves the estimates of ψt. We apply both models to a data set consisting of repeated presence-absence observations of American robins (Turdus migratorius) with yearly survey periods. The two models are compared to a third estimator available when the repeated counts (from the same study) are considered, with the proposed model yielding estimates of ψt closest to estimates from the point count model.
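
    To see why unknown detection probability matters, here is a sketch of the simpler single-season MacKenzie-style occupancy likelihood (an illustration of the general idea, not the multiseason model proposed in the paper): a site with a particular detection history of y detections in K visits contributes ψ·p^y·(1-p)^(K-y), plus (1-ψ) when the site was never detected, since an all-zero history can also come from an unoccupied site.

```python
import math

def site_likelihood(y, K, psi, p):
    """Likelihood of one site's detection history: y detections in K visits,
    occupancy probability psi, per-visit detection probability p."""
    term = psi * p**y * (1 - p) ** (K - y)
    if y == 0:
        term += 1 - psi  # the site may simply be unoccupied
    return term

def log_likelihood(data, psi, p):
    """data: list of (detections, visits) pairs, one per site."""
    return sum(math.log(site_likelihood(y, K, psi, p)) for y, K in data)
```

    Maximizing this over (ψ, p) jointly is what separates true absence from non-detection; ignoring p and using the naive fraction of sites with detections underestimates occupancy whenever p < 1.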

  7. THz-SAR Vibrating Target Imaging via the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Bin Deng

    2017-01-01

    Full Text Available Target vibration bears important information for target recognition, and terahertz radar, owing to significant micro-Doppler effects, has strong advantages for remotely sensing vibrations. In this paper, the imaging characteristics of vibrating targets with THz-SAR are first analyzed. An improved algorithm based on an excellent Bayesian approach, the expansion-compression variance-component (ExCoV) method, is proposed for reconstructing the scattering coefficients of vibrating targets; it provides more robust and efficient initialization and overcomes the deficiencies of sidelobes as well as artifacts arising from the traditional correlation method. A real vibration measurement experiment on idling cars was performed to validate the range model. Simulated SAR data of vibrating targets and a tank model in a real background at 220 GHz show good performance at low SNR. Rapidly evolving high-power terahertz devices will make THz-SAR viable at distances of several kilometers.

  8. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.
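
    The difference-of-convex (DC) algorithm the letter builds on can be shown on a toy problem rather than ER-SVM itself: minimize f(x) = x² - |x|, a difference of two convex functions. Each DCA iteration linearizes the concave part -|x| at the current point (via a subgradient of |x|) and minimizes the resulting convex upper bound, here in closed form. This sketch illustrates the DCA principle only; the paper's actual algorithm operates on the ER-SVM objective.

```python
def f(x):
    """Toy DC objective: g(x) - h(x) with g(x) = x^2 and h(x) = |x|."""
    return x * x - abs(x)

def dca(x0, iters=50):
    """DC algorithm: repeatedly minimize the convex surrogate x^2 - s*x,
    where s is a subgradient of h(x) = |x| at the current iterate."""
    x = x0
    for _ in range(iters):
        s = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
        x = s / 2.0  # argmin of x^2 - s*x
    return x
```

    From any nonzero start the iteration lands on a critical point x = ±0.5, where f attains its minimum of -0.25; like DCA in general, it guarantees a critical point, not a global minimum (starting exactly at x = 0 it stays at the saddle).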

  9. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and an optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back propagation algorithm. The mean square error of tracker output and target values is set to be of the order of 10^-5, and the learning process converges successfully in 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
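
    The tracker's structure can be sketched as a small radial-basis-function network forward pass mapping environmental inputs to an estimated maximum-power-point quantity. Everything below is a hypothetical placeholder (centers, width and weights are made up, and the cited paper trains a back-propagation network on 124 patterns rather than using these values); it only shows the shape of such an estimator.

```python
import math

# Hypothetical RBF centers over (solar radiation [W/m^2], cell temperature [degC])
CENTERS = [(200.0, 25.0), (600.0, 35.0), (1000.0, 45.0)]
WIDTH = 300.0                   # shared Gaussian width, assumed
WEIGHTS = [12.0, 15.0, 17.0]    # output-layer weights [V], assumed

def rbf_output(radiation, temperature):
    """Estimated maximum-power-point voltage: weighted sum of Gaussian basis
    responses to the (radiation, temperature) input."""
    phis = [
        math.exp(-((radiation - c1) ** 2 + (temperature - c2) ** 2) / (2 * WIDTH**2))
        for c1, c2 in CENTERS
    ]
    return sum(w * phi for w, phi in zip(WEIGHTS, phis))
```

    In a controller of this kind, the estimate then drives the chopper duty cycle so the array operates at the estimated maximum power point.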

  10. Robustness-related issues in speaker recognition

    CERN Document Server

    Zheng, Thomas Fang

    2017-01-01

    This book presents an overview of speaker recognition technologies with an emphasis on dealing with robustness issues. Firstly, the book gives an overview of speaker recognition, such as the basic system framework, categories under different criteria, performance evaluation and its development history. Secondly, with regard to robustness issues, the book presents three categories, including environment-related issues, speaker-related issues and application-oriented issues. For each category, the book describes the current hot topics, existing technologies, and potential research focuses in the future. The book is a useful reference and self-learning guide for early researchers working in the field of robust speaker recognition.

  11. Voyager 2 Neptune targeting strategy

    Science.gov (United States)

    Potts, C. L.; Francis, K.; Matousek, S. E.; Cesarone, R. J.; Gray, D. L.

    1989-01-01

    The success of the Voyager 2 flybys of Neptune and Triton depends upon the ability to correct the spacecraft's trajectory. Accurate spacecraft delivery to the desired encounter conditions will promote the maximum science return. However, Neptune's great distance causes large a priori uncertainties in Neptune and Triton ephemerides and planetary system parameters. Consequently, the 'ideal' trajectory is unknown beforehand. The targeting challenge is to utilize the gradually improving knowledge as the spacecraft approaches Neptune to meet the science objectives, but with an overriding concern for spacecraft safety and a desire to limit propellant expenditure. A unique targeting strategy has been developed in response to this challenge. Through the use of a Monte Carlo simulation, candidate strategies are evaluated by the degree to which they meet these objectives and are compared against each other in determining the targeting strategy to be adopted.

  12. Gene silencing in Tribolium castaneum as a tool for the targeted identification of candidate RNAi targets in crop pests.

    Science.gov (United States)

    Knorr, Eileen; Fishilevich, Elane; Tenbusch, Linda; Frey, Meghan L F; Rangasamy, Murugesan; Billion, Andre; Worden, Sarah E; Gandra, Premchand; Arora, Kanika; Lo, Wendy; Schulenberg, Greg; Valverde-Garcia, Pablo; Vilcinskas, Andreas; Narva, Kenneth E

    2018-02-01

    RNAi shows potential as an agricultural technology for insect control, yet, a relatively low number of robust lethal RNAi targets have been demonstrated to control insects of agricultural interest. In the current study, a selection of lethal RNAi target genes from the iBeetle (Tribolium castaneum) screen were used to demonstrate efficacy of orthologous targets in the economically important coleopteran pests Diabrotica virgifera virgifera and Meligethes aeneus. Transcript orthologs of 50 selected genes were analyzed in D. v. virgifera diet-based RNAi bioassays; 21 of these RNAi targets showed mortality and 36 showed growth inhibition. Low dose injection- and diet-based dsRNA assays in T. castaneum and D. v. virgifera, respectively, enabled the identification of the four highly potent RNAi target genes: Rop, dre4, ncm, and RpII140. Maize was genetically engineered to express dsRNA directed against these prioritized candidate target genes. T0 plants expressing Rop, dre4, or RpII140 RNA hairpins showed protection from D. v. virgifera larval feeding damage. dsRNA targeting Rop, dre4, ncm, and RpII140 in M. aeneus also caused high levels of mortality both by injection and feeding. In summary, high throughput systems for model organisms can be successfully used to identify potent RNA targets for difficult-to-work with agricultural insect pests.

  13. Energy deposition in a thin copper target downstream and off-axis of a proton-radiography target

    International Nuclear Information System (INIS)

    Greene, G.A.; Finfrock, C.C.; Snead, C.L.; Hanson, A.L.; Murray, M.M.

    2002-01-01

    A series of proton energy-deposition experiments was conducted to measure the energy deposited in a copper target located downstream and off-axis of a high-energy proton-radiography target. The proton/target interactions involved low-intensity bunches of protons at 24 GeV/c onto a spherical target consisting of concentric shells of tungsten and copper. The energy-deposition target was placed at five locations downstream of the proton-radiography target, off-axis of the primary beam transport, and was either unshielded or shielded by 5 or 10 cm of lead. Maximum temperature rises measured in the energy-deposition target due to single bunches of 5×10^10 protons on the proton-radiography target were approximately 20 mK per bunch. The data indicated that the scattered radiation was concentrated close to the primary transport axis of the beam line. The energy deposited in the energy-deposition target was reduced by moving the target radially away from the primary transport axis. Placing lead shielding in front of the target further reduced the energy deposition. The measured temperature rises of the energy-deposition target were empirically correlated with the distance from the source, the number of protons incident on the proton-radiography target, the thickness of the lead shielding, and the angle of the energy-deposition target off-axis of the beam line from the proton-radiography target. The correlation of the experimental data that was developed provides a starting point for the evaluation of the shielding requirements for devices downstream of proton-radiography targets such as superconducting magnets

  14. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  15. A game theory approach to target tracking in sensor networks.

    Science.gov (United States)

    Gu, Dongbing

    2011-02-01

    In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.
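
    The zero-sum idea above can be reduced to a toy matrix game (pure strategies only, enumerated on a small payoff matrix with illustrative values, not the paper's minimax filter): the tracker picks the estimate that minimizes the worst-case error the evading target can induce.

```python
def minimax(payoff):
    """payoff[i][j] = estimation error if the tracker plays strategy i and
    the target plays strategy j. Returns the tracker's minimax strategy and
    its guaranteed worst-case error."""
    best = min(range(len(payoff)), key=lambda i: max(payoff[i]))
    return best, max(payoff[best])
```

    For example, with payoff rows [3, 5], [4, 4] and [6, 1], the row maxima are 5, 4 and 6, so the tracker chooses the second strategy and caps its error at 4 no matter what the target does; the minimax filter applies the same worst-case reasoning to the estimation-error dynamics.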

  16. Cooperative Robots to Observe Moving Targets: Review.

    Science.gov (United States)

    Khan, Asif; Rinner, Bernhard; Cavallaro, Andrea

    2018-01-01

    The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets; cooperative search, acquisition, and track; cooperative tracking; and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot, and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is also useful for related multirobot applications.

  17. Robustness analysis of chiller sequencing control

    International Nuclear Information System (INIS)

    Liao, Yundan; Sun, Yongjun; Huang, Gongsheng

    2015-01-01

    Highlights: • Uncertainties with chiller sequencing control were systematically quantified. • Robustness of chiller sequencing control was systematically analyzed. • Different sequencing control strategies were sensitive to different uncertainties. • A numerical method was developed for easy selection of chiller sequencing control. - Abstract: Multiple-chiller plants are commonly employed in heating, ventilating and air-conditioning systems to increase operational flexibility and energy efficiency under part-load conditions. In a multiple-chiller plant, chiller sequencing control plays a key role in achieving overall energy efficiency without sacrificing the cooling sufficiency needed for indoor thermal comfort. Various sequencing control strategies have been developed and implemented in practice. Two observations motivate this work: (i) uncertainty, which cannot be avoided in chiller sequencing control, has a significant impact on the control performance and may cause the control to fail to achieve the expected control and/or energy performance; and (ii) few studies in the current literature have systematically addressed this issue. This paper therefore presents a robustness analysis of chiller sequencing control, in order to understand the robustness of various chiller sequencing control strategies under different types of uncertainty. Based on the robustness analysis, a simple and applicable method is developed to select the most robust control strategy for a given chiller plant in the presence of uncertainties, which is verified using case studies.
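    A minimal total-cooling-load sequencing rule makes the robustness issue concrete (the staging rule, capacities, and error magnitude below are illustrative assumptions, not the paper's strategies): near a staging threshold, a modest load-measurement error flips the number of chillers staged on.

    ```python
    # Toy total-cooling-load sequencing for a plant of identical chillers, and
    # its sensitivity to load-measurement uncertainty. All numbers are made up.
    import math

    def chillers_needed(load_kw, capacity_kw, stage_up_ratio=0.9):
        """Stage on the fewest chillers that keep each running unit
        below stage_up_ratio of its rated capacity."""
        return max(math.ceil(load_kw / (stage_up_ratio * capacity_kw)), 1)

    true_load = 1700.0           # kW, actual plant load
    measured = true_load * 0.9   # the same load seen through a -10% sensor error
    n_true = chillers_needed(true_load, capacity_kw=600)   # 4 chillers
    n_meas = chillers_needed(measured, capacity_kw=600)    # 3 chillers
    ```

    The mismatch between `n_true` and `n_meas` is exactly the kind of uncertainty-induced control failure the robustness analysis quantifies.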

  18. Inferring phylogenetic networks by the maximum parsimony criterion: a case study.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-01

    Horizontal gene transfer (HGT) may result in genes whose evolutionary histories disagree with each other, as well as with the species tree. In this case, reconciling the species and gene trees results in a network of relationships, known as the "phylogenetic network" of the set of species. A phylogenetic network that incorporates HGT consists of an underlying species tree that captures vertical inheritance and a set of edges which model the "horizontal" transfer of genetic material. In a series of papers, Nakhleh and colleagues have recently formulated a maximum parsimony (MP) criterion for phylogenetic networks, provided an array of computationally efficient algorithms and heuristics for computing it, and demonstrated its plausibility on simulated data. In this article, we study the performance and robustness of this criterion on biological data. Our findings indicate that MP is very promising when its application is extended to the domain of phylogenetic network reconstruction and HGT detection. In all cases we investigated, the MP criterion detected the correct number of HGT events required to map the evolutionary history of a gene data set onto the species phylogeny. Furthermore, our results indicate that the criterion is robust with respect to both incomplete taxon sampling and the use of different site substitution matrices. Finally, our results show that the MP criterion is very promising in detecting HGT in chimeric genes, whose evolutionary histories are a mix of vertical and horizontal evolution. Beyond the performance analysis of MP, our findings offer new insights into the evolution of four biological data sets and new possible explanations of HGT scenarios in their evolutionary history.
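    The network MP criterion generalizes the classic small-parsimony count on a tree. As a minimal point of reference (the tree base case only, with a made-up tree and character states, not the paper's network algorithms), Fitch's algorithm counts the substitutions a single character forces on a rooted binary tree:

    ```python
    # Small-parsimony cost of one character on a rooted binary species tree via
    # Fitch's algorithm -- the tree base case that the network MP criterion
    # extends. Tree topology and leaf states below are illustrative examples.

    def fitch_cost(node, states):
        """Bottom-up pass returning (candidate state set, substitution count).
        Leaves are strings; internal nodes are (left, right) tuples."""
        if isinstance(node, str):
            return {states[node]}, 0
        (ls, lc), (rs, rc) = fitch_cost(node[0], states), fitch_cost(node[1], states)
        common = ls & rs
        if common:                       # children agree: no extra substitution
            return common, lc + rc
        return ls | rs, lc + rc + 1      # disagreement forces one substitution

    tree = (("A", "B"), ("C", "D"))
    cost = fitch_cost(tree, {"A": "G", "B": "G", "C": "T", "D": "G"})[1]  # 1
    ```

    Summing this cost over all sites and minimizing over tree topologies gives tree MP; the network criterion additionally minimizes over the trees induced by the network's horizontal edges.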

  19. Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains

    Science.gov (United States)

    Zhang, Xiong-Peng; Shao, Bin; Zou, Jian

    2017-05-01

    Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (the Krotov algorithm, which has been shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses that reach the target fidelity. We find that the minimum time for preparing the target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained optimal fidelities and the effectiveness of the Krotov method under realistic conditions.

  20. Attractive ellipsoids in robust control

    CERN Document Server

    Poznyak, Alexander; Azhmyakov, Vadim

    2014-01-01

    This monograph introduces a newly developed robust-control design technique for a wide class of continuous-time dynamical systems called the “attractive ellipsoid method.” Along with a coherent introduction to the proposed control design and related topics, the monograph studies nonlinear affine control systems in the presence of uncertainty and presents a constructive and easily implementable control strategy that guarantees certain stability properties. The authors discuss linear-style feedback control synthesis in the context of the above-mentioned systems. The development and physical implementation of high-performance robust-feedback controllers that work in the absence of complete information is addressed, with numerous examples to illustrate how to apply the attractive ellipsoid method to mechanical and electromechanical systems. While theorems are proved systematically, the emphasis is on understanding and applying the theory to real-world situations. Attractive Ellipsoids in Robust Control will a...