WorldWideScience

Sample records for cuny algorithms background

  1. In situ observation of Cu-Ni alloy nanoparticle formation by X-ray diffraction, X-ray absorption spectroscopy, and transmission electron microscopy: Influence of Cu/Ni ratio

    DEFF Research Database (Denmark)

    Wu, Qiongxiao; Duchstein, Linus Daniel Leonhard; Chiarello, Gian Luca

    2014-01-01

Silica-supported, bimetallic Cu-Ni nanomaterials were prepared with different ratios of Cu to Ni by incipient wetness impregnation without a specific calcination step before reduction. Different in situ characterization techniques, in particular transmission electron microscopy (TEM), X-ray diffraction (XRD), and X-ray absorption spectroscopy (XAS), were applied to follow the reduction and alloying process of Cu-Ni nanoparticles on silica. In situ reduction of Cu-Ni samples with structural characterization by combined synchrotron XRD and XAS reveals a strong interaction between Cu and Ni species, which results in improved reducibility of the Ni species compared with monometallic Ni. At high Ni concentrations silica-supported Cu-Ni alloys form a homogeneous solid solution of Cu and Ni, whereas at lower Ni contents Cu and Ni are partly segregated and form metallic Cu and Cu-Ni alloy phases. Under …

  2. Mechanical properties of highly textured Cu/Ni multilayers

    International Nuclear Information System (INIS)

    Liu, Y.; Bufford, D.; Wang, H.; Sun, C.; Zhang, X.

    2011-01-01

We report on the synthesis of highly (1 1 1)- and (1 0 0)-textured Cu/Ni multilayers with individual layer thicknesses, h, varying from 1 to 200 nm. When h decreases to 5 nm or less, X-ray diffraction spectra show epitaxial growth of the Cu/Ni multilayers. High-resolution transmission electron microscopy studies show the coexistence of nanotwins and coherent layer interfaces in highly (1 1 1)-textured Cu/Ni multilayers with smaller h. The hardness of the multilayer films increases with decreasing h, approaches a maximum at h of a few nanometers, and shows softening thereafter at smaller h. The influence of layer interfaces as well as twin interfaces on the strengthening mechanisms of the multilayers, and the formation of twins in Ni in the multilayers, are discussed.

  3. Enhanced Oxidation-Resistant Cu@Ni Core-Shell Nanoparticles for Printed Flexible Electrodes.

    Science.gov (United States)

    Kim, Tae Gon; Park, Hye Jin; Woo, Kyoohee; Jeong, Sunho; Choi, Youngmin; Lee, Su Yeon

    2018-01-10

In this work, the fabrication and application of highly conductive, robust, flexible, and oxidation-resistant Cu-Ni core-shell nanoparticle (NP)-based electrodes are reported. Cu@Ni core-shell NPs with a tunable Ni shell thickness were synthesized by varying the Cu/Ni molar ratios in the precursor solution. Through continuous spray coating and flash photonic sintering without an inert atmosphere, large-area Cu@Ni NP-based conductors were fabricated on various polymer substrates. These NP-based electrodes demonstrate a low sheet resistance of 1.3 Ω sq⁻¹ under an optical energy dose of 1.5 J cm⁻². In addition, they exhibit highly stable sheet resistances (ΔR/R₀ …). A flexible heater fabricated from the Cu@Ni film is demonstrated, which shows uniform heat distribution and stable temperature compared to those of a pure Cu film.

  4. The New Community College at CUNY and the Common Good

    Science.gov (United States)

    Rosenthal, Bill; Schnee, Emily

    2013-01-01

    On a prime site in Manhattan, a block from the lions guarding the New York Public Library, the City University of New York (CUNY) opened its newest community college in the fall of 2012. Designed to achieve greater student success, as measured through increased graduation rates, the New Community College at CUNY (NCC) is seen as a beacon of hope…

  5. Study on the characteristics of the impingement erosion-corrosion for Cu-Ni alloy sprayed coating (I)

    International Nuclear Information System (INIS)

    Lee, Sang Yeol; Lim, Uh Joh; Yun, Byoung Du

    1998-01-01

Impingement erosion-corrosion tests and electrochemical corrosion tests were carried out in tap water (5000 Ω·cm) and seawater (25 Ω·cm) on a Cu-Ni alloy thermally spray-coated onto carbon steel. The impingement erosion-corrosion behavior and electrochemical corrosion characteristics of the substrate (SS41) and the Cu-Ni thermal spray coating were investigated, and the erosion-corrosion control efficiency of the Cu-Ni coating relative to the substrate was also estimated quantitatively. The main results are as follows: 1) Under a flow velocity of 13 m/s, impingement erosion-corrosion of the Cu-Ni coating is controlled by electrochemical corrosion rather than by mechanical erosion. 2) The corrosion potential of the Cu-Ni coating is more noble than that of the substrate, and the current density of the Cu-Ni coating at the corrosion potential is lower than that of the substrate. 3) The erosion-corrosion control efficiency of the Cu-Ni coating relative to the substrate is excellent in tap water, a solution of high specific resistance, but deteriorates in seawater, which has low specific resistance. 4) The corrosion control efficiency of the Cu-Ni coating relative to the substrate appears to be higher in seawater than in tap water.

  6. Fabrication of a Cu/Ni stack in supercritical carbon dioxide at low-temperature

    Energy Technology Data Exchange (ETDEWEB)

    Rasadujjaman, Md, E-mail: rasadphy@duet.ac.bd [Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu, Yamanashi 400-8511 (Japan); Department of Physics, Dhaka University of Engineering & Technology, Gazipur 1700 (Bangladesh); Watanabe, Mitsuhiro [Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu, Yamanashi 400-8511 (Japan); Sudoh, Hiroshi; Machida, Hideaki [Gas-Phase Growth Ltd., 2-24-16 Naka, Koganei, Tokyo 184-0012 (Japan); Kondoh, Eiichi [Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu, Yamanashi 400-8511 (Japan)

    2015-09-30

We report the low-temperature deposition of Cu on a Ni-lined substrate in supercritical carbon dioxide. A novel Cu(I) amidinate precursor was used to reduce the deposition temperature. From the temperature dependence of the growth rate, the activation energy for Cu growth on the Ni film was determined to be 0.19 eV. The films and interfaces were characterized by Auger electron spectroscopy. At low temperature (140 °C), we successfully deposited a Cu/Ni stack with a sharp Cu/Ni interface. The stack had a high adhesion strength (> 1000 mN) according to microscratch testing. The high adhesion strength originated from strong interfacial bonding between the Cu and the Ni. However, at a higher temperature (240 °C), significant interdiffusion was observed and the adhesion became weak. - Highlights: • Cu/Ni stack fabricated in supercritical CO₂ at low temperature. • A novel Cu(I) amidinate precursor was used to reduce the deposition temperature. • Adhesion strength of the Cu/Ni stack improved dramatically. • The fabricated Cu/Ni stack is suitable for Cu interconnections in microelectronics.
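
The activation-energy determination described above follows a standard Arrhenius analysis: for a thermally activated process with rate ∝ exp(-Ea/kBT), Ea is obtained from the slope of ln(rate) versus 1/T. The following is a minimal illustrative sketch of such a fit; the data points and function name are hypothetical, not taken from the paper.

```python
# Illustrative Arrhenius fit: ln(rate) = ln(A) - Ea/(kB*T), so the slope of
# ln(rate) vs. 1/T equals -Ea/kB. The data points below are invented so that
# the fit returns roughly the 0.19 eV reported in the record above.
import numpy as np

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_k, rates):
    """Return Ea in eV from temperatures (K) and growth rates (arb. units)."""
    slope, _intercept = np.polyfit(1.0 / np.asarray(temps_k), np.log(rates), 1)
    return -slope * KB_EV

temps = [413.0, 433.0, 453.0, 473.0]  # hypothetical deposition temperatures, K
rates = [1.00, 1.28, 1.60, 1.97]      # hypothetical relative growth rates
print(f"Ea ≈ {activation_energy(temps, rates):.2f} eV")  # ≈ 0.19 eV
```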

  7. Copper and CuNi alloys substrates for HTS coated conductor applications protected from oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Segarra, M; Diaz, J; Xuriguera, H; Chimenos, J M; Espiell, F [Dept. of Chemical Engineering and Metallurgy, Univ. of Barcelona, Barcelona (Spain); Miralles, L [Lab. d' Investigacio en Formacions Geologiques. Dept. of Petrology, Geochemistry and Geological Prospecting, Univ. of Barcelona, Barcelona (Spain); Pinol, S [Inst. de Ciencia de Materials de Barcelona, Bellaterra (Spain)

    2003-07-01

Copper is an interesting substrate for HTS coated conductors owing to its low cost compared with other metallic substrates and its low resistivity. Nevertheless, its mechanical properties and oxidation resistance must be improved before it can be used as a substrate for YBCO deposition by non-vacuum techniques. Therefore, different cube-textured CuNi tapes were prepared by RABIT as possible substrates for the deposition of high-critical-current-density YBCO films. Under the optimised conditions of deformation and annealing, all the studied CuNi alloys (2%, 5%, and 10% Ni) presented a (100)⟨001⟩ cube texture, which is compatible with YBCO deposition. The textured CuNi alloys present higher tensile strength than pure copper. The oxidation resistance of CuNi tapes under different oxygen atmospheres was also studied by thermogravimetric analysis and compared with that of pure copper tapes. Although the presence of nickel improves the mechanical properties of annealed copper, it does not improve its oxidation resistance. However, when a chromium buffer layer is electrodeposited on the tape, oxygen diffusion is slowed down. Chromium is therefore useful for protecting copper and CuNi alloys from oxidation, although its recrystallisation texture, (110), is not suitable for coated conductors. (orig.)

  8. Thermoelasticity and interdiffusion in CuNi multilayers

    International Nuclear Information System (INIS)

    Benoudia, M.C.; Gao, F.; Roussel, J.M.; Labat, S.; Gailhanou, M.; Thomas, O.; Beke, D.L.; Erdelyi, Z.; Langer, G.A.; Csik, A.; Kis-Varga, M.

    2012-01-01

The idea of observing artificial metallic multilayers with X-ray diffraction techniques to study interdiffusion phenomena dates back to the work of DuMond and Youtz. Interestingly, these pioneering contributions even suggested that the approach could be used to measure the concentration dependence of the diffusion coefficient. This remark is precisely the subject of the present work: we aim to revisit this issue in light of recent atomistic simulation results obtained for coherent CuNi multilayers. More generally, CuNi multilayers have been extensively studied for their magnetic, mechanical, and optical properties. These physical properties depend critically on interfaces and require good control of the evolution of composition and strain fields under heat treatment. Understanding how interdiffusion proceeds in these nanosystems should therefore improve these practical aspects. From a theoretical viewpoint, these synthetic modulated structures have also been used as valuable model systems to test the various diffusion theories accounting in particular for the influence of the alloying energy, the coherency strain, and the local concentration. Nowadays, this field remains active and has been extended with the development of atomic simulations and many microscopy techniques, like atom probe tomography, which give details of the intermixing mechanisms. We have performed X-ray diffraction experiments on coherent CuNi multilayers to probe thermoelasticity and interdiffusion in these samples. Kinetic mean-field simulations combined with modeling of the X-ray spectra were also performed to rationalize the experimental results. We have shown that classical thermoelastic arguments combined with bulk data can be used to model the X-ray scattered intensity of annealed coherent CuNi multilayers. This result provides a valuable framework to analyze the evolution of the concentration profiles at higher temperature. The typical coherent …

  9. Correlation of plastic deformation induced intermittent electromagnetic radiation characteristics with mechanical properties of Cu-Ni alloys

    International Nuclear Information System (INIS)

    Singh, Ranjana; Lal, Shree P.; Misra, Ashok

    2015-01-01

This paper presents experimental results on intermittent electromagnetic radiation during plastic deformation of Cu-Ni alloys under tension and compression. On the basis of the nature of the electromagnetic radiation signals, oscillatory or exponential, the results show that compression increases the viscous coefficient of Cu-Ni alloys during plastic deformation. Increasing the percentage of solute atoms in Cu-Ni alloys increases the electromagnetic radiation strength under tension. Electromagnetic radiation emission occurs at smaller strains under compression, indicating an early onset of plastic deformation; this is attributed to the role of high tensile residual stresses in the core region of the rolled Cu-Ni alloy specimens, in accordance with the Bauschinger effect. The distance between the apexes of the dead metal cones during compression plays a significant role in the electromagnetic radiation parameters. The dissociation of edge dislocations into partials and the increase in internal stresses with increasing solute percentage in Cu-Ni alloys under compression considerably influence the electromagnetic radiation frequency.

  10. CuNi Nanoparticles Assembled on Graphene for Catalytic Methanolysis of Ammonia Borane and Hydrogenation of Nitro/Nitrile Compounds

    International Nuclear Information System (INIS)

    Yu, Chao

    2017-01-01

Here we report a solution-phase synthesis of 16 nm CuNi nanoparticles (NPs) with control of the Cu/Ni composition. These NPs are assembled on graphene (G) and show Cu/Ni composition-dependent catalysis for methanolysis of ammonia borane (AB) and hydrogenation of aromatic nitro (nitrile) compounds to primary amines in methanol at room temperature. Among the five different CuNi NPs studied, the G-Cu₃₆Ni₆₄ NPs are the best catalyst for both AB methanolysis (TOF = 49.1 mol H₂ mol CuNi⁻¹ min⁻¹ and Eₐ = 24.4 kJ/mol) and hydrogenation reactions (conversion yield >97%). In conclusion, G-CuNi represents a unique noble-metal-free catalyst for hydrogenation reactions in a green environment without using pure hydrogen.

  11. A diffuse neutron scattering study of clustering kinetics in Cu-Ni alloys

    International Nuclear Information System (INIS)

    Vrijen, J.; Radelaar, S.; Schwahn, D.

    1977-01-01

Diffuse scattering of thermal neutrons was used to investigate the kinetics of clustering in Cu-Ni alloys. To optimize the experimental conditions, the isotopes ⁶⁵Cu and ⁶²Ni were alloyed. The time evolution of the diffuse scattered intensity at 400 °C was measured for eight Cu-Ni alloys varying in composition between 30 and 80 at.% Ni. The relaxation of the so-called null matrix, containing 56.5 at.% Ni, was also investigated at 320, 340, 425 and 450 °C. Using Cook's model, information was deduced from all these measurements about diffusion at low temperatures and about the thermodynamic properties of the Cu-Ni system. It turns out that Cook's model is not sufficiently detailed for an accurate description of the initial stages of these relaxations.

  12. Low temperature interdiffusion in Cu/Ni thin films

    International Nuclear Information System (INIS)

    Lefakis, H.; Cain, J.F.; Ho, P.S.

    1983-01-01

Interdiffusion in Cu/Ni thin films was studied by means of Auger electron spectroscopy in conjunction with Ar⁺ ion sputter profiling. The experimental conditions were chosen to simulate those of typical chip-packaging fabrication processes. The Cu/Ni couple (from 10 μm to 60 nm thick) was produced by sequential vapor deposition on fused-silica substrates at 360, 280 and 25 °C in a 10⁻⁶ Torr vacuum. Diffusion anneals were performed between 280 and 405 °C for times up to 20 min. Such conditions correspond to grain boundary diffusion in the regimes of B- and C-type kinetics. The data were analyzed according to the Whipple-Suzuoka model. Some deviations from the assumptions of this model, as occurred in the present study, are discussed but cannot fully account for the typical data scatter. The grain boundary diffusion coefficients were determined, allowing calculation of the respective permeation distances. (Auth.)

  13. DO₂₂-(Cu,Ni)₃Sn intermetallic compound nanolayer formed in Cu/Sn-nanolayer/Ni structures

    International Nuclear Information System (INIS)

    Liu Lilin; Huang, Haiyou; Fu Ran; Liu Deming; Zhang Tongyi

    2009-01-01

The present work conducts crystal characterization by high-resolution transmission electron microscopy (HRTEM) of Cu/Sn-nanolayer/Ni sandwich structures, combined with energy-dispersive X-ray (EDX) analysis. The results show that a DO₂₂-(Cu,Ni)₃Sn intermetallic compound (IMC) ordered structure is formed in the sandwich structures in the as-electrodeposited state. The formed DO₂₂-(Cu,Ni)₃Sn IMC is a homogeneous layer with a thickness of about 10 nm. The DO₂₂-(Cu,Ni)₃Sn IMC nanolayer is stable during annealing at 250 °C for 810 min. The formation and stabilization of the metastable DO₂₂-(Cu,Ni)₃Sn IMC nanolayer are attributed to the lower strain energy induced by lattice mismatch between the DO₂₂ IMC and fcc Cu crystals in comparison with that between the equilibrium DO₃ IMC and fcc Cu crystals.

  14. Background for the research and subsequent developments in the research program

    International Nuclear Information System (INIS)

This report summarizes the historical background for the research and its subsequent development. Various aspects of the research were supported by the USAEC, ERDA, CCNY, CUNY, MHMC, and personally by the principal investigator.

  15. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map captured by an APS detector contains several noise sources that strongly affect the accuracy of the centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for star point centroid calculation based on background forecast is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the centroid calculation but also requires no calibration data memory. The algorithm has been applied successfully in a certain star tracker.
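
The record above describes the general approach (forecast the local background, then take an intensity-weighted centroid) rather than its exact implementation. Purely as a rough illustration, a windowed centroid with a border-median background forecast might look like the following; the window size and the border-median estimator are assumptions, not the paper's method.

```python
# Illustrative background-forecast centroid: the local background around a
# star window is estimated from the window's border pixels and subtracted
# before the intensity-weighted centroid is computed.
import numpy as np

def star_centroid(image: np.ndarray, row: int, col: int, half: int = 5):
    """Return (y, x) centroid of the star near (row, col).

    Assumes the window lies fully inside the image.
    """
    win = image[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    border = np.concatenate([win[0, :], win[-1, :], win[1:-1, 0], win[1:-1, -1]])
    background = np.median(border)               # forecast of local background
    signal = np.clip(win - background, 0, None)  # suppress pixels below forecast
    total = signal.sum()
    if total == 0:
        return float(row), float(col)            # no star signal above background
    ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
    cy = (ys * signal).sum() / total
    cx = (xs * signal).sum() / total
    return row - half + cy, col - half + cx
```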

  16. The CUNY Fatherhood Academy: A Qualitative Evaluation. Research Report

    Science.gov (United States)

    McDaniel, Marla; Simms, Margaret C.; Monson, William; de Leon, Erwin

    2015-01-01

    Knowing the economic challenges young fathers without postsecondary education face in providing for their families, New York City's Young Men's Initiative launched a fatherhood program housed in LaGuardia Community College in spring 2012. The CUNY Fatherhood Academy (CFA) aims to connect young fathers to academic and employment opportunities while…

  17. The CUNY Fatherhood Academy: A Qualitative Evaluation. Executive Summary

    Science.gov (United States)

    McDaniel, Marla; Simms, Margaret C.; Monson, William; de Leon, Erwin

    2015-01-01

    Knowing the economic challenges young fathers without postsecondary education face in providing for their families, New York City's Young Men's Initiative launched a fatherhood program housed in LaGuardia Community College in spring 2012. The CUNY Fatherhood Academy (CFA) aims to connect young fathers to academic and employment opportunities while…

  18. Cu-Ni nanowire-based TiO₂ hybrid for the dynamic photodegradation of acetaldehyde gas pollutant under visible light

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Shuying [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China); University of Chinese Academy of Sciences, 19 Yuquan Road, Beijing 100049 (China); Xie, Xiaofeng, E-mail: xxfshcn@163.com [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China); Chen, Sheng-Chieh [College of Science and Engineering, University of Minnesota, Minneapolis, MN 55455 (United States); Tong, Shengrui [Institute of Chemistry, Chinese Academy of Sciences, Beijing 100190 (China); Lu, Guanhong [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China); Pui, David Y.H. [College of Science and Engineering, University of Minnesota, Minneapolis, MN 55455 (United States); Sun, Jing [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China)

    2017-06-30

Graphical abstract: One-dimensional Cu-Ni bimetallic nanowires were introduced into a TiO₂-based matrix to enhance its photocatalytic efficiency and expand its light absorption range. - Highlights: • Cu-Ni nanowire-based TiO₂ hybrid photocatalyst. • One-dimensional electron pathways and surface plasmon resonance effects. • Dynamic photodegradation of acetaldehyde gas pollutant. - Abstract: In this work, one-dimensional bimetallic nanowires were introduced into a TiO₂-based matrix to enhance its photocatalytic efficiency and expand its light absorption range. Recently, metal nanowires have attracted much attention in photocatalysis research because of their favorable electron transport properties and especially their surface plasmon resonance effects. Moreover, Cu-Ni bimetallic nanowires (Cu-Ni NWs) have shown better chemical stability than ordinary monometallic nanowires in our recent work. Interestingly, it has been found that the Ni sleeves of the bimetallic nanowires can also modify the Schottky barrier at the interface between TiO₂ and the metallic conductor, which benefits the separation of photogenerated carriers in the Cu-Ni/TiO₂ network topology. Hence, a novel heterostructured photocatalyst composed of Cu-Ni NWs and TiO₂ nanoparticles (NPs) was fabricated by a one-step hydrolysis approach to explore its photocatalytic performance. TEM and EDX mapping images of this TiO₂ NPs @Cu-Ni NWs (TCN) hybrid showed that the Cu-Ni NWs were wrapped by a compact TiO₂ layer and retained their one-dimensional structure in the matrix. In experiments, the photocatalytic performance of the TCN nanocomposite was significantly enhanced compared to pure TiO₂. Acetaldehyde, a common gas pollutant in the environment, was employed to evaluate the photodegradation efficiency of a series of TCN nanocomposites under continuous feeding. The TCN exhibited excellent photodegradation performance, where the …

  19. Solution-Based Epitaxial Growth of Magnetically Responsive Cu@Ni Nanowires

    KAUST Repository

    Zhang, Shengmao; Zeng, Hua Chun

    2010-01-01

An experiment was conducted to show the solution-based epitaxial growth of magnetically responsive Cu@Ni nanowires. The Ni-sheathed Cu nanowires were synthesized with a one-pot approach: 30 mL of high-concentration NaOH, Cu(NO₃)₂·3H₂O, and 0.07-0.30 mL of Ni(NO₃)₂·6H₂O aqueous solutions were added into a plastic reactor with a capacity of 50.0 mL. Varying amounts of ethylenediamine (EDA) and hydrazine were also added sequentially, followed by thorough mixing of all reagents. The dimension, morphology, and chemical composition of the products were examined by scanning electron microscopy with energy-dispersive X-ray spectroscopy. XPS analysis of the as-formed Cu nanowires confirms that there is indeed no nickel inclusion in the nanowires prior to the formation of the nickel overcoat, which rules out the possibility of Cu-Ni alloy formation.

  1. 75 FR 62838 - Award of a Single-Source Expansion Supplement to the Research Foundation of CUNY on Behalf of...

    Science.gov (United States)

    2010-10-13

    ...-Source Expansion Supplement to the Research Foundation of CUNY on Behalf of Hunter College School of... single-source program expansion supplement to the Research Foundation of CUNY on behalf of Hunter College... removal, of the relative's options to become a placement resource for the child. The supplemental funding...

  2. Investigation of optical properties of Cu/Ni multilayer nanowires embedded in etched ion-track template

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Lu [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Graduate School of the Chinese Academy of Sciences, Beijing 100049 (China); Yao, Huijun, E-mail: Yaohuijun@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Duan, Jinglai; Chen, Yonghui [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Lyu, Shuangbao [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Graduate School of the Chinese Academy of Sciences, Beijing 100049 (China); Maaz, Khan [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Nanomaterials Research Group, Physics Division, PINSTECH, Nilore 45650, Islamabad (Pakistan); Mo, Dan [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Liu, Jie, E-mail: J.Liu@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Sun, Youmei; Hou, Mingdong [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China)

    2016-12-01

Graphical abstract: Schematic diagram of the measurement of extinction spectra of Cu/Ni multilayer nanowire arrays embedded in the template after removal of the gold/copper substrate. - Highlights: • The optical properties of Cu/Ni multilayer nanowire arrays were investigated for the first time by UV/Vis/NIR spectrometry, confirming that the extinction peaks are strongly related to the periodicity of the multilayer nanowires. • The Ni segment was treated as a kind of impurity which can change the surface electron distribution and thereby the extinction peaks of the nanowires. • The present work supplies clear layer-thickness information for Cu and Ni in Cu/Ni multilayer nanowires from TEM and EDS line-scan profile analysis. - Abstract: To understand the interaction between light and noble-metal/magnetic multilayer nanowires, Cu/Ni multilayer nanowires were fabricated by a multi-potential-step deposition technique in an etched ion-track polycarbonate template. The composition and the corresponding layer thicknesses of the multilayer nanowires were confirmed by TEM and EDS line-scan analysis. By tailoring the nanowire diameter, the Cu layer thickness and the periodicity of the nanowires, the extinction spectra of the nanowire arrays exhibit an extra sensitivity to changes in the structural parameters. The resonance wavelength caused by surface plasmon resonance increases markedly with increasing nanowire diameter, Cu layer thickness and periodicity. The observations in our work can be explained by the "impurity effect" and a coupling effect, and can be optimized for developing optical devices based on multilayer nanowires.

  3. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

Computers and computerized machines have penetrated virtually all aspects of our lives, which raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which cannot keep pace with the latest technology. Hand gestures have become one of the most attractive alternatives to these traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using the average background algorithm and using the 1$ algorithm for matching the hand's template. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a small recognition time under different light changes, scales, rotations, and backgrounds.
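
The abstract names an average-background algorithm for segmentation but gives no implementation details. In the spirit of that step, a minimal running-average background subtractor could be sketched as follows; the update rate alpha and the threshold are illustrative assumptions, not the paper's values.

```python
# Illustrative running-average background subtraction: a slowly updated
# average models the static scene, and each frame's absolute difference
# against it is thresholded to segment the moving hand.
import numpy as np

def make_background_subtractor(alpha: float = 0.05, thresh: float = 25.0):
    state = {"bg": None}  # running-average background, initialized lazily

    def apply(frame_gray: np.ndarray) -> np.ndarray:
        frame = frame_gray.astype(float)
        if state["bg"] is None:
            state["bg"] = frame.copy()
        # Foreground = pixels far from the running-average background.
        mask = np.abs(frame - state["bg"]) > thresh
        # Update the average only where the scene currently looks static.
        state["bg"][~mask] = (1 - alpha) * state["bg"][~mask] + alpha * frame[~mask]
        return mask.astype(np.uint8) * 255

    return apply
```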

  4. A study of the annealing and mechanical behaviour of electrodeposited Cu-Ni multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Pickup, C.J.

    1997-08-01

The mechanical strength of electrodeposited Cu-Ni multilayers is known to vary with the deposition wavelength. Since layered coatings are harder and more resistant to wear and abrasion than non-layered coatings, this technique is of industrial interest. Optimisation of the process requires a better understanding of the strengthening mechanisms and of the microstructural changes which affect them. The work presented in this thesis covers the characterisation of a series of Cu-Ni multilayers, spanning a wide range of individual layer thicknesses, using X-ray diffraction, cross-section TEM, hardness testing and tensile testing. Further, the effects of high-temperature annealing on interdiffusion and on changes in internal stresses are documented. (au). 176 refs.

  5. A study of the annealing and mechanical behaviour of electrodeposited Cu-Ni multilayers

    International Nuclear Information System (INIS)

    Pickup, C.J.

    1997-08-01

The mechanical strength of electrodeposited Cu-Ni multilayers is known to vary with the deposition wavelength. Since layered coatings are harder and more resistant to wear and abrasion than non-layered coatings, this technique is of industrial interest. Optimisation of the process requires a better understanding of the strengthening mechanisms and of the microstructural changes which affect them. The work presented in this thesis covers the characterisation of a series of Cu-Ni multilayers, spanning a wide range of individual layer thicknesses, using X-ray diffraction, cross-section TEM, hardness testing and tensile testing. Further, the effects of high-temperature annealing on interdiffusion and on changes in internal stresses are documented. (au)

  6. The Effect of Surfactant Content over Cu-Ni Coatings Electroplated by the sc-CO₂ Technique.

    Science.gov (United States)

    Chuang, Ho-Chiao; Sánchez, Jorge; Cheng, Hsiang-Yun

    2017-04-19

Co-plating of Cu-Ni coatings by supercritical CO₂ (sc-CO₂) and conventional electroplating processes was studied in this work. 1,4-Butynediol was chosen as the surfactant, and the effects of adjusting the surfactant content are described. Although the sc-CO₂ process displayed lower current efficiency, it effectively removed the excess hydrogen that causes defects on the coating surface, refined the grain size, reduced surface roughness, and increased electrochemical resistance. The surface roughness of coatings fabricated by the sc-CO₂ process was reduced by an average of 10%, and a maximum of 55%, compared with the conventional process at different fabrication parameters. Cu-Ni coatings produced by the sc-CO₂ process displayed a corrosion potential increased by ~0.05 V over Cu-Ni coatings produced by the conventional process, and by 0.175 V over pure Cu coatings produced by the conventional process. For coatings ~10 µm thick, the internal stresses developed in the sc-CO₂ process were ~20 MPa lower than in the conventional process. Finally, the preferred crystal orientation of the fabricated coatings remained in the (111) direction regardless of the process used or the surfactant content.

  7. Study on the occurrence of platinum in Xinjie Cu-Ni sulfide deposits by a combination of SPM and NAA

    International Nuclear Information System (INIS)

    Li Xiaolin; Zhu Jieqing; Lu Rongrong; Gu Yingmei; Wu Xiankang; Chen Youhong

    1997-01-01

A combination of neutron activation analysis (NAA) and scanning proton microprobe (SPM) was used to study the distribution of platinum-group elements (PGEs) in rocks and ores from the Xinjie Cu-Ni deposit. The minimum detection limits of PGEs by NAA were much improved by means of a nickel-sulfide fire-assay technique for pre-concentration of the PGEs in the ore samples. A simple and effective method was developed for true element mapping in the SPM experiments, with a pair of moveable absorption filters set up in the target chamber for high sensitivity to both major and trace elements. The bulk analysis results by NAA indicated that the PGE mineralization occurred at the base of the Xinjie layered intrusion in clino-pyroxenite rocks, and that the Cu-Ni sulfide minerals disseminated within the rocks had a high abundance of PGEs. However, micro-PIXE analysis of the Cu-Ni sulfide mineral grains did not find PGEs above the MDL of (6-9) × 10⁻⁶ for Rh, Ru and Pd, and 6 × 10⁻⁶ for Pt. The search for platinum was continued by SPM scanning analysis, and some smaller platinum-enriched grains were found in the sulfide minerals. The microscopic analysis results suggested that platinum occurs in the Cu-Ni sulfide matrix as independent arsenide mineral grains; the chemical formula of the arsenide, sperrylite, is PtAs₂. This information on the platinum occurrence is helpful for future mineralogical research and for mineral processing and beneficiation of the Cu-Ni deposit.

  8. Typical failures of CuNi 90/10 seawater tubing systems and how to avoid them

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, Wilhelm [Technical Advisory Service, KM Europa Metal AG, Klosterstr. 29, 49074 Osnabrueck (Germany)

    2004-07-01

For many decades, the copper-nickel alloy CuNi 90/10 (UNS C70600) has been used extensively as a piping material for seawater systems in the shipbuilding, offshore, and desalination industries. The attractive characteristics of this alloy combine excellent resistance to uniform corrosion, remarkable resistance to localised corrosion in chlorinated seawater, and higher erosion resistance than other copper alloys and steel. Furthermore, CuNi 90/10 is resistant to biofouling, providing various economic benefits. In spite of the appropriate properties of the alloy, instances of failure have been experienced in practice. The reasons are mostly attributed to the composition and production of CuNi 90/10 products, the occurrence of erosion-corrosion, and corrosion damage in polluted waters. This paper covers the important areas which have to be considered to ensure successful application of the alloy for seawater tubing. For this purpose, the optimum and critical operating conditions are evaluated, including metallurgical, design and fabrication considerations. For the prevention of erosion-corrosion, the importance of hydrodynamics is demonstrated. In addition, commissioning, shut-down and start-up measures are compiled that are necessary for the establishment and re-establishment of the protective layer. (author)

  9. Molecular dynamics simulation of effects of twin interfaces on Cu/Ni multilayers

    International Nuclear Information System (INIS)

    Fu, Tao; Peng, Xianghe; Weng, Shayuan; Zhao, Yinbo; Gao, Fengshan; Deng, Lijun; Wang, Zhongchang

    2016-01-01

We perform molecular dynamics simulations of indentation on pure Cu and Ni films and on Cu/Ni multilayered films with a cylindrical indenter, aiming to investigate the effects of the cubic-on-cubic interface and the hetero-twin interface on their mechanical properties. We also investigate systematically the formation of twin boundaries in the pure metals and the effects of the cubic-on-cubic and hetero-twin interfaces on the mechanical properties of the multilayers. We find that the slip of the horizontal stacking fault can release the internal stress, resulting in insignificant strengthening. A change in crystal orientation by horizontal movement of the atoms in a layer-by-layer manner is found to initiate the movement of the twin boundary, and the hetero-twin interface is beneficial to the hardening of the multilayers. Moreover, we find that an increasing number of hetero-twin interfaces can further harden the Cu/Ni multilayers.

  10. The Pobei Cu-Ni and Fe ore deposits in NW China are comagmatic evolution products: evidence from ore microscopy, zircon U-Pb chronology and geochemistry

    Energy Technology Data Exchange (ETDEWEB)

    Liu, G.I.; Li, W.Y.; Lu, X.B.; Huo, Y.H.; Zhang, B.

    2017-11-01

The Pobei mafic-ultramafic complex in northwestern China comprises magmatic Cu-Ni sulfide ore deposits coexisting with Fe-Ti oxide deposits. The Poshi, Poyi, and Podong ultramafic intrusions host the Cu-Ni ore. The ultramafic intrusions experienced four stages during their formation, with the intrusion sequence: dunite, hornblende-peridotite, wehrlite and pyroxenite. The wall rock of the ultramafic intrusions is the gabbro intrusion in the southwest of the Pobei complex. The Xiaochangshan magmatic deposit outcrops in the magnetite-mineralized gabbro in the northeastern part of the Pobei complex. The main emplacement events related to mineralization in the Pobei complex are the magnetite-mineralized gabbro related to the Xiaochangshan Fe deposit, the gabbro intrusion associated with the Poyi, Poshi and Podong Cu-Ni deposits, and the ultramafic intrusions that host the Cu-Ni deposits (Poyi and Poshi). The U-Pb age of the magnetite-mineralized gabbro is 276 ± 1.7 Ma, which is similar to that of the Pobei mafic intrusions. The εHf(t) value of zircon in the magnetite-mineralized gabbro is almost the same as that of the gabbro around the Poyi and Poshi Cu-Ni deposits, indicating that the rocks related to the Cu-Ni and magnetite deposits probably originated from the same parental magma. There is a trend of crystallization differentiation in the Harker diagram from the dunite in the Cu-Ni deposit to the magnetite-mineralized gabbro. Monosulfide solid solution fractional crystallization was weak in Pobei; thus, the Pd/Ir values were influenced only by the crystallization of silicate minerals. The more complete the magma evolution, the greater the Pd/Ir ratio. The Pd/Ir values of the dunite, the sulfide-bearing lithofacies (including hornblende peridotite, wehrlite, and pyroxenite) in the Poyi Cu-Ni deposit, the magnetite-mineralized gabbro, and the massive magnetite are 8.55, 12.18, 12.26, and 18.14, respectively. Thus, the massive magnetite was probably the …

  11. The Pobei Cu-Ni and Fe ore deposits in NW China are comagmatic evolution products: evidence from ore microscopy, zircon U-Pb chronology and geochemistry

    International Nuclear Information System (INIS)

    Liu, G.I.; Li, W.Y.; Lu, X.B.; Huo, Y.H.; Zhang, B.

    2017-01-01

The Pobei mafic-ultramafic complex in northwestern China comprises magmatic Cu-Ni sulfide ore deposits coexisting with Fe-Ti oxide deposits. The Poshi, Poyi, and Podong ultramafic intrusions host the Cu-Ni ore. The ultramafic intrusions experienced four stages during their formation, with the intrusion sequence: dunite, hornblende-peridotite, wehrlite and pyroxenite. The wall rock of the ultramafic intrusions is the gabbro intrusion in the southwest of the Pobei complex. The Xiaochangshan magmatic deposit outcrops in the magnetite-mineralized gabbro in the northeastern part of the Pobei complex. The main emplacement events related to mineralization in the Pobei complex are the magnetite-mineralized gabbro related to the Xiaochangshan Fe deposit, the gabbro intrusion associated with the Poyi, Poshi and Podong Cu-Ni deposits, and the ultramafic intrusions that host the Cu-Ni deposits (Poyi and Poshi). The U-Pb age of the magnetite-mineralized gabbro is 276 ± 1.7 Ma, which is similar to that of the Pobei mafic intrusions. The εHf(t) value of zircon in the magnetite-mineralized gabbro is almost the same as that of the gabbro around the Poyi and Poshi Cu-Ni deposits, indicating that the rocks related to the Cu-Ni and magnetite deposits probably originated from the same parental magma. There is a trend of crystallization differentiation in the Harker diagram from the dunite in the Cu-Ni deposit to the magnetite-mineralized gabbro. Monosulfide solid solution fractional crystallization was weak in Pobei; thus, the Pd/Ir values were influenced only by the crystallization of silicate minerals. The more complete the magma evolution, the greater the Pd/Ir ratio. The Pd/Ir values of the dunite, the sulfide-bearing lithofacies (including hornblende peridotite, wehrlite, and pyroxenite) in the Poyi Cu-Ni deposit, the magnetite-mineralized gabbro, and the massive magnetite are 8.55, 12.18, 12.26, and 18.14, respectively. Thus, the massive magnetite was probably the …

  12. AstroCom NYC: A Partnership Between Astronomers at CUNY, AMNH, and Columbia University

    Science.gov (United States)

    Paglione, Timothy; Ford, K. S.; Robbins, D.; Mac Low, M.; Agueros, M. A.

    2014-01-01

AstroCom NYC is a new program designed to improve urban minority students' access to opportunities in astrophysical research by greatly enhancing partnerships between research astronomers in New York City. The partners are minority-serving institutions of the City University of New York and the astrophysics research departments of the American Museum of Natural History and Columbia. AstroCom NYC provides centralized, personalized mentoring, as well as financial and academic support, to CUNY undergraduates throughout their studies, plus the resources and opportunities to further CUNY faculty research with students. The goal is that students' residency at AMNH helps them build a sense of belonging in the field, and inspires and prepares them for graduate study. AstroCom NYC prepares students for research with a rigorous Methods of Scientific Research course developed specifically for this purpose, a laptop, a research mentor, a career mentor, involvement in Columbia outreach activities, scholarships and stipends, Metrocards, and regular assessment for maximum effectiveness. Stipends in part alleviate the burdens at home typical for CUNY students so they may concentrate on their academic success. AMNH serves as the central hub for our faculty and students, who are otherwise dispersed among all five boroughs of the City. With our first cohort we experienced the expected challenges arising from their diverse preparedness, but also far greater challenges than anticipated in scheduling, academic advisement, and molding their expectations. We review Year 1 operations and outcomes, as well as plans for Year 2, when our current students progress to become peer mentors.

  13. Influence of Ni Solute segregation on the intrinsic growth stresses in Cu(Ni) thin films

    International Nuclear Information System (INIS)

    Kaub, T.M.; Felfer, P.; Cairney, J.M.; Thompson, G.B.

    2016-01-01

    Using intrinsic solute segregation in alloys, the compressive stress in a series of Cu(Ni) thin films has been studied. The highest compressive stress was noted in the 5 at.% Ni alloy, with increasing Ni concentration resulting in a subsequent reduction of stress. Atom probe tomography quantified Ni's Gibbsian interfacial excess in the grain boundaries and confirmed that once grain boundary saturation is achieved, the compressive stress was reduced. This letter provides experimental support in elucidating how interfacial segregation of excess adatoms contributes to the post-coalescence compressive stress generation mechanism in thin films. - Graphical abstract: Cu(Ni) film stress relationship with Ni additions. Atom probe characterization confirms solute enrichment in the boundaries, which was linked to stress response.

  14. An improved algorithm of laser spot center detection in strong noise background

    Science.gov (United States)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering was first used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser spot image was carried out to extract the target from the background. Then, morphological filtering was performed to eliminate noise points inside and outside the spot. Finally, the edge of the pretreated spot image was extracted, and the laser spot center was obtained using a circle-fitting method. On the foundation of the circle-fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method can effectively filter background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
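
The processing chain this abstract describes (median filter, binarization, morphological cleanup, edge extraction, circle fit) can be pieced together from standard tools. The sketch below is an illustrative reconstruction under those assumptions, using a fixed threshold and a Kasa least-squares circle fit rather than the authors' exact code.

```python
# Illustrative laser-spot-center pipeline: denoise, binarize, clean up with
# morphology, extract a one-pixel edge, then fit a circle to the edge points.
import numpy as np
from scipy import ndimage

def laser_spot_center(image: np.ndarray, thresh: float):
    smoothed = ndimage.median_filter(image.astype(float), size=3)
    binary = smoothed > thresh                            # fixed threshold (assumed)
    clean = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    edge = clean & ~ndimage.binary_erosion(clean)         # one-pixel boundary
    ys, xs = np.nonzero(edge)
    # Kasa algebraic circle fit: x^2 + y^2 = 2ax + 2by + c, c = r^2 - a^2 - b^2
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return (b, a), r  # (row, col) center and radius in pixels
```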

  15. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro…

  16. A CuNi bimetallic cathode with nanostructured copper array for enhanced hydrodechlorination of trichloroethylene (TCE).

    Science.gov (United States)

    Liu, Bo; Zhang, Hao; Lu, Qi; Li, Guanghe; Zhang, Fang

    2018-09-01

To address the challenge of low hydrodechlorination efficiency with non-noble metals, a CuNi bimetallic cathode with a nanostructured copper array film was fabricated for effective electrochemical dechlorination of trichloroethylene (TCE) in aqueous solution. The CuNi bimetallic cathodes were prepared by a simple one-step electrodeposition of copper onto a Ni foam substrate, with electrodeposition times of 5/10/15/20 min. The optimum electrodeposition time was 10 min, at which copper coated the nickel foam surface as a uniform nanosheet array. This cathode exhibited the highest TCE removal, twice that of the nickel foam cathode. At the same passed charge of 1080 C, TCE removal increased from 33.9 ± 3.3% to 99.7 ± 0.1% as the operating current increased from 5 to 20 mA cm⁻², while the normalized energy consumption decreased from 15.1 ± 1.0 to 2.6 ± 0.01 kWh log⁻¹ m⁻³, owing to the much higher removal efficiency at the higher current. These results suggest that CuNi cathodes prepared by a simple electrodeposition method represent a promising and cost-effective approach for enhanced electrochemical dechlorination.

  17. Dissociated Structure of Dislocation Loops with Burgers Vector a⟨100⟩ in Electron-Irradiated Cu-Ni

    DEFF Research Database (Denmark)

    Bilde-Sørensen, Jørgen; Leffers, Torben; Barlow, P.

    1977-01-01

The rectangular dislocation loops with total Burgers vector a⟨100⟩ which are formed in Cu-Ni alloys during 1 MeV electron irradiation at elevated temperatures have been examined by weak-beam electron microscopy. The loop edges were found to take up a Hirth-lock configuration, dissociating into two …

  18. Background Traffic-Based Retransmission Algorithm for Multimedia Streaming Transfer over Concurrent Multipaths

    Directory of Open Access Journals (Sweden)

    Yuanlong Cao

    2012-01-01

Content-rich multimedia streaming will be among the most attractive services in next-generation networks. With its ability to distribute data across multiple end-to-end paths based on SCTP's multihoming feature, concurrent multipath transfer SCTP (CMT-SCTP) has been regarded as the most promising technology for efficient multimedia streaming transmission. However, current research on CMT-SCTP mainly focuses on algorithms related to data delivery performance and seldom considers background traffic factors, even though the background traffic of realistic network environments has an important impact on CMT-SCTP performance. In this paper, we first investigate the effect of background traffic on the performance of CMT-SCTP using a realistic simulation topology with reasonable background traffic in NS2; then, based on the localness nature of background flows, a further improved retransmission algorithm, named RTX_CSI, is proposed to achieve greater benefits in terms of average throughput and a high quality of experience for multimedia streaming services.
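
The abstract does not spell out RTX_CSI's internals, so no attempt is made to reproduce them here. Purely as a generic illustration of the idea of steering retransmissions by per-path background-traffic conditions, a multihomed sender might score each candidate destination and retransmit on the least-loaded one, as in this hypothetical sketch (all names and weights are invented):

```python
# Hypothetical retransmission-path selection for a multihomed (CMT-style)
# sender: prefer the destination whose recent loss rate and smoothed RTT
# suggest the least background congestion. Not the RTX_CSI algorithm itself.
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    srtt_ms: float    # smoothed round-trip time, ms
    loss_rate: float  # fraction of recently lost packets, 0.0 - 1.0

def pick_retransmission_path(paths: list[PathStats]) -> PathStats:
    # Lower score = less apparent background load; the loss weight is illustrative.
    return min(paths, key=lambda p: p.srtt_ms * (1.0 + 10.0 * p.loss_rate))

paths = [PathStats("path-A", srtt_ms=40.0, loss_rate=0.05),
         PathStats("path-B", srtt_ms=55.0, loss_rate=0.00)]
print(pick_retransmission_path(paths).name)  # path-B: lossless despite higher RTT
```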

  19. Fatigue of thin walled tubes in copper alloy CuNi10

    DEFF Research Database (Denmark)

    Lambertsen, Søren Heide; Damkilde, Lars; Jepsen, Michael S.

    2016-01-01

The current work concerns the investigation of the fatigue resistance of CuNi10 tubes, which are frequently used in heat exchangers of large ship engines. The lifetime performance of the exchanger tubes is greatly affected by the environmental conditions, especially the temperature … by means of the ASTM E739 guideline and the one-sided tolerance limit factor method. The tests show good fatigue resistance, and the risk of failure is low for the case of a ship heat exchanger.

  20. A study of the composition and microstructure of nanodispersed Cu-Ni alloys obtained by different routes from copper and nickel oxides

    Energy Technology Data Exchange (ETDEWEB)

    Cangiano, Maria de los A; Ojeda, Manuel W., E-mail: mojeda@unsl.edu.ar; Carreras, Alejo C.; Gonzalez, Jorge A.; Ruiz, Maria del C

    2010-11-15

Mixtures of CuO and NiO were prepared by two different techniques, and the oxides were then reduced with H₂. Method A involved the preparation of mechanical mixtures of CuO and NiO using different milling and pelletizing processes. Method B involved the chemical synthesis of the mixture of CuO and NiO. The route used to prepare the copper and nickel oxide mixture was found to have a great influence on the characteristics of the bimetallic Cu-Ni particles obtained. Observations by X-ray diffraction (XRD) showed that although both methods led to the Cu-Ni solid solution, the diffractogram of the alloy obtained with method A revealed the presence of NiO together with the alloy. Temperature-programmed reduction (TPR) experiments indicated that the alloy forms at lower temperatures when using method B. Scanning electron microscopy (SEM) studies revealed notable differences in the morphology and size distribution of the bimetallic particles synthesized by the different routes. The electron probe microanalysis (EPMA) results evidenced the existence of a small amount of oxygen in both cases and demonstrated that the alloy synthesized using method B presented a homogeneous composition with a Cu-Ni ratio close to 1:1. On the contrary, the alloy obtained using method A was not homogeneous throughout the volume of the solid; the homogeneity depended on the mechanical treatment undergone by the mixture of the oxides. - Research Highlights: → Study of the properties of Cu-Ni alloys synthesized by two different routes. → Mixtures of Cu and Ni oxides prepared by two techniques were reduced with H₂. → Mixtures of oxides were obtained by a mechanical process and by the citrate-gel route. → The characterizations were carried out by TPR, XRD, SEM and EPMA. → The route used to prepare the oxide mixtures influences the Cu-Ni alloy obtained.

  1. A study of the composition and microstructure of nanodispersed Cu-Ni alloys obtained by different routes from copper and nickel oxides

    International Nuclear Information System (INIS)

    Cangiano, Maria de los A; Ojeda, Manuel W.; Carreras, Alejo C.; Gonzalez, Jorge A.; Ruiz, Maria del C

    2010-01-01

Mixtures of CuO and NiO were prepared by two different techniques, and the oxides were then reduced with H₂. Method A involved the preparation of mechanical mixtures of CuO and NiO using different milling and pelletizing processes. Method B involved the chemical synthesis of the mixture of CuO and NiO. The route used to prepare the copper and nickel oxide mixture was found to have a great influence on the characteristics of the bimetallic Cu-Ni particles obtained. Observations by X-ray diffraction (XRD) showed that although both methods led to the Cu-Ni solid solution, the diffractogram of the alloy obtained with method A revealed the presence of NiO together with the alloy. Temperature-programmed reduction (TPR) experiments indicated that the alloy forms at lower temperatures when using method B. Scanning electron microscopy (SEM) studies revealed notable differences in the morphology and size distribution of the bimetallic particles synthesized by the different routes. The electron probe microanalysis (EPMA) results evidenced the existence of a small amount of oxygen in both cases and demonstrated that the alloy synthesized using method B presented a homogeneous composition with a Cu-Ni ratio close to 1:1. On the contrary, the alloy obtained using method A was not homogeneous throughout the volume of the solid; the homogeneity depended on the mechanical treatment undergone by the mixture of the oxides. - Research Highlights: → Study of the properties of Cu-Ni alloys synthesized by two different routes. → Mixtures of Cu and Ni oxides prepared by two techniques were reduced with H₂. → Mixtures of oxides were obtained by a mechanical process and by the citrate-gel route. → The characterizations were carried out by TPR, XRD, SEM and EPMA. → The route used to prepare the oxide mixtures influences the Cu-Ni alloy obtained.

  2. Autoradiographical Detection of Tritium in Cu-Ni Alloy by Scanning Electron Microscopy

    OpenAIRE

    高安, 紀; 中野, 美樹; 竹内, 豊三郎

    1981-01-01

The autoradiograph of tritium dispersed in a Cu-Ni alloy sheet by the ⁶Li(n,α)³H reaction was obtained with a scanning electron microscope. Prior to the neutron irradiation, ⁶Li was deposited on the sheet by evaporation. The liquid emulsion Fuji-ER was used in this study. The distribution of tritium was detected from the dispersion of silver grains remaining in the emulsion after development.

  3. Influence of Ni thickness on oscillation coupling in Cu/Ni multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Gagorowska, B; Dus-Sitek, M [Institute of Physics, Czestochowa University of Technology, Al. Armii Krajowej 19, 42-200 Czestochowa (Poland)

    2007-08-15

The results of an investigation of the magnetic properties of [Cu/Ni]×100 multilayers are presented. Samples were deposited by the face-to-face sputtering method onto silicon substrates; the thickness of the Cu layer was constant (d_Cu = 2 nm) and the thickness of the Ni layer was varied (1 nm ≤ d_Ni ≤ 6 nm). In the Cu/Ni multilayers, antiferromagnetic (A-F) coupling was observed for Ni layer thicknesses greater than 2 nm; for Ni thicknesses below 2 nm, A-F coupling is absent.

  4. Influence of Ni thickness on oscillation coupling in Cu/Ni multilayers

    International Nuclear Information System (INIS)

    Gagorowska, B; Dus-Sitek, M

    2007-01-01

The results of an investigation of the magnetic properties of [Cu/Ni]×100 multilayers are presented. Samples were deposited by the face-to-face sputtering method onto silicon substrates; the thickness of the Cu layer was constant (d_Cu = 2 nm) and the thickness of the Ni layer was varied (1 nm ≤ d_Ni ≤ 6 nm). In the Cu/Ni multilayers, antiferromagnetic (A-F) coupling was observed for Ni layer thicknesses greater than 2 nm; for Ni thicknesses below 2 nm, A-F coupling is absent.

  5. Electrode kinetics of ethanol oxidation on novel CuNi alloy supported catalysts synthesized from PTFE suspension

    Science.gov (United States)

    Sen Gupta, S.; Datta, J.

    An understanding of the kinetics and mechanism of the electrochemical oxidation of ethanol is of considerable interest for the optimization of the direct ethanol fuel cell. In this paper, the electro-oxidation of ethanol in sodium hydroxide solution has been studied over 70:30 CuNi alloy supported binary platinum electrocatalysts. These comprised mixed deposits of Pt with Ru or Mo. The electrodepositions were carried out under galvanostatic conditions from a dilute suspension of polytetrafluoroethylene (PTFE) containing the respective metal salts. Characterization of the catalyst layers by scanning electron microscopy (SEM) with energy-dispersive X-ray (EDX) analysis indicated that this preparation technique yields well-dispersed catalyst particles on the CuNi alloy substrate. Cyclic voltammetry, polarization studies and electrochemical impedance spectroscopy were used to investigate the kinetics and mechanism of ethanol electro-oxidation over a range of NaOH and ethanol concentrations. The relevant parameters, such as the Tafel slope, the charge transfer resistance and the reaction orders with respect to OH- ions and ethanol, were determined.

  6. Magnetic susceptibility, specific heat and magnetic structure of CuNi2(PO4)2

    International Nuclear Information System (INIS)

    Escobal, Jaione; Pizarro, Jose L.; Mesa, Jose L.; Larranaga, Aitor; Fernandez, Jesus Rodriguez; Arriortua, Maria I.; Rojo, Teofilo

    2006-01-01

    The CuNi2(PO4)2 phosphate has been synthesized by the ceramic method at 800 °C in air. The crystal structure consists of a three-dimensional skeleton constructed from MO4 (M(II) = Cu and Ni) planar squares and M2O8 dimers with square-pyramidal geometry, which are interconnected by (PO4)3- oxoanions with tetrahedral geometry. The magnetic behavior has been studied on a powdered sample using susceptibility, specific heat and neutron diffraction data. The bimetallic copper(II)-nickel(II) orthophosphate exhibits three-dimensional magnetic ordering at approximately 29.8 K. However, its complex crystal structure hampers any parametrization of the J-exchange parameter. The specific heat measurements exhibit a three-dimensional magnetic ordering (λ-type) peak at 29.5 K. The magnetic structure of this phosphate shows ferromagnetic interactions inside the Ni2O8 dimers, whereas the sublattice of Cu(II) ions presents antiferromagnetic couplings along the y-axis. The change of sign in the magnetic unit cell due to the [1/2, 0, 1/2] propagation vector determines a purely antiferromagnetic structure. - Graphical abstract: Magnetic structure of CuNi2(PO4)2.

  7. The Effect of Modulation Ratio of Cu/Ni Multilayer Films on the Fretting Damage Behaviour of Ti-811 Titanium Alloy.

    Science.gov (United States)

    Zhang, Xiaohua; Liu, Daoxin; Li, Xiaoying; Dong, Hanshan; Xi, Yuntao

    2017-05-26

    To improve the fretting damage (fretting wear and fretting fatigue) resistance of the Ti-811 titanium alloy, three Cu/Ni multilayer films with the same modulation period thickness (200 nm) and different modulation ratios (3:1, 1:1, 1:3) were deposited on the surface of the alloy via ion-assisted magnetron sputtering deposition (IAD). The bonding strength, micro-hardness, and toughness of the films were evaluated, and the effect of the modulation ratio on the room-temperature fretting wear (FW) and fretting fatigue (FF) resistance of the alloy was determined. The results indicated that the IAD technique can be successfully used to prepare Cu/Ni multilayer films with high bonding strength, low friction, and good toughness, which improve the room-temperature FF and FW resistance of the alloy. For the same modulation period (200 nm), the micro-hardness of the coated alloy increased, its friction decreased, and its FW resistance improved with increasing modulation ratio of the Ni-to-Cu layer thickness. However, the FF resistance of the coated alloy did not increase monotonically with the modulation ratio. Among the three Cu/Ni multilayer films, the one with a modulation ratio of 1:1 conferred the highest FF resistance to the Ti-811 alloy, owing mainly to its unique combination of good toughness, high strength, and low friction.

  8. Local radiofrequency-induced hyperthermia using CuNi nanoparticles with therapeutically suitable Curie temperature

    International Nuclear Information System (INIS)

    Kuznetsov, Anatoly A.; Leontiev, Vladimir G.; Brukvin, Vladimir A.; Vorozhtsov, Georgy N.; Kogan, Boris Ya.; Shlyakhtin, Oleg A.; Yunin, Alexander M.; Tsybin, Oleg I.; Kuznetsov, Oleg A.

    2007-01-01

    Copper-nickel (CuNi) alloy nanoparticles with Curie temperatures (Tc) from 40 to 60 °C were synthesized by several techniques. Varying the synthesis parameters and post-treatment, as well as separation by size and Tc, makes it possible to produce mediator nanoparticles with the desired parameters for magnetic fluid hyperthermia with parametric feedback temperature control. In vitro and in vivo animal experiments have demonstrated the feasibility of temperature-controlled heating of tissue laden with the particles by an external alternating magnetic field

  9. Molecular Dynamics Simulation of the CuNi Alloy Using the Embedded Atom Potential

    Directory of Open Access Journals (Sweden)

    Eşe Ergün AKPINAR

    2009-04-01

    Full Text Available In this study, the molecular dynamics simulation of the CuNi alloy was investigated using the Sutton-Chen (SC) potential. The potential was obtained by fitting its function parameters to experimental data for Cu, Ni and CuNi. To describe the crystallization process of the CuNi alloy at the atomic level, constant-pressure, constant-temperature (NPT) molecular dynamics simulation based on the embedded atom method was applied. The structure and crystallization ability of the CuNi alloy, cooled from the liquid phase at a cooling rate of 4x10^11 K/s, were examined using the radial distribution function. The simulation was carried out with a system of 1024 atoms in a cubic cell with periodic boundary conditions along the three principal directions. The equations of motion were solved numerically using the Verlet algorithm. For the cooling experiment, the initial liquid state was obtained by heating the solid to the liquid temperature. The system was melted and homogenized at a temperature above the 1300-1550 K liquefaction region and then rapidly cooled to room temperature.

  10. Local radiofrequency-induced hyperthermia using CuNi nanoparticles with therapeutically suitable Curie temperature

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Anatoly A. [Institute of Biochemical Physics, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Leontiev, Vladimir G. [Institute of Metallurgy, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Brukvin, Vladimir A. [Institute of Metallurgy, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Vorozhtsov, Georgy N. [NIOPIK Organic Intermediates and Dyes Institute, Moscow 103787 (Russian Federation); Kogan, Boris Ya. [NIOPIK Organic Intermediates and Dyes Institute, Moscow 103787 (Russian Federation); Shlyakhtin, Oleg A. [Institute of Chemical Physics, Russian Academy of Sciences (RAS), Kosygin St. 4, Moscow 119991 (Russian Federation); Yunin, Alexander M. [Institute of Biochemical Physics, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Tsybin, Oleg I. [Institute of Metallurgy, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Kuznetsov, Oleg A. [Institute of Biochemical Physics, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation)]. E-mail: kuznetsov_oa@yahoo.com

    2007-04-15

    Copper-nickel (CuNi) alloy nanoparticles with Curie temperatures (Tc) from 40 to 60 °C were synthesized by several techniques. Varying the synthesis parameters and post-treatment, as well as separation by size and Tc, makes it possible to produce mediator nanoparticles with the desired parameters for magnetic fluid hyperthermia with parametric feedback temperature control. In vitro and in vivo animal experiments have demonstrated the feasibility of temperature-controlled heating of tissue laden with the particles by an external alternating magnetic field.

  11. Microstructure, thickness and sheet resistivity of Cu/Ni thin film produced by electroplating technique on the variation of electrolyte temperature

    Science.gov (United States)

    Toifur, M.; Yuningsih, Y.; Khusnani, A.

    2018-03-01

    In this research, Cu/Ni thin films were produced by an electroplating technique. The deposition was carried out in a plating bath using Cu as the cathode and Ni as the anode. The electrolyte solution was made from a mixture of HBrO3 (7.5 g), NiSO4 (100 g), NiCl2 (15 g), and distilled water (250 ml). The electrolyte temperature was varied from 40 °C up to 80 °C to make the Ni ions in the solution move more easily to the Cu cathode. The deposition was carried out for 2 minutes at a potential of 1.5 V. The samples were characterized for Ni film thickness, microstructure, and sheet resistivity. The results showed that in all samples Ni had deposited on the Cu substrate to form Cu/Ni. Raising the electrolyte temperature increased the Ni film thickness. The EDS spectra show that the samples contain the elements Ni and Cu as well as the compounds NiO and CuO. An additional element and compound, Pt and PtO2, are found in the Cu/Ni sample deposited at an electrolyte temperature of 70 °C. The XRD patterns show several crystalline phases, i.e. Cu, Ni, and NiO, whereas CuO and PtO2 are amorphous. The sheet resistivity decreases linearly with increasing electrolyte temperature.

  12. A semi-supervised classification algorithm using the TAD-derived background as training data

    Science.gov (United States)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters, such that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) classifier, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
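
    A minimal sketch of this semi-supervised scheme, assuming the largest connected components of a mutual k-NN graph are used directly as training classes for an MDM classifier; the function name, parameter defaults, and synthetic usage are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

def tad_mdm_classify(pixels, k=8, n_classes=5):
    """pixels: (n_pixels, n_bands) array of spectra."""
    # Mutual k-NN graph: keep an edge only when both pixels list each other.
    knn = kneighbors_graph(pixels, k, mode="connectivity")
    mutual = knn.multiply(knn.T)

    # Connected components of the graph act as background clusters.
    _, labels = connected_components(mutual, directed=False)
    sizes = np.bincount(labels)
    train_ids = np.argsort(sizes)[::-1][:n_classes]   # largest components

    # Minimum Distance to the Mean: assign each pixel to the nearest mean.
    means = np.stack([pixels[labels == c].mean(axis=0) for c in train_ids])
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Example on synthetic spectra:
# rng = np.random.default_rng(0)
# X = np.vstack([rng.normal(m, 0.05, (200, 4)) for m in (0.2, 0.5, 0.8)])
# classes = tad_mdm_classify(X, k=8, n_classes=3)
```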

  13. A dilute Cu(Ni) alloy for synthesis of large-area Bernal stacked bilayer graphene using atmospheric pressure chemical vapour deposition

    Energy Technology Data Exchange (ETDEWEB)

    Madito, M. J.; Bello, A.; Dangbegnon, J. K.; Momodu, D. Y.; Masikhwa, T. M.; Barzegar, F.; Manyala, N., E-mail: ncholu.manyala@up.ac.za [Department of Physics, Institute of Applied Materials, SARCHI Chair in Carbon Technology and Materials, University of Pretoria, Pretoria 0028 (South Africa); Oliphant, C. J.; Jordaan, W. A. [National Metrology Institute of South Africa, Private Bag X34, Lynwood Ridge, Pretoria 0040 (South Africa); Fabiane, M. [Department of Physics, Institute of Applied Materials, SARCHI Chair in Carbon Technology and Materials, University of Pretoria, Pretoria 0028 (South Africa); Department of Physics, National University of Lesotho, P.O. Roma 180 (Lesotho)

    2016-01-07

    A bilayer graphene film obtained on copper (Cu) foil is known to have a significant fraction of non-Bernal (AB) stacking, whereas on copper/nickel (Cu/Ni) thin films it is known to grow over a large area with AB stacking. In this study, annealed Cu foils for graphene growth were doped with small concentrations of Ni to obtain dilute Cu(Ni) alloys in which the hydrocarbon decomposition rate of Cu is enhanced by Ni during synthesis of large-area AB-stacked bilayer graphene using atmospheric pressure chemical vapour deposition. The Ni doping concentration and the homogeneity of the Ni distribution in the Cu foil were confirmed with inductively coupled plasma optical emission spectrometry and proton-induced X-ray emission. An electron backscatter diffraction map showed that the Cu foils have a single (001) surface orientation, which leads to a uniform growth rate on the Cu surface in the early stages of graphene growth and to a uniform Ni surface concentration distribution through segregation kinetics. The increase in Ni surface concentration in the foils was investigated with time-of-flight secondary ion mass spectrometry. The quality of the graphene, the number of graphene layers, and the layer stacking order in the synthesized bilayer graphene films were confirmed by Raman and electron diffraction measurements. A four-point probe station was used to measure the sheet resistance of the graphene films. Compared to Cu foil, the prepared dilute Cu(Ni) alloy demonstrated a good capability for growing large-area AB-stacked bilayer graphene film owing to the increased Ni content in the Cu surface layer.

  14. Analysis of coincidence γ-ray spectra using advanced background elimination, unfolding and fitting algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Morhac, M., E-mail: fyzimiro@savba.sk, fyzimiro@flnr.jinr.ru; Matousek, V., E-mail: matousek@savba.sk; Kliman, J.; Krupa, L.L.; Jandel, M.

    2003-04-21

    Efficient algorithms for the analysis of multiparameter γ-ray spectra are presented. They allow one to search for peaks, to separate peaks from the background, to improve the resolution, and to fit one-, two-, and three-parameter γ-ray spectra.

  15. Application of Remote-Sensing Observations for Detecting Patterns of Localization of Cu-Ni Mineralization of the Norilsk Ore Region

    Science.gov (United States)

    Milovsky, G. A.; Ishmukhametova, V. T.; Shemyakina, E. M.

    2017-12-01

    Methods for a complex analysis of materials from space-based, gravimetric, and magnetometric surveys were developed on the basis of a study of reference fields of the Norilsk ore region (Imangda, etc.) for detecting patterns of localization of Cu-Ni (with PGM) mineralization in intrusive complexes of the northwestern framing of the Siberian Platform.

  16. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    Science.gov (United States)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm based on OpenCV is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images. Firstly, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that the method is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray images into black-and-white images in order to provide a better condition for detection in the image sequences. Finally, exploiting the relevance of moving objects between different frames together with mathematical morphology processing, we can eliminate noise, reduce the area, and smooth region boundaries. Experimental results prove that our algorithm precisely achieves rapid detection of small infrared targets.
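
    A minimal OpenCV sketch of the combined scheme described above: frame differencing AND'ed with subtraction against a running-average adaptive background, Otsu thresholding, and morphological cleanup. The input file name and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ir_sequence.avi")  # hypothetical input sequence
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
background = prev.astype(np.float32)       # adaptive background model
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame difference (inter-frame motion) and background subtraction.
    d_frame = cv2.absdiff(gray, prev)
    d_bg = cv2.absdiff(gray, cv2.convertScaleAbs(background))

    # Otsu's optimal thresholding on each difference image.
    _, m1 = cv2.threshold(d_frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, m2 = cv2.threshold(d_bg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(m1, m2)

    # Morphology to suppress noise and smooth region boundaries.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Adaptive background update: a slow running average of each pixel.
    cv2.accumulateWeighted(gray, background, 0.02)
    prev = gray
```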

  17. Effect of chemical etching on the Cu/Ni metallization of poly (ether ether ketone)/carbon fiber composites

    International Nuclear Information System (INIS)

    Di Lizhi; Liu Bin; Song Jianjing; Shan Dan; Yang Dean

    2011-01-01

    Poly(ether ether ketone)/carbon fiber composites (PEEK/Cf) were chemically etched in a Cr2O3/H2SO4 solution, electroless plated with copper and then electroplated with nickel. The effects of chemical etching time and temperature on the adhesive strength between the PEEK/Cf and the Cu/Ni layers were studied by the thermal shock method. The electrical resistance of some samples was measured. X-ray photoelectron spectroscopy (XPS) was used to analyze the surface composition and functional groups. Scanning electron microscopy (SEM) was performed to observe the surface morphology of the composite, the chemically etched sample, the plated sample and the peeled metal layer. The results indicated that the C=O bond content increased after chemical etching. With increasing etching temperature and time, more and more cracks and partially exposed carbon fibers appeared at the surface of the PEEK/Cf composites, and the adhesive strength increased accordingly. When the composites were etched at 60 °C for 25 min or at 70-80 °C for more than 15 min, the Cu/Ni metallization layer could withstand four thermal shock cycles without bubbling, and the electrical resistivity of the metal layer of these samples increased with increasing etching temperature and time.

  18. Self-consistent electronic structure and segregation profiles of the Cu-Ni (001) random-alloy surface

    DEFF Research Database (Denmark)

    Ruban, Andrei; Abrikosov, I. A.; Kats, D. Ya.

    1994-01-01

    We have calculated the electronic structure and segregation profiles of the (001) surface of random Cu-Ni alloys with varying bulk concentrations by means of the coherent potential approximation and the linear muffin-tin-orbitals method. Exchange and correlation were included within the local-density approximation. Temperature effects were accounted for by means of the cluster-variation method and, for comparison, by mean-field theory. The necessary interaction parameters were calculated by the Connolly-Williams method generalized to the case of a surface of a random alloy. We find the segregation profiles...

  19. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was

  20. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    International Nuclear Information System (INIS)

    Solomon, Justin; Samei, Ehsan

    2014-01-01

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was

  1. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended to the RBR procedure. After this, a paired-dataset-based k-NN score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared to some other state-of-the-art anomaly detection methods, and is easy to implement.
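
    The full RBRSE pipeline is involved; as a hedged illustration of only the final paired-dataset k-NN scoring idea, one might score each pixel by its distance to the k-th nearest robust-background sample. The function name, data layout, and scoring rule are assumptions, not the authors' formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_anomaly_score(pixels, background, k=5):
    """Score each pixel by its distance to the k-th nearest background
    sample; large scores indicate a poor fit to the background model."""
    nn = NearestNeighbors(n_neighbors=k).fit(background)
    dist, _ = nn.kneighbors(pixels)
    return dist[:, -1]  # distance to the k-th neighbour

# Example with a synthetic robust background set and a few anomalies:
# rng = np.random.default_rng(1)
# bg = rng.normal(0, 1, (500, 10))
# px = np.vstack([bg[:50], rng.normal(4, 1, (5, 10))])
# scores = knn_anomaly_score(px, bg)
```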

  2. CO2 activation on bimetallic CuNi nanoparticles

    Directory of Open Access Journals (Sweden)

    Natalie Austin

    2016-10-01

    Full Text Available Density functional theory calculations have been performed to investigate the structural, electronic, and CO2 adsorption properties of 55-atom bimetallic CuNi nanoparticles (NPs) in core-shell and decorated architectures, as well as of their monometallic counterparts. Our results revealed that, with respect to the monometallic Cu55 and Ni55 parents, the formation of decorated Cu12Ni43 and core-shell Cu42Ni13 is energetically favorable. We found that CO2 chemisorbs on monometallic Ni55, core-shell Cu13Ni42, and decorated Cu12Ni43 and Cu43Ni12, whereas it physisorbs on monometallic Cu55 and core-shell Cu42Ni13. The presence of surface Ni on the NPs is key to strongly adsorbing and activating the CO2 molecule (linear-to-bent transition and elongation of the C=O bonds). This activation occurs through charge transfer from the NPs to the CO2 molecule, where the local metal d-orbital density localization on surface Ni plays a pivotal role. This work identifies insightful structure-property relationships for CO2 activation and highlights the importance of keeping a balance between NP stability and CO2 adsorption behavior in designing catalytic bimetallic NPs that activate CO2.

  3. A Dynamic Enhancement With Background Reduction Algorithm: Overview and Application to Satellite-Based Dust Storm Detection

    Science.gov (United States)

    Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.

    2017-12-01

    This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility to both automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.
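
    A minimal numpy sketch of the general DEBRA idea as described above, assuming a precomputed clear-sky equivalent of a spectral dust test is available; the scaling thresholds, array names, and blend rule are illustrative assumptions.

```python
import numpy as np

def debra_confidence(test, clear_sky_test, lo=0.0, hi=2.0):
    """Confidence [0, 1] that the signal exceeds its clear-sky background."""
    excess = test - clear_sky_test          # background-reduced signal
    return np.clip((excess - lo) / (hi - lo), 0.0, 1.0)

def enhance_rgb(rgb, confidence, dust_colour=(1.0, 1.0, 0.0)):
    """Blend a dust colour into false-colour imagery, weighted by the
    confidence factor, so dust stands out against the local scene."""
    c = confidence[..., None]
    return (1.0 - c) * rgb + c * np.asarray(dust_colour)
```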

  4. Mechanical properties and bending strain effect on Cu-Ni sheathed MgB2 superconducting tape

    International Nuclear Information System (INIS)

    Fu, Minyi; Chen, Jiangxing; Jiao, Zhengkuan; Kumakura, H.; Togano, K.; Ding, Liren; Zhang, Yong; Chen, Zhiyou; Han, Hanmin; Chen, Jinglin

    2004-01-01

    The Young's modulus (E) of Cu-Ni sheathed MgB2 monofilament tape was measured using an electric method. It is about 8.05 x 10^10 Pa, of the same order as that of Cu and its alloys. We found that the lower E value of the MgB2 component seemed to be related to the lower filament density. The benefits of pre-compression in the filaments are discussed in terms of improving the stress distribution in the wires and tapes during winding and operation of superconducting magnets. The magnetic field dependence of Jc was investigated at 4.2 K on samples subjected to various strain levels through bending with different radii

  5. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    Science.gov (United States)

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Owing to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing the denoising process.

  6. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm

    Directory of Open Access Journals (Sweden)

    Yung-Yue Chen

    2018-05-01

    Full Text Available Mobile devices are often used in our daily lives for speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H2 estimator. Owing to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing the denoising process.

  7. An augmented space recursive method for the first principles study of concentration profiles at CuNi alloy surfaces

    International Nuclear Information System (INIS)

    Dasgupta, I.; Mookerjee, A.

    1995-07-01

    We present here a first-principles method for the calculation of the effective cluster interactions for semi-infinite solid alloys, required for the study of surface segregation and surface ordering on disordered surfaces. Our method is based on the augmented space recursion coupled with the orbital peeling method of Burke in the framework of the TB-LMTO. Our study of surface segregation in CuNi alloys demonstrates strong copper segregation and a monotonic concentration profile throughout the concentration range. (author). 35 refs, 4 figs, 2 tabs

  8. Investigations on Cu-Ni and Cu-Al systems with secondary ion mass spectrometry (SIMS)

    International Nuclear Information System (INIS)

    Rodriguez-Murcia, H.; Beske, H.E.

    1976-04-01

    The ratio of the ionization coefficients of secondary atomic ions emitted from the two-component systems Cu-Ni and Cu-Al was investigated as a function of the concentration of the two components. In the low concentration range, the ratio of the ionization coefficients is constant. An influence of the phase composition on the ratio of the ionization coefficients was found in the Cu-Al system. In addition, the cluster ion emission was investigated as a function of the concentration and the phase composition of the samples. The secondary atomic ion intensity was influenced by the presence of cluster ions. The importance of cluster ions in quantitative analysis and phase determination by means of secondary ion mass spectrometry is discussed. (orig.) [de]

  9. Phase unwrapping algorithm based on multi-frequency fringe projection and fringe background for fringe projection profilometry

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Zhao, Hong; Gu, Feifei; Ma, Yueyang

    2015-01-01

    A phase unwrapping algorithm specially designed for phase-shifting fringe projection profilometry (FPP) is proposed. It combines a revised dual-frequency fringe projection algorithm with a proposed fringe-background-based quality-guided phase unwrapping algorithm (FB-QGPUA). The phase demodulated from the high-frequency fringe patterns is partially unwrapped using that demodulated from the low-frequency ones. FB-QGPUA is then adopted to further unwrap the partially unwrapped phase. The influence of phase error on the measurement is investigated, and a strategy for selecting the fringe pitch is given. Experiments demonstrate that the proposed method is very robust and efficient. (paper)
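
    For orientation, the standard dual-frequency temporal unwrapping step that the paper revises can be sketched as follows, assuming the low-frequency phase is already unwrapped; the function and variable names are illustrative.

```python
import numpy as np

def dual_frequency_unwrap(phi_high_wrapped, phi_low, freq_ratio):
    """Unwrap the high-frequency phase using the low-frequency phase.

    freq_ratio = f_high / f_low (equivalently pitch_low / pitch_high).
    The low-frequency phase predicts the fringe order k of each pixel."""
    k = np.round((freq_ratio * phi_low - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * k
```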

  10. CuNi NPs supported on MIL-101 as highly active catalysts for the hydrolysis of ammonia borane

    Science.gov (United States)

    Gao, Doudou; Zhang, Yuhong; Zhou, Liqun; Yang, Kunzhou

    2018-01-01

    Catalysts containing bimetallic Cu-Ni nanoparticles were successfully synthesized by in situ reduction of Cu2+ and Ni2+ salts inside the highly porous and hydrothermally stable metal-organic framework MIL-101 via a simple liquid impregnation method. For a total metal loading of 3 x 10^-4 mol, the Cu2Ni1@MIL-101 catalyst shows higher catalytic activity than CuxNiy@MIL-101 with other molar ratios of Cu and Ni (x, y = 0, 0.5, 1.5, 2, 2.5, 3). The Cu2Ni1@MIL-101 catalyst also has the highest catalytic activity compared to its monometallic Cu and Ni counterparts and to pure bimetallic CuNi nanoparticles in the hydrolytic dehydrogenation of ammonia borane (AB) at room temperature. In the hydrolysis reaction, the Cu2Ni1@MIL-101 catalyst exhibits excellent catalytic performance, with a high turnover frequency (TOF) of 20.9 mol H2 min^-1 (mol Cu)^-1 and a very low activation energy of 32.2 kJ mol^-1. This excellent catalytic activity is attributed to the strong bimetallic synergistic effects, the uniform distribution of the nanoparticles, and the bi-functional effects between the CuNi nanoparticles and the MIL-101 host. Moreover, the catalyst displays satisfactory durability after five cycles of hydrolytic H2 release from AB. Such non-noble metal catalysts have broad prospects for commercial application in the field of hydrogen storage materials owing to their low price and excellent catalytic activity.

  11. Effect of preparation conditions on the diffusion parameters of Cu/Ni thin films

    Energy Technology Data Exchange (ETDEWEB)

    Rammo, N.N.; Makadsi, M.N. [College of Science, Baghdad University, Baghdad (Iraq); Abdul-Lettif, A.M. [College of Science, Babylon University, Hilla (Iraq)

    2004-11-01

    Diffusion coefficients of vacuum-deposited Cu/Ni bilayer thin films were determined in the temperature range 200-500 °C using X-ray photoelectron spectroscopy, sheet resistance measurements, and X-ray diffraction analysis. The difference between the results of the present work and those of previous relevant investigations may be attributed to differences in the film microstructure, which is controlled by the preparation conditions. Therefore, the effects of deposition rate, substrate temperature, film thickness, and substrate structure on the diffusion parameters were investigated separately. It is shown that the diffusion activation energy (Q) decreases as the deposition rate increases, whereas Q increases as the substrate temperature and film thickness increase. The value of Q for films deposited on amorphous substrates is less than that for films deposited on single-crystal substrates. (copyright 2004 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  12. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920x1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under real-time constraints. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance is also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented towards an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
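
    For reference, the software counterpart of the implemented algorithm is available in OpenCV as BackgroundSubtractorMOG2; a brief usage sketch follows (the file name and parameter values are illustrative assumptions).

```python
import cv2

# Per-pixel Gaussian Mixture Model background subtractor from OpenCV.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=False)

cap = cv2.VideoCapture("video_1080p.mp4")  # hypothetical 1080p stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)      # GMM update + classification
```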

  13. A novel robust and efficient algorithm for charged particle tracking in high background flux

    International Nuclear Information System (INIS)

    Fanelli, C; Cisbani, E; Dotto, A Del

    2015-01-01

    The high luminosity that will be reached in the new generation of high energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity of up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated neural network (NN) has been designed for fast and efficient association of the hits measured by the GEM detector; (iii) the resolution of the associated hit measurements is further improved through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)

  14. Morphology, optical and electrical properties of Cu-Ni nanoparticles in a-C:H prepared by co-deposition of RF-sputtering and RF-PECVD

    Energy Technology Data Exchange (ETDEWEB)

    Ghodselahi, T., E-mail: ghodselahi@ipm.ir [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Vesaghi, M.A. [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Department of Physics, Sharif University of Technology, P.O. Box 11365-9161, Tehran (Iran, Islamic Republic of); Gelali, A.; Zahrabi, H.; Solaymani, S. [Young Researchers Club, Islamic Azad University, Kermanshah Branch, Kermanshah (Iran, Islamic Republic of)

    2011-11-01

    We report the optical and electrical properties of Cu-Ni nanoparticles in hydrogenated amorphous carbon (Cu-Ni NPs - a-C:H) with different surface morphologies. Ni NPs with layer thicknesses of 5, 10 and 15 nm over Cu NPs - a-C:H were prepared by co-deposition of RF-sputtering and RF Plasma Enhanced Chemical Vapor Deposition (RF-PECVD) from acetylene gas and Cu and Ni targets. A nonmetal-metal transition was observed as the thickness of the Ni overlayer increases. The surface morphology of the samples is described by a two-dimensional (2D) Gaussian self-affine fractal, except for the sample with a 10 nm Ni overlayer, which is in the nonmetal-metal transition region. X-ray diffraction profiles indicate that Cu NPs and Ni NPs with fcc crystalline structure are formed in these films. The Localized Surface Plasmon Resonance (LSPR) peak of the Cu NPs is observed around 600 nm in the visible spectra; it widens and shifts to lower wavelengths as the thickness of the Ni overlayer increases. The variation of the LSPR peak width correlates with the conductivity variation of these bilayers. We assign both effects to surface electron delocalization of the Cu NPs.

  15. A Low-Complexity Algorithm for Static Background Estimation from Cluttered Image Sequences in Surveillance Contexts

    Directory of Open Access Journals (Sweden)

    Reddy Vikas

    2011-01-01

    Full Text Available Abstract For the purposes of foreground estimation, the true background model is unavailable in many practical circumstances and needs to be estimated from cluttered image sequences. We propose a sequential technique for static background estimation in such conditions, with low computational and memory requirements. Image sequences are analysed on a block-by-block basis. For each block location a representative set is maintained which contains distinct blocks obtained along its temporal line. The background estimation is carried out in a Markov Random Field framework, where the optimal labelling solution is computed using iterated conditional modes. The clique potentials are computed based on the combined frequency response of the candidate block and its neighbourhood. It is assumed that the most appropriate block results in the smoothest response, indirectly enforcing the spatial continuity of structures within a scene. Experiments on real-life surveillance videos demonstrate that the proposed method obtains considerably better background estimates (both qualitatively and quantitatively) than median filtering and the recently proposed "intervals of stable intensity" method. Further experiments on the Wallflower dataset suggest that combining the proposed method with a foreground segmentation algorithm results in improved foreground segmentation.
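
    A much-simplified sketch of the representative-set idea described above, in which each block location keeps the distinct blocks seen over time and the most frequently recurring one is taken as the background estimate; the MRF/ICM smoothness optimisation and frequency-response potentials are omitted, and the block size, tolerance, and selection rule are illustrative assumptions.

```python
import numpy as np

def estimate_background(frames, block=16, tol=10.0):
    """frames: (t, h, w) grayscale stack with h, w divisible by `block`."""
    t, h, w = frames.shape
    bg = np.empty((h, w), dtype=frames.dtype)
    for y in range(0, h, block):
        for x in range(0, w, block):
            stack = frames[:, y:y+block, x:x+block].reshape(t, -1).astype(float)
            reps, counts = [stack[0]], [1]      # representative set
            for b in stack[1:]:
                d = [np.abs(b - r).mean() for r in reps]
                j = int(np.argmin(d))
                if d[j] < tol:
                    # Merge into the closest representative (running mean).
                    reps[j] = (reps[j] * counts[j] + b) / (counts[j] + 1)
                    counts[j] += 1
                else:
                    reps.append(b); counts.append(1)
            # Pick the most frequently seen block as the background.
            best = reps[int(np.argmax(counts))]
            bg[y:y+block, x:x+block] = best.reshape(block, block)
    return bg
```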

  16. Assessment of AlSi21CuNi Alloy’s Quality with Use of ATND Method

    Directory of Open Access Journals (Sweden)

    Pezda J.

    2013-12-01

    Full Text Available The majority of combustion engines are produced (poured) from Al-Si alloys with a low thermal expansion coefficient, the so-called piston silumins. Hypereutectic alloys normally contain coarse, primary angular Si particles together with the eutectic Si phase. The structure and mechanical properties of these alloys are highly dependent upon cooling rate, composition, modification and heat-treatment operations. This paper describes the use of the ATND method (thermal-voltage-derivative analysis) and regression analysis to assess the quality of the AlSi21CuNi alloy modified with Cu-P at the stage of its preparation, in terms of the obtained mechanical properties (R0.02, Rm, A5, HB). The obtained relationships enable prediction of the mechanical properties of the investigated alloy under laboratory conditions, using the values of characteristic points from the curves of the ATND method.

  17. Background subtraction theory and practice

    CERN Document Server

    Elgammal, Ahmed

    2014-01-01

    Background subtraction is a widely used concept for the detection of moving objects in videos. In the last two decades there has been a lot of development in designing algorithms for background subtraction, as well as wide use of these algorithms in various important applications, such as visual surveillance, sports video analysis, motion capture, etc. Various statistical approaches have been proposed to model scene backgrounds. The concept of background subtraction has also been extended to detect objects from videos captured by moving cameras. This book reviews the concept and practice of background subtraction.

  18. Influence of preparation method on supported Cu-Ni alloys and their catalytic properties in high pressure CO hydrogenation

    DEFF Research Database (Denmark)

    Wu, Qiongxiao; Eriksen, Winnie L.; Duchstein, Linus Daniel Leonhard

    2014-01-01

    (50 bar CO and 50 bar H2). These alloy catalysts are highly selective (more than 99 mol%) and active for methanol synthesis; however, loss of Ni caused by nickel carbonyl formation is found to be a serious issue. The Ni carbonyl formation should be considered, if Ni-containing catalysts (even...... high surface area silica supported catalysts (BET surface area up to 322 m2 g-1, and metal area calculated from X-ray diffraction particle size up to 29 m2 g-1). The formation of bimetallic Cu-Ni alloy nanoparticles has been studied during reduction using in situ X-ray diffraction. Compared...

  19. Real-Time Adaptive Foreground/Background Segmentation

    Directory of Open Access Journals (Sweden)

    Sridha Sridharan

    2005-08-01

    Full Text Available The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing 320×240 PAL video at full frame rate using only 35%–40% of a 1.8 GHz Pentium 4 computer.

  20. Gas leak detection in infrared video with background modeling

    Science.gov (United States)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas leak detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years. However, the processing speed of the ViBe algorithm sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional ViBe algorithm, we propose a fast foreground model and optimize the results by combining the connected domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
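
    A compact numpy sketch of the core ViBe classification and stochastic update rules that the paper builds on; the paper's fast foreground model and post-processing are not reproduced, the neighbour-propagation step of full ViBe is omitted for brevity, and the parameter values follow common defaults rather than the paper.

```python
import numpy as np

class ViBe:
    def __init__(self, first_frame, n_samples=20, radius=20, n_min=2, phi=16):
        h, w = first_frame.shape
        self.radius, self.n_min, self.phi = radius, n_min, phi
        # Initialise every sample from the first frame plus small noise.
        noise = np.random.randint(-10, 11, (n_samples, h, w))
        self.samples = np.clip(first_frame[None].astype(int) + noise, 0, 255)

    def apply(self, frame):
        """frame: (h, w) grayscale image; returns a boolean foreground mask."""
        # Background if enough samples lie within `radius` of the pixel.
        diff = np.abs(self.samples - frame[None].astype(int))
        bg = (diff < self.radius).sum(axis=0) >= self.n_min

        # Conservative, stochastic update: each background pixel replaces
        # one sample with probability 1/phi (one shared sample index here,
        # a simplification of per-pixel random replacement).
        update = bg & (np.random.randint(0, self.phi, frame.shape) == 0)
        idx = np.random.randint(0, self.samples.shape[0])
        self.samples[idx][update] = frame[update]
        return ~bg
```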

  1. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  2. Instructional Modules for Training Special Education Teachers: A Final Report on the Development and Field Testing of the CUNY-CBTEP Special Education Modules. Case 30-76. Toward Competence Instructional Materials for Teacher Education.

    Science.gov (United States)

    City Univ. of New York, NY. Center for Advanced Study in Education.

    The City University of New York Competency Based Teacher Education Project (CUNY-CBTEP) in Special Education studied Modularization, focusing on the variables in the instructional setting that facilitate learning from modular materials for a wide range of students. Four of the five modules for the training of special education teachers developed…

  3. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  4. The activation energy for loop growth in Cu and Cu-Ni alloys

    International Nuclear Information System (INIS)

    Barlow, P.; Leffers, T.; Singh, B.N.

    1978-08-01

    The apparent activation energy for the growth of interstitial dislocation loops in copper, Cu-1%Ni, Cu-2%Ni, and Cu-5%Ni during high voltage electron microscope irradiation was determined. The apparent activation energy for loop growth in all these materials can be taken to be 0.34 ± 0.02 eV. This value, together with the corresponding value of 0.44 ± 0.02 eV determined earlier for Cu-10%Ni, is discussed with reference to the void growth rates observed in these materials. The apparent activation energy for loop growth in copper (and in Cu-1%Ni, which has a void growth rate similar to that in pure copper) is interpreted as twice the vacancy migration energy (indicating that divacancies do not play any significant role). For the materials with higher Ni content (in which the void growth rate is much lower than that in Cu and Cu-1%Ni) the measured apparent activation energy is interpreted to be characteristic of loops positioned fairly close to the foil surface and not of loops in ''bulk material''. From the present results in combination with the earlier results for Cu-10%Ni it is concluded that interstitial trapping is the most likely explanation of the reduced void growth rate in Cu-Ni alloys. (author)

  5. Polycrystalline oxides formation during transient oxidation of (001) Cu-Ni binary alloys studied by in situ TEM and XRD

    International Nuclear Information System (INIS)

    Yang, J.C.; Li, Z.Q.; Sun, L.; Zhou, G.W.; Eastman, J.A.; Fong, D.D.; Fuoss, P.H.; Baldo, P.M.; Rehn, L.E.; Thompson, L.J.

    2009-01-01

    The nucleation and growth of Cu2O and NiO islands due to oxidation of CuxNi1-x(001) films were monitored, at various temperatures, by in situ ultra-high vacuum (UHV) transmission electron microscopy (TEM) and in situ synchrotron X-ray diffraction (XRD). In remarkable contrast to our previous observations of Cu and Cu-Au oxidation, irregular-shaped polycrystalline oxide islands formed with respect to the Cu-Ni alloy film, and an unusual second oxide nucleation stage was noted. In situ XRD experiments revealed that NiO formed first epitaxially, then other orientations appeared, and finally polycrystalline Cu2O developed as the oxidation pressure was increased. The segregation of Ni towards and Cu away from the alloy surface during oxidation could disrupt the surface and cause polycrystalline oxide formation.

  6. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    Science.gov (United States)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and read out by Anger logic. The interaction position of the gamma ray must be assigned to a crystal using a crystal position map or look-up table, so crystal identification is a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO based animal PET system via the Lu-176 background radiation and the mean shift algorithm. Single photon event data of the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then deducted from the SPFM by subtracting the CFM, after which the peaks of the outer layer are identified, again using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method was verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for dual-layer-offset lutetium-based PET systems, allowing crystal identification to be performed without external radiation sources.
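
    A minimal sketch of mean-shift peak finding on flood-map event positions, analogous to the peak-identification step described above; the `events` array and the bandwidth value are illustrative assumptions (the bandwidth would in practice be tied to the crystal pitch).

```python
import numpy as np
from sklearn.cluster import MeanShift

def find_crystal_peaks(events, bandwidth=3.0):
    """events: (n, 2) array of Anger-logic event positions from the flood
    map. Returns one (x, y) peak per identified crystal."""
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    ms.fit(events)
    return ms.cluster_centers_

# Events can then be assigned to crystals with a nearest-centre (distance
# criterion) look-up table built from the returned peaks, as in the paper.
```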

  7. Background modelling of diffraction data in the presence of ice rings

    Directory of Open Access Journals (Sweden)

    James M. Parkhurst

    2017-09-01

    Full Text Available An algorithm for modelling the background for each Bragg reflection in a series of X-ray diffraction images containing Debye-Scherrer diffraction from ice in the sample is presented. The method involves the use of a global background model which is generated from the complete X-ray diffraction data set. Fitting of this model to the background pixels is then performed for each reflection independently. The algorithm uses a static background model that does not vary over the course of the scan. The greatest improvement can be expected for data where ice rings are present throughout the data set and the local background shape at the size of a spot on the detector does not exhibit large time-dependent variation. However, the algorithm has been applied to data sets whose background showed large pixel variations (variance/mean > 2) and has been shown to improve the results of processing for these data sets. It is shown that the use of a simple flat-background model, as in traditional integration programs, causes systematic bias in the background determination at ice-ring resolutions, resulting in an overestimation of reflection intensities at the peaks of the ice rings and an underestimation of reflection intensities on either side of the ice ring. The new global background-model algorithm presented here corrects for this bias, resulting in a noticeable improvement in R factors following refinement.

  8. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A time-sharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum, and the interactive job quantum has variable length; a recurrent formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when higher-priority jobs are inactive, and its quality function is built from statistical data gathered during time-sharing operation. The algorithm includes an optimal procedure for swapping jobs out of memory when they are replaced. System time is shared in proportion to the external priorities for all sufficiently active computing channels (background included), and a fast response is guaranteed for interactive jobs that use little time and memory. Control of the external priorities is left to the high-level scheduler. Experience with the implementation of the algorithm on the BESM-6 computer at JINR is discussed.

  9. A multilevel system of algorithms for detecting and isolating signals in a background of noise

    Science.gov (United States)

    Gurin, L. S.; Tsoy, K. A.

    1978-01-01

    Signal information is first processed with a set of coarse algorithms; on the basis of that processing, part of the information is passed on for further processing by more precise algorithms. Such a multilevel system of algorithms is studied: a comparative evaluation of a series of lower-level algorithms is given, and the corresponding higher-level algorithms are characterized.

  10. A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) for extraction of drug metabolites in liquid chromatography/mass spectrometry data from biological matrices.

    Science.gov (United States)

    Zhu, Peijuan; Ding, Wei; Tong, Wei; Ghosal, Anima; Alton, Kevin; Chowdhury, Swapan

    2009-06-01

    A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) is implemented in the statistical programming language R to remove non-drug-related ion signals from accurate-mass liquid chromatography/mass spectrometry (LC/MS) data. The background-subtraction part of the algorithm is similar to a previously published procedure (Zhang H and Yang Y. J. Mass Spectrom. 2008, 43: 1181-1190). The noise reduction algorithm (NoRA) is an add-on feature that helps further clean up residual matrix-ion noise after background subtraction; it works by removing ion signals that are not consistent across many adjacent scans. The effectiveness of BgS-NoRA was examined in biological matrices by spiking blank plasma extract, bile and urine with diclofenac and ibuprofen that had been pre-metabolized by microsomal incubation. Efficient removal of background ions permitted the detection of drug-related ions, with minimal interference, in in vivo samples (plasma, bile, urine and feces) obtained from rats orally dosed with ¹⁴C-loratadine. Results from these experiments demonstrate that BgS-NoRA is more effective in removing analyte-unrelated ions than background subtraction alone. NoRA is shown to be particularly effective in the early retention region for urine samples and the middle retention region for bile samples, where matrix ion signals still dominate the total ion chromatograms (TICs) after background subtraction. In most cases, the TICs after BgS-NoRA correlate qualitatively very well with the radiochromatograms. BgS-NoRA will be a very useful tool in metabolite detection and identification work, especially in first-in-human (FIH) studies and multiple-dose toxicology studies where non-radiolabeled drugs are administered. Data from these types of studies are critical to meet the latest FDA guidance on Metabolites in Safety Testing (MIST). Copyright (c) 2009 John Wiley & Sons, Ltd.
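
    The scan-persistence idea behind NoRA can be sketched in a few lines of NumPy (R is the paper's language; this is an illustrative translation, and the window length and noise floor are assumptions, not the published parameters).

      import numpy as np

      def nora_filter(intensity, min_scans=5, noise_floor=100.0):
          """Zero out ion signals that are not present in at least
          `min_scans` consecutive scans. `intensity` is a
          (n_scans, n_mz_bins) array of binned ion intensities."""
          present = intensity > noise_floor
          keep = np.zeros_like(present)
          # A bin survives if it is "on" in every scan of at least one
          # window of min_scans consecutive scans that covers it.
          for start in range(intensity.shape[0] - min_scans + 1):
              window_ok = present[start:start + min_scans].all(axis=0)
              keep[start:start + min_scans] |= window_ok
          return np.where(keep, intensity, 0.0)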

  11. Weakly supervised semantic segmentation using fore-background priors

    Science.gov (United States)

    Han, Zheng; Xiao, Zhitao; Yu, Mingjun

    2017-07-01

    Weakly-supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set, and thereby need to construct a relationship graph over the image labels, which results in expensive computation. In this study, we tackle the problem from a different perspective and propose a novel semantic segmentation algorithm based on background priors that avoids building a huge graph over the whole training dataset. Specifically, a random forest classifier is trained on the weakly supervised training data, and semantic texton forest (STF) features are extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed whose unary potential is derived from the output probability of the random forest classifier and from a robust saliency map serving as the background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms several previously influential weakly-supervised segmentation algorithms. Furthermore, the use of an efficient decision-forest classifier and parallel computation of the saliency map significantly accelerate the implementation.

  12. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering; the introduction of constraints based on background features and contiguity priors is expected to improve robustness with respect to clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to a prior analysis of a lesion-free background volume of interest (background modeling); expectation maximization therefore operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data, and feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of the accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations, and VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was also confirmed.

  13. Automatic development of normal zone in composite MgB2/CuNi wires with different diameters

    Science.gov (United States)

    Jokinen, A.; Kajikawa, K.; Takahashi, M.; Okada, M.

    2010-06-01

    One promising application of superconducting technology for hydrogen utilization is a magnesium-diboride (MgB2) sensor that detects the position of the boundary between liquid hydrogen and the evaporated gas stored in a Dewar vessel. In our previous experiments with the level sensor, the normal zone developed automatically, so no heater energy input was required for normal operation. Although the physical mechanism behind this property of the MgB2 wire has not yet been clarified, its deliberate use might lead to a simpler superconducting level sensor without a heater system. In the present study, the automatic development of the normal zone with increasing transport current is evaluated for samples consisting of three kinds of MgB2 wires with CuNi sheaths and different diameters, immersed in liquid helium, and the influence of repeated current excitation and heat cycling on normal-zone development is discussed experimentally. The aim of this paper is to confirm the suitability of MgB2 wire for a heater-free level-sensor application, which could lead to an even more optimized design of the liquid-hydrogen level sensor and the removal of the extra heater input.

  14. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for the rail vibroacoustic waves produced by an approaching train against a background of increased noise. The need for such an algorithm is justified by the increased rail noise that occurs where railway lines run close to roads or road intersections. The algorithm is based on a method for detecting weak signals in a noisy environment, and the final expression for the information statistic is adjusted accordingly. The results of the algorithm's study and testing are presented...

  15. Combinatorial optimization: algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms; local search heuristics for NP-complete problems; and more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  16. The Optical Properties of Cu-Ni Nanoparticles Produced via Pulsed Laser Dewetting of Ultrathin Films: The Effect of Nanoparticle Size and Composition on the Plasmon Response

    International Nuclear Information System (INIS)

    Wu, Yeuyeng; Fowlkes, Jason Davidson; Rack, Philip D.

    2011-01-01

    Thin-film Cu-Ni alloys ranging from 2 to 8 nm in thickness were synthesized, and their optical properties were measured as-deposited and after a laser treatment that dewetted the films into arrays of spatially correlated nanoparticles. The resultant nanoparticle size and spacing are attributed to a laser-induced spinodal dewetting process, whose evolution is investigated as a function of the thin-film composition; the composition ultimately dictates the size distribution and spacing of the nanoparticles. Optical measurements of the copper-rich alloy nanoparticles reveal a signature absorption peak, suggestive of a plasmonic resonance, which red-shifts with increasing nanoparticle size and blue-shifts and dampens with increasing nickel concentration.

  17. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, which considerably facilitates the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  18. Holistic approach for automated background EEG assessment in asphyxiated full-term infants

    Science.gov (United States)

    Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten

    2014-12-01

    Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature, and a clinical neurophysiologist labeled 272 one-hour cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into a segment feature space. Three features are suggested, and further processing operates on a discretized three-dimensional distribution of the segment features represented as a 3-way data tensor; classification is then achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved, resulting in high classification accuracy (89%) for grading background EEG abnormalities. Significance. For the first time, an algorithm for background EEG assessment has been validated on an extensive dataset containing major artifacts and epileptic seizures. The demonstrated robustness on real-case EEGs suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.

  19. Using background knowledge for picture organization and retrieval

    Science.gov (United States)

    Quintana, Yuri

    1997-01-01

    A picture knowledge base management system is described that is used to represent, organize and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain descriptions of pictures from news magazines, and these descriptions were used to represent the semantic content of the pictures in frame representations. A conceptual clustering algorithm is described that organizes pictures not only by their observable features, but also by implicit properties derived from the frame representations; the algorithm uses inheritance reasoning to take background knowledge into account during clustering. It creates clusters of pictures using a group similarity function based on the gestalt theory of picture perception, and for each cluster created, a frame is generated that describes the semantic content of the pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge, and the paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of the queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.

  20. Numerical method for IR background and clutter simulation

    Science.gov (United States)

    Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio

    1997-06-01

    The paper describes a fast and accurate algorithm for generating IR background noise and clutter for use in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of this model and also excellent fidelity to reality, as a comparison with images from IR sensors shows. The proposed method has advantages over methods based on filtering white noise in the time or frequency domain, as it requires a limited number of computations, and it is also more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and the process is extended by means of growing rules to the whole scene at the required dimension and resolution; the statistical properties of the model are properly maintained throughout the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
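
    The reticule-growing procedure itself is not reproduced here, but the statistical target (Gaussian amplitudes with an exponential-form spatial correlation) can be approximated in a few lines by convolving white noise with an exponential kernel; the correlation length and kernel truncation below are illustrative assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def ir_background(shape=(256, 256), corr_length=12.0, seed=0):
          # White Gaussian noise as the starting field.
          rng = np.random.default_rng(seed)
          white = rng.standard_normal(shape)
          # Exponential smoothing kernel exp(-r / corr_length),
          # truncated at 4 correlation lengths.
          half = int(4 * corr_length)
          yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
          kernel = np.exp(-np.hypot(yy, xx) / corr_length)
          kernel /= np.linalg.norm(kernel)  # unit output variance
          # Convolution imprints an approximately exponential-tailed
          # spatial correlation on the Gaussian field.
          return fftconvolve(white, kernel, mode="same")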

  1. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  2. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  3. Machine Learning: an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups: those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge, and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Backing up theory with practical examples, the book covers neural networks, graphical models, reinforcement learning, and more.

  4. Generative electronic background music system

    Energy Technology Data Exchange (ETDEWEB)

    Mazurowski, Lukasz [Faculty of Computer Science, West Pomeranian University of Technology in Szczecin, Zolnierska Street 49, Szczecin, PL (Poland)

    2015-03-10

    In this short paper (an extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is positioned among related approaches within the musical-algorithm framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described in the paper: the mini-models generate fragments of the musical patterns used in the output composition, while musical-pattern and output generation are controlled by a host-model, the container for the mini-models. The general mechanism is presented, together with an example of the synthesized output compositions.

  5. An algorithm for determination of peak regions and baseline elimination in spectroscopic data

    International Nuclear Information System (INIS)

    Morhac, Miroslav

    2009-01-01

    In this paper we propose a new algorithm for determining the regions that contain peaks and separating them from peak-free regions. Building on this algorithm, we then propose a new background elimination algorithm that estimates the background beneath the peaks more accurately than previously known algorithms. It is based on a clipping operation whose window adjusts automatically to the widths of the identified peak regions. The illustrative examples presented in the paper argue in favor of the proposed algorithms.
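
    A fixed-window version of the underlying clipping operation can be sketched as follows (the paper's contribution is adapting the window to the identified peak regions, which is not reproduced here; window size and iteration count are illustrative).

      import numpy as np

      def clip_background(spectrum, window=20, iterations=30):
          """Estimate a smooth background under the peaks by repeatedly
          replacing each channel with the mean of its two neighbours
          `window` channels away, whenever that mean lies below the
          current value."""
          bg = spectrum.astype(float).copy()
          idx = np.arange(bg.size)
          left = np.clip(idx - window, 0, bg.size - 1)
          right = np.clip(idx + window, 0, bg.size - 1)
          for _ in range(iterations):
              bg = np.minimum(bg, 0.5 * (bg[left] + bg[right]))
          return bg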

  6. Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations

    International Nuclear Information System (INIS)

    Liu, Qian; Wang, Yang; He, Jianguo; Ji, Fang

    2015-01-01

    Fluctuations of the background and contrast cause measurement errors in the phase-shifting technique. To extract the phase shifts from interferograms with background and contrast fluctuations, an iterative algorithm is presented in which the phase shifts and the wavefront phase are calculated in two separate steps with the least-squares method: the fluctuation factors are determined when the phase shifts are calculated, and the fluctuations are compensated when the wavefront phase is calculated. The advantage of the algorithm lies in its ability to extract phase shifts from such interferograms while converging stably and rapidly. Simulations and experiments verify the effectiveness and reliability of the proposed algorithm: the simulation results demonstrate the convergence accuracy and speed, and the experimental results show its ability to suppress phase-retrieval errors. (paper)

  7. Low Background Micromegas in CAST

    CERN Document Server

    Garza, J.G.; Aznar, F.; Calvet, D.; Castel, J.F.; Christensen, F.E.; Dafni, T.; Davenport, M.; Decker, T.; Ferrer-Ribas, E.; Galán, J.; García, J.A.; Giomataris, I.; Hill, R.M.; Iguaz, F.J.; Irastorza, I.G.; Jakobsen, A.C.; Jourde, D.; Mirallas, H.; Ortega, I.; Papaevangelou, T.; Pivovaroff, M.J.; Ruz, J.; Tomás, A.; Vafeiadis, T.; Vogel, J.K.

    2015-11-16

    Solar axions could be converted into x-rays inside the strong magnetic field of an axion helioscope, triggering the detection of this elusive particle. Low background x-ray detectors are an essential component for the sensitivity of these searches. We report on the latest developments of the Micromegas detectors for the CERN Axion Solar Telescope (CAST), including technological pathfinder activities for the future International Axion Observatory (IAXO). The use of low background techniques and the application of discrimination algorithms based on the high granularity of the readout have led to background levels below 10⁻⁶ counts/keV/cm²/s, more than a factor 100 lower than the first generation of Micromegas detectors. The best levels achieved at the Canfranc Underground Laboratory (LSC) are as low as 10⁻⁷ counts/keV/cm²/s, showing good prospects for the application of this technology in IAXO. The current background model, based on underground and surface measurements, is presented, as well as ...

  8. X-ray diffraction study of chalcopyrite CuFeS2, pentlandite (Fe,Ni)9S8 and pyrrhotite Fe1-xS obtained from Cu-Ni orebodies

    International Nuclear Information System (INIS)

    Nkoma, J.S.; Ekosse, G.

    1998-05-01

    The X-ray diffraction (XRD) technique is applied to five samples of Cu-Ni orebodies, and it is shown that they contain chalcopyrite CuFeS2 as the source of Cu, pentlandite (Fe,Ni)9S8 as the source of Ni, and pyrrhotite Fe1-xS as the dominant compound. Other, less dominant compounds are also present, such as bunsenite NiO, chalcocite Cu2S, penroseite (Ni,Cu)Se2 and magnetite Fe3O4. From the XRD data, the lattice parameters are obtained as a=b=5.3069 Å and c=10.3836 Å for tetragonal chalcopyrite, a=b=c=10.0487 Å for cubic pentlandite, and a=b=6.8820 Å and c=22.8037 Å for hexagonal pyrrhotite. (author)

  9. Background elimination methods for multidimensional coincidence γ-ray spectra

    International Nuclear Information System (INIS)

    Morhac, M.

    1997-01-01

    In this paper, new methods are derived for separating useful information from background in one-, two-, three- and multidimensional spectra (histograms) measured in large multidetector γ-ray arrays. A sensitive nonlinear peak-clipping algorithm forms the basis of the methods for estimating the background in multidimensional spectra. The derived procedures are simple and therefore have a very low cost in terms of computing time. (orig.)

  10. Optical diffraction tomography in an inhomogeneous background medium

    International Nuclear Information System (INIS)

    Devaney, A; Cheng, J

    2008-01-01

    The filtered back-propagation (FBP) algorithm is a computationally fast and efficient inversion algorithm for reconstructing the 3D index-of-refraction distribution of weakly scattering samples in free space from scattered-field data collected in a set of coherent optical scattering experiments. This algorithm is readily derived using classical Fourier analysis applied to the Born or Rytov weak-scattering models appropriate to scatterers embedded in a non-attenuating uniform background. In this paper, the inverse scattering problem of optical diffraction tomography (ODT) is formulated using the so-called distorted-wave Born and Rytov approximations, and a generalized version of the FBP algorithm is derived that applies to weakly scattering samples embedded in realistic, multiple-scattering ODT experimental configurations. The new algorithms are based on the generalized linear inverse of the linear transformation relating the scattered-field data to the complex index-of-refraction distribution of the sample, and take the form of a superposition of filtered data computationally back-propagated into the ODT experimental configuration. The paper includes a computer simulation comparing the generalized Born- and Rytov-based FBP inversion algorithms, as well as reconstructions of a step-index optical fiber generated from experimental ODT data using the generalized Born-based FBP algorithm.

  11. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    International Nuclear Information System (INIS)

    Chen, Xudong

    2010-01-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging

  12. Background Noise Removal in Ultrasonic B-scan Images Using Iterative Statistical Techniques

    NARCIS (Netherlands)

    Wells, I.; Charlton, P. C.; Mosey, S.; Donne, K. E.

    2008-01-01

    The interpretation of ultrasonic B-scan images can be a time-consuming process, and its success depends on operator skill and experience. Removal of the image background will potentially improve image quality and hence operator diagnosis. An automatic background noise removal algorithm is presented.

  13. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Because the pixel information of a depth image is derived from distance information, implementing the SURF algorithm with a Kinect sensor for static sign-language recognition can produce mismatched pairs in the palm area. This paper proposes a feature-point selection algorithm that filters the SURF feature points step by step, based on the number of feature points within an adaptive radius r and on the distance between two points. It not only greatly improves the recognition rate, but also ensures robustness to environmental factors such as skin color, illumination intensity, complex background, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  14. Adaptive cancellation of geomagnetic background noise for magnetic anomaly detection using coherence

    International Nuclear Information System (INIS)

    Liu, Dunge; Xu, Xin; Huang, Chao; Zhu, Wanhua; Liu, Xiaojun; Fang, Guangyou; Yu, Gang

    2015-01-01

    Magnetic anomaly detection (MAD) is an effective method for detecting ferromagnetic targets against background magnetic fields, but the performance of current MAD systems is mainly limited by background geomagnetic noise. Several techniques have been developed to detect target signatures, such as the synchronous reference subtraction (SRS) method. In this paper, we propose an adaptive coherent noise suppression (ACNS) method capable of evaluating and detecting weak anomaly signals buried in background geomagnetic noise. Tests with real-world recorded magnetic signals show that the ACNS method removes background geomagnetic noise by about 21 dB or more in high-background-field environments. Additionally, as a general form of the SRS method, the ACNS method offers appreciable advantages over existing algorithms: compared to SRS, it eliminates false target signals and improves the noise-suppression capability by 6.4 dB. These positive outcomes in terms of intelligibility make the method a potential candidate for application in MAD systems. (paper)
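
    The ACNS update rule is not spelled out in this summary; as a baseline for comparison, the sketch below implements the classic adaptive reference-cancellation scheme that SRS-style methods build on, written as an LMS filter driven by a reference magnetometer. The tap count and step size are illustrative assumptions.

      import numpy as np

      def lms_cancel(primary, reference, taps=32, mu=1e-3):
          """Adaptively subtract the background seen by a reference
          sensor from the primary sensor; the residual is the anomaly
          estimate. `mu` must be small relative to the reference
          signal power for the LMS recursion to stay stable."""
          primary = np.asarray(primary, dtype=float)
          reference = np.asarray(reference, dtype=float)
          w = np.zeros(taps)
          residual = np.zeros(len(primary))
          for n in range(taps, len(primary)):
              x = reference[n - taps:n][::-1]   # most recent sample first
              e = primary[n] - w @ x            # cancellation residual
              w += 2 * mu * e * x               # LMS weight update
              residual[n] = e
          return residual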

  15. An improved algorithm for calculating cloud radiation

    International Nuclear Information System (INIS)

    Yuan Guibin; Sun Xiaogang; Dai Jingmin

    2005-01-01

    Cloud radiation characteristics are very important in cloud-scene simulation, weather forecasting, pattern recognition, and other fields; in order to detect missiles against cloud backgrounds and to enhance the fidelity of simulations, it is critical to understand a cloud's thermal-radiation model. Firstly, the definition of cloud-layer infrared emittance is given. Secondly, the conditions for judging whether a pixel of a satellite focal plane is viewed in daytime or at night are shown and the corresponding equations are given; radiance terms such as reflected solar radiance, solar scattering, diffuse solar radiance, solar and thermal skyshine, solar and thermal path radiance, cloud blackbody radiance, and background radiance are taken into account. Thirdly, the methods for computing the background radiance in daytime and at night are given. Through simulations and comparison, the algorithm is shown to be an effective way of calculating cloud radiation.

  16. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose: Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on our own previously published results. Methods: 2,000 consecutive patients over 50... years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient... by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation: It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised...

  17. Test of TEDA, Tsunami Early Detection Algorithm

    Science.gov (United States)

    Bressan, Lidia; Tinti, Stefano

    2010-05-01

    Tsunami detection in real time, both offshore and at the coastline, plays a key role in tsunami warning systems, since it provides so far the only reliable and timely proof of tsunami generation and is used to confirm or cancel tsunami warnings previously issued on the basis of seismic data alone. Moreover, in the case of tsunamis generated by submarine or coastal landslides, which are not announced by clear seismic signals and are typically local, real-time detection at the coastline might be the fastest way to release a warning, even if the useful time for emergency operations might be limited. TEDA is an algorithm for real-time detection of the tsunami signal in sea-level records, developed by the Tsunami Research Team of the University of Bologna; its development and testing were accomplished within the framework of the Italian national project DPC-INGV S3 and the European project TRANSFER. The algorithm is to be implemented at station level and is therefore based only on the sea-level data of a single station, either a coastal tide-gauge or an offshore buoy. TEDA's principle is to discriminate the first tsunami wave from the preceding background signal, under the assumption that the tsunami waves introduce a change in the previous sea-level signal. Therefore, in TEDA the instantaneous (most recent) and the previous background sea-level elevation gradients are characterized and compared by two functions (IS and BS) that are updated at every new data acquisition, and detection is triggered when the instantaneous signal function passes a set threshold while at the same time being significantly bigger than the previous background signal. The functions IS and BS depend on temporal parameters that allow the algorithm to be adapted to different situations: in general, coastal tide-gauges have a typical background spectrum that depends on the location where the instrument is installed, owing to local topography and bathymetry, while offshore buoys are
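
    The exact IS and BS functions are not given in this summary; under a schematic reading (a short recent window of absolute sea-level gradients compared against a longer preceding background window), a detector of this kind might look as follows, with every window length and threshold an illustrative assumption.

      import numpy as np

      def teda_like_trigger(sea_level, dt=60.0, short=5, long_=120,
                            thr=0.02, ratio=3.0):
          """Return sample indices where the recent gradient exceeds an
          absolute threshold and dominates the preceding background."""
          grad = np.abs(np.gradient(sea_level, dt))
          hits = []
          for n in range(long_ + short, len(grad)):
              inst = grad[n - short:n].mean()                  # IS analogue
              back = grad[n - long_ - short:n - short].mean()  # BS analogue
              if inst > thr and inst > ratio * max(back, 1e-12):
                  hits.append(n)
          return hits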

  18. Texture orientation-based algorithm for detecting infrared maritime targets.

    Science.gov (United States)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target-searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds or sea fog usually has an intensity high enough to overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI at the first scale; then self-adaptive wavelet-threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  1. Chemical Source Localization Fusing Concentration Information in the Presence of Chemical Background Noise.

    Science.gov (United States)

    Pomareda, Víctor; Magrans, Rudys; Jiménez-Soto, Juan M; Martínez, Dani; Tresánchez, Marcel; Burgués, Javier; Palacín, Jordi; Marco, Santiago

    2017-04-20

    We present the estimation of a likelihood map for the location of the source of a chemical plume dispersed under atmospheric turbulence and uniform wind conditions. The main contribution of this work is to extend previous proposals, based on Bayesian inference with binary detections, to the use of concentration information while at the same time remaining robust against the presence of background chemical noise. To that end, the algorithm builds a background model with robust statistical measurements in order to assess the posterior probability that a given chemical concentration reading comes from the background or from a source emitting at a certain distance with a specific release rate. In addition, the algorithm allows multiple mobile gas sensors to be used. Ten realistic simulations and ten real-data experiments are used for evaluation. For the simulations, we have supposed that the sensors are mounted on cars whose main tasks do not include navigating toward the source. To collect the real dataset, a special arena with induced wind was built, and an autonomous vehicle equipped with several sensors, including a photoionization detector (PID) for sensing chemical concentration, was used. Simulation results show that our algorithm provides a better estimate of the source location even for low background levels, which favor the performance of the binary version. The improvement is clear for the synthetic data, while for the real data the estimate is only slightly better, probably because our exploration arena cannot provide truly uniform wind conditions. Finally, an estimate of the computational cost of the algorithmic proposal is presented.

  2. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background, and an application to images of heterogeneous catalysts is presented.

  3. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    International Nuclear Information System (INIS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-01-01

    Raman spectroscopy is a powerful and non-invasive technique for molecular fingerprint detection that has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background, so in this paper we present a baseline correction algorithm to suppress it. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use a B-spline as the fitting function, whose low order and smoothness effectively avoid both under-fitting and over-fitting. In addition, we present an automatic adaptive knot-generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines, without any user input or preprocessing step. In simulations, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method, and we show that two real Raman spectra (parathion-methyl and colza oil) can be detected, with their baselines corrected, by the proposed method. (paper)
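
    The adaptive knot generation is the paper's contribution and is not reproduced here, but the core cyclic-approximation loop (fit a B-spline, clip the spectrum to the fit, repeat) can be sketched with SciPy using uniform interior knots; the knot count and iteration count are illustrative assumptions.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      def spline_baseline(x, y, n_knots=10, iterations=20):
          """Iteratively fit a cubic B-spline and clip the spectrum to
          the fit, so peaks are progressively excluded and the fit
          settles onto the fluorescent baseline."""
          knots = np.linspace(x[1], x[-2], n_knots)  # interior knots
          work = y.astype(float).copy()
          for _ in range(iterations):
              fit = LSQUnivariateSpline(x, work, knots, k=3)(x)
              work = np.minimum(work, fit)  # clip above the baseline
          return fit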

  4. Adaptive sensor fusion using genetic algorithms

    International Nuclear Information System (INIS)

    Fitzgerald, D.S.; Adams, D.G.

    1994-01-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive "fuzzy" sensor fusion technique is described in this paper. This technique exploits the robust decision-making capabilities of fuzzy logic as well as the optimization features of the genetic algorithm. The paper presents a brief background on fuzzy logic and genetic algorithms and shows how they are used in an online implementation of adaptive sensor fusion.

  5. Whitening of Background Brain Activity via Parametric Modeling

    Directory of Open Access Journals (Sweden)

    Nidal Kamel

    2007-01-01

    Several signal-subspace techniques have recently been suggested for extracting visual evoked potential signals from colored brain background noise. The majority of these techniques assume the background noise to be white; for colored noise, it is suggested that the noise be whitened, without further elaboration on how this might be done. In this paper, we investigate the whitening capabilities of two parametric techniques: a direct one based on the Levinson solution of the Yule-Walker equations, called AR Yule-Walker, and an indirect one based on the least-squares solution of the forward-backward linear prediction (FBLP) equations, called AR-FBLP. The whitening effect of the two algorithms is investigated with real background electroencephalogram (EEG) colored noise and compared in the time and frequency domains.
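
    The direct (AR Yule-Walker) route can be sketched with NumPy and SciPy: estimate the autocorrelations, solve the Toeplitz Yule-Walker system with SciPy's Levinson-recursion solver, and apply the resulting prediction-error filter to whiten the record. The model order is an illustrative assumption.

      import numpy as np
      from scipy.linalg import solve_toeplitz
      from scipy.signal import lfilter

      def ar_whiten(x, order=8):
          """Fit an AR(order) model via the Yule-Walker equations and
          return the prediction residual, i.e. the whitened signal."""
          x = np.asarray(x, dtype=float)
          x = x - x.mean()
          # Biased autocorrelation estimates r[0..order].
          r = np.array([np.dot(x[:len(x) - k], x[k:])
                        for k in range(order + 1)]) / len(x)
          # Solve R a = r[1:] (R is Toeplitz; Levinson under the hood).
          a = solve_toeplitz((r[:order], r[:order]), r[1:])
          # Prediction-error filter: e[n] = x[n] - sum_k a[k] x[n-k].
          return lfilter(np.concatenate(([1.0], -a)), [1.0], x)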

  6. Multicore and GPU algorithms for Nussinov RNA folding

    Science.gov (United States)

    2014-01-01

    Background: One segment of an RNA sequence might pair with another segment of the same sequence through hydrogen bonding. This two-dimensional structure is called the RNA sequence's secondary structure, and the algorithms proposed to predict it are referred to as RNA folding algorithms. Results: We develop cache-efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions: Our cache-efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive, straightforward single-core code. The multicore version of the cache-efficient algorithm provides a speedup, relative to the naive single-core algorithm, between 7.5 and 14.0 on a 6-core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single-core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
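
    Nussinov's dynamic program itself is short; a plain single-core Python version (the baseline the paper accelerates, not the paper's optimized code) might look as follows, with the allowed pair set and minimum loop length as conventional assumptions.

      def nussinov(seq, min_loop=3):
          """O(n^3) Nussinov DP: N[i][j] holds the maximum number of
          complementary base pairs achievable within seq[i..j]."""
          pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                   ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(seq)
          N = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):
              for i in range(n - span):
                  j = i + span
                  N[i][j] = max(
                      N[i + 1][j],                    # i unpaired
                      N[i][j - 1],                    # j unpaired
                      N[i + 1][j - 1] + ((seq[i], seq[j]) in pairs),
                      max((N[i][k] + N[k + 1][j]      # bifurcation
                           for k in range(i + 1, j)), default=0))
          return N[0][n - 1]

      print(nussinov("GGGAAAUCC"))  # 3 nested pairs for this hairpin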

  7. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new algorithm for the fast detection and localization of moving objects. A pixel and its neighbors are taken as an image vector to represent that pixel, and each YUV chrominance component is modeled as a mixture of Gaussians, with a separate mixture model set up for each component. In order to make full use of the spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background.
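
    The paper's per-YUV-channel mixture is not available off the shelf, but the same family of per-pixel Gaussian-mixture background models ships with OpenCV. The sketch below uses the stock MOG2 subtractor on a hypothetical video file as an analogy to the approach, not as the authors' implementation; all parameter values are OpenCV defaults.

      import cv2

      # MOG2 models each pixel as a mixture of Gaussians and flags
      # pixels that fit none of the background components as foreground.
      subtractor = cv2.createBackgroundSubtractorMOG2(
          history=500, varThreshold=16, detectShadows=True)

      cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input file
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow
          cv2.imshow("foreground", mask)
          if cv2.waitKey(1) == 27:        # Esc to quit
              break
      cap.release()
      cv2.destroyAllWindows()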

  8. Spatio-temporal Background Models for Outdoor Surveillance

    Directory of Open Access Journals (Sweden)

    Pless Robert

    2005-01-01

    Video surveillance in outdoor areas is hampered by consistent background motion, which defeats systems that use motion to identify intruders. While algorithms exist for masking out regions with motion, a better approach is to develop a statistical model of the typical dynamic video appearance. This allows the detection of potential intruders even in front of trees and grass waving in the wind, waves across a lake, or cars moving past. In this paper we present a general framework for the identification of anomalies in video, together with a comparison of statistical models that characterize the local video dynamics at each pixel neighborhood. A real-time implementation of these algorithms runs on an 800 MHz laptop, and we present qualitative results in many application domains.

  9. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation, and the main algorithms are described in tables detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material in this edition includes analytical and simulation examples in Chapters 4, 5, 6 and 10; Appendix E, which summarizes the analysis of the set-membership algorithm; and updated problems and references. Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, and IIR adaptive filtering, and more. Several problems are...

  10. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  11. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    Energy Technology Data Exchange (ETDEWEB)

    Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
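
    The MLEM recursion at the core of the reconstruction is compact. Below is a dense-matrix sketch of the standard multiplicative update for y ~ Poisson(Ax); the paper's airborne forward model and segmentation constraints are not reproduced, and the iteration count is an illustrative assumption.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """MLEM for y ~ Poisson(A x): multiplicative updates that
          keep the activity estimate non-negative. A maps activity
          pixels to expected detector counts."""
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])  # sensitivity image, A^T 1
          for _ in range(n_iter):
              proj = A @ x                         # forward project
              ratio = y / np.maximum(proj, 1e-12)  # measured / expected
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x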

  12. Comparison of spatial models for foreground-background segmentation in underwater videos

    OpenAIRE

    Radolko, Martin

    2015-01-01

    The low-level task of foreground-background segregation is an important foundation for many high-level computer vision tasks and has been intensively researched in the past. Nonetheless, unregulated environments usually pose challenging problems, especially the difficult and often neglected underwater environment, where, among other things, the edges are blurred, the contrast is impaired and the colors are attenuated. Our approach to this problem uses an efficient background subtraction algorithm and...

  13. Study of robot landmark recognition with complex background

    Science.gov (United States)

    Huang, Yuqing; Yang, Jia

    2007-12-01

    Perceiving and recognizing environmental features is of great importance for assisting a robot in path planning, position navigation, and task performance. To solve the problem of monocular-vision landmark recognition for a mobile intelligent robot moving against a complex background, a nested region-growing algorithm is proposed that fuses prior color information and grows from the current maximum convergence center, achieving localization invariant to changes in position, scale, rotation, jitter, and weather conditions. Firstly, a novel experimental threshold based on the RGB vision model is used for a first image segmentation, in which some objects and partial scenes with colors similar to the landmarks are detected together with the landmarks. Secondly, with the current maximum convergence center of the segmented image as each growing seed point, the region-growing algorithm establishes several regions of interest (ROIs) in order. According to shape characteristics, a quick and effective primitive-based contour analysis decides whether the current ROI is kept or discarded after each region growing, and each ROI is initially judged and positioned. When this position information is fed back to the gray image, the complete landmarks are extracted accurately by a second segmentation restricted to the local landmark area. Finally, landmarks are recognized by a Hopfield neural network. Results from experiments on a great number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.

  14. An ultra-low-background detector for axion searches

    International Nuclear Information System (INIS)

    Aune, S; Ferrer Ribas, E; Giomataris, I; Mols, J P; Papaevangelou, T; Dafni, T; Lacarra, J Galan; Iguaz, F J; Irastorza, I G; Morales, J; Ruz, J; Tomas, A; Fanourakis, G; Geralis, T; Kousouris, K; Vafeiadis, T

    2009-01-01

    A low background Micromegas detector has been operating in the CAST experiment at CERN for the search of solar axions since the start of data taking in 2002. The detector, made out of low-radioactivity materials, operated efficiently and achieved a very low level of background (5×10⁻⁵ keV⁻¹ cm⁻² s⁻¹) without any shielding. New manufacturing techniques (Bulk/Microbulk) have led to further improvement of the characteristics of the detector such as uniformity, stability and energy resolution. These characteristics, the implementation of passive shielding and the improvement of the analysis algorithms have dramatically reduced the background level (2×10⁻⁷ keV⁻¹ cm⁻² s⁻¹), thus improving the overall sensitivity of the experiment and opening new possibilities for future searches.

  15. A linear-time algorithm for Euclidean feature transform sets

    NARCIS (Netherlands)

    Hesselink, Wim H.

    2007-01-01

    The Euclidean distance transform of a binary image is the function that assigns to every pixel the Euclidean distance to the background. The Euclidean feature transform is the function that assigns to every pixel the set of background pixels at this distance. We present an algorithm to compute the Euclidean feature transform in linear time.
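
    The paper's linear-time algorithm is not reproduced here, but SciPy exposes the same pair of outputs: the Euclidean distance transform plus, for each pixel, the coordinates of one nearest background pixel (a representative of the feature transform set).

      import numpy as np
      from scipy import ndimage

      image = np.array([[0, 0, 0, 0],
                        [0, 1, 1, 0],
                        [0, 1, 1, 0],
                        [0, 0, 0, 0]])  # 1 = object, 0 = background

      # distances: Euclidean distance to the nearest background pixel.
      # nearest: for every pixel, row/column indices of one closest
      # background pixel.
      distances, nearest = ndimage.distance_transform_edt(
          image, return_indices=True)
      print(distances)
      print(nearest.shape)  # (2, 4, 4): row-index and column-index maps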

  16. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

    To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the noise standard deviation of the background gray level of a star map, are first obtained dynamically, after the contribution of the star images themselves to the background is removed in advance. A criterion is established for whether subsequent noise filtering is needed, and the extraction threshold is then assigned according to the level of background noise, so that the centroiding accuracy is guaranteed. In the processing pipeline, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record the connected-domain labels, which solves the problems of wasted resources and connected-domain overflow. The simulation results show that the necessary data of the selected bright stars can be accessed within a delay as short as 10 µs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the required memory and register resources total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise were added to the processed ideal star map; the statistical centroiding error is smaller than 1/23 pixel when the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of a star tracker.
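
    A rough software analogue of the background-statistics and thresholding step is sketched below; the sigma clipping used to suppress the stars' own contribution and the constants are assumptions, not the paper's hardware scheme.

        import numpy as np

        def extraction_threshold(star_map, k=5.0, clip=3.0, iters=3):
            """Estimate background mean/std while suppressing the influence
            of star pixels (sigma clipping stands in for the paper's removal
            of the star images' contribution), then return the extraction
            threshold mean + k * std."""
            pixels = star_map.astype(float).ravel()
            for _ in range(iters):                 # sigma-clip away star pixels
                mu, sigma = pixels.mean(), pixels.std()
                pixels = pixels[np.abs(pixels - mu) < clip * sigma]
            return pixels.mean() + k * pixels.std()

        # Usage: bright = star_map > extraction_threshold(star_map)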

  17. Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background

    Science.gov (United States)

    Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.

    2006-01-01

    A viewgraph presentation is given, reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of the application to WMAP first-year data, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.
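
    As a toy illustration of the Gibbs sampling machinery mentioned above (a bivariate Gaussian, far simpler than the CMB application; all names here are invented for demonstration):

        import numpy as np

        def gibbs_bivariate_normal(n_samples, rho=0.8, seed=0):
            """Toy Gibbs sampler for a zero-mean bivariate Gaussian with
            correlation rho: alternately draw x | y and y | x from their
            exact conditional distributions."""
            rng = np.random.default_rng(seed)
            x = y = 0.0
            cond_sd = np.sqrt(1.0 - rho**2)
            samples = np.empty((n_samples, 2))
            for i in range(n_samples):
                x = rng.normal(rho * y, cond_sd)   # x | y ~ N(rho*y, 1-rho^2)
                y = rng.normal(rho * x, cond_sd)   # y | x ~ N(rho*x, 1-rho^2)
                samples[i] = x, y
            return samples

        print(np.corrcoef(gibbs_bivariate_normal(20000).T)[0, 1])  # ~0.8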

  18. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many previous qualitative evaluations of digital breast tomosynthesis used a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered back-projection (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of the anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded a much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low…
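
    The contrast-to-noise ratio used above has several common definitions; one simple mask-based variant (the region masks are user-supplied assumptions) can be computed as:

        import numpy as np

        def cnr(image, signal_mask, background_mask):
            """One common contrast-to-noise ratio definition:
            |mean(signal) - mean(background)| / std(background)."""
            sig = image[signal_mask].mean()
            bg = image[background_mask].mean()
            return abs(sig - bg) / image[background_mask].std()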

  19. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
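
    A toy version of the rank-test idea follows, with SciPy's mannwhitneyu standing in for BASICA's fast implementation; the significance threshold and sampling scheme are assumptions.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def spot_is_expressed(spot_pixels, background_pixels, alpha=0.01):
            """Rank-test flavour of spot segmentation: declare foreground
            only if the candidate spot pixels are stochastically greater
            than the local background sample (one-sided Mann-Whitney U)."""
            stat, p = mannwhitneyu(spot_pixels, background_pixels,
                                   alternative="greater")
            return p < alpha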

  1. Weighted Low-Rank Approximation of Matrices and Background Modeling

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2018-01-01

    We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in batch mode on the entire data, and the other operates in batch-incremental mode on the data, naturally capturing more background variations while being computationally more efficient. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, the model can be made robust to outliers, similar to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.
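
    A minimal sketch of one standard way to compute such a weighted low-rank approximation is given below (EM-style imputation plus a truncated SVD); this is a generic method, not necessarily the authors' batch or batch-incremental algorithms.

        import numpy as np

        def weighted_low_rank(X, W, rank, n_iter=50):
            """EM-style weighted low-rank approximation of min ||W * (X - L)||_F
            with rank(L) <= rank: repeatedly impute the down-weighted entries
            from the current low-rank estimate, then re-truncate the SVD.
            W has entries in [0, 1]; W = 1 recovers the plain truncated SVD."""
            L = np.zeros_like(X, dtype=float)
            for _ in range(n_iter):
                filled = W * X + (1.0 - W) * L        # impute low-weight entries
                U, s, Vt = np.linalg.svd(filled, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            return L

        # In background modeling, L approximates the static background of a
        # frames-as-columns matrix X, and X - L carries the moving foreground.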

  2. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created, including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified, and a set of post-processing scripts was used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled-energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties were derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  3. A method of background noise cancellation for SQUID applications

    International Nuclear Information System (INIS)

    He, D F; Yoshizawa, M

    2003-01-01

    When superconducting quantum interference devices (SQUIDs) operate in low-cost shielding or unshielded environments, the environmental background noise should be reduced to increase the signal-to-noise ratio. In this paper we present a background noise cancellation method based on a spectral subtraction algorithm. We first measure the background noise and estimate the noise spectrum using the fast Fourier transform (FFT); we then subtract the spectrum of the background noise from that of the observed noisy signal, and the signal can be reconstructed by an inverse FFT of the subtracted spectrum. With this method, the background noise, especially stationary interference, can be suppressed well and the signal-to-noise ratio can be increased. Using a high-Tc radio-frequency SQUID gradiometer and magnetometer, we measured the magnetic field produced by a watch placed 35 cm under the SQUID. After noise cancellation, the signal-to-noise ratio could be greatly increased. We also used this method to eliminate the vibration noise of a cryocooler SQUID.
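
    A minimal sketch of the spectral subtraction described here, assuming equal-length records of the noisy signal and a separate background-noise measurement; the half-wave rectification floor is a common practical addition, not necessarily the authors'.

        import numpy as np

        def spectral_subtraction(noisy, noise, floor=0.0):
            """Basic spectral subtraction: subtract the magnitude spectrum of
            a separately measured background-noise record from the noisy
            signal's magnitude spectrum, keep the noisy phase, and invert
            the FFT. `noisy` and `noise` must have the same length."""
            spec = np.fft.rfft(noisy)
            mag = np.abs(spec) - np.abs(np.fft.rfft(noise))
            mag = np.maximum(mag, floor)          # rectify negative magnitudes
            return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(noisy))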

  4. Accuracy of a Diagnostic Algorithm to Diagnose Breakthrough Cancer Pain as Compared With Clinical Assessment.

    Science.gov (United States)

    Webber, Katherine; Davies, Andrew N; Cowie, Martin R

    2015-10-01

    Breakthrough cancer pain (BTCP) is a heterogeneous condition, and there are no internationally agreed standardized criteria to diagnose it. There are published algorithms to assist with diagnosis, but these differ in content. There are no comparative data to support use. To compare the diagnostic ability of a simple algorithm against a comprehensive clinical assessment to diagnose BTCP and to assess if verbal rating descriptors can adequately discriminate controlled background pain. Patients with cancer pain completed a three-step algorithm with a researcher to determine if they had controlled background pain and BTCP. This was followed by a detailed pain consultation with a clinical specialist who was blinded to the algorithm results and determined an independent pain diagnosis. The sensitivity, specificity, and positive and negative predictive values were calculated for the condition of BTCP. Further analysis determined which verbal pain severity descriptors corresponded with the condition of controlled background pain. The algorithm had a sensitivity of 0.54 and a specificity of 0.76 in the identification of BTCP. The positive predictive value was 0.7, and the negative predictive value was 0.62. The sensitivity of a background pain severity rating of mild or less to accurately categorize controlled background pain was 0.69 compared with 0.97 for severity of moderate or less; however, this was balanced by a higher specificity rating for mild or less, 0.78 compared with 0.2. The diagnostic breakthrough pain algorithm had a good positive predictive value but limited sensitivity using a cutoff score of "mild" to define controlled background pain. When the cutoff level was changed to moderate, the sensitivity increased, but specificity reduced. A comprehensive clinical assessment remains the preferred method to diagnose BTCP. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
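
    For reference, the reported quantities follow directly from the 2x2 table of algorithm result versus clinical reference diagnosis; this generic helper is for illustration and is not from the paper.

        def diagnostic_metrics(tp, fp, tn, fn):
            """Sensitivity, specificity and predictive values from a 2x2
            table of algorithm results versus the reference diagnosis."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }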

  5. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system on electromyography (EMG) signals is a theoretical possibility. Objective...... on the amplitude of the signal. The other algorithm was based on information of the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizuredetection system. However, different patients might require different types of algorithms /approaches....

  6. Algorithm for lamotrigine dose adjustment before, during, and after pregnancy

    DEFF Research Database (Denmark)

    Sabers, A

    2012-01-01

    Sabers A. Algorithm for lamotrigine dose adjustment before, during, and after pregnancy. Acta Neurol Scand: DOI: 10.1111/j.1600-0404.2011.01627.x. © 2011 John Wiley & Sons A/S. Background - Treatment with lamotrigine (LTG) during pregnancy is associated with a pronounced risk of seizure deterioration…

  7. The influence of the marine aerobic Pseudomonas strain on the corrosion of 70/30 Cu-Ni alloy

    International Nuclear Information System (INIS)

    Yuan, S.J.; Choong, Amy M.F.; Pehkonen, S.O.

    2007-01-01

    A comparative study of the corrosion behavior of the 70/30 Cu-Ni alloy in a simulated seawater-based nutrient-rich medium, in the presence and absence of a marine aerobic Pseudomonas bacterium, was carried out by electrochemical experiments, microscopic methods and X-ray photoelectron spectroscopy (XPS). The results of Tafel plot measurements showed a noticeable increase in the corrosion rate of the alloy in the presence of the Pseudomonas bacteria as compared to the corresponding control samples. The EIS data demonstrated that the charge transfer resistance, Rct, and the resistance of the oxide film, Rf, gradually increased with time in the abiotic medium, whereas both dramatically decreased with time in the biotic medium inoculated with the Pseudomonas, indicative of accelerated corrosion of the alloy. The bacterial cells preferentially attached themselves to the alloy surface to form patchy or blotchy biofilms, as observed by fluorescence microscopy (FM). Scanning electron microscopy (SEM) images revealed the occurrence of micro-pitting corrosion underneath the biofilms on the alloy surface after biofilm removal. XPS studies tracked the evolution of the passive film on the alloy surface with time in the presence and absence of the Pseudomonas bacteria under the experimental conditions, and further revealed that the presence of the Pseudomonas cells and their extra-cellular polymers (EPS) on the alloy surface retarded the formation process or impaired the protective nature of the oxide film. Furthermore, the XPS results verified the difference in the chelating functional groups between the conditioning layers and the bacterial cells and EPS in the biofilms, which is believed to be connected with the loss of passivity of the protective oxide film.

  8. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two-year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first-year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  9. An ultra-low-background detector for axion searches

    Energy Technology Data Exchange (ETDEWEB)

    Aune, S; Ferrer Ribas, E; Giomataris, I; Mols, J P; Papaevangelou, T [IRFU, Centre d' Etudes de Saclay, Gif sur Yvette CEDEX (France); Dafni, T; Lacarra, J Galan; Iguaz, F J; Irastorza, I G; Morales, J; Ruz, J; Tomas, A [Instituto de Fisica Nuclear y Altas EnergIas, Zaragoza (Spain); Fanourakis, G; Geralis, T; Kousouris, K [Institute of Nuclear Physics, NCSR Demokritos, Athens (Greece); Vafeiadis, T, E-mail: Thomas.Papaevangelou@cern.c [Physics Department, Aristotle University, Thessaloniki (Greece)

    2009-07-01

    A low background Micromegas detector has been operating in the CAST experiment at CERN for the search for solar axions since the start of data taking in 2002. The detector, made out of low radioactivity materials, operated efficiently and achieved a very low level of background (5×10⁻⁵ keV⁻¹ cm⁻² s⁻¹) without any shielding. New manufacturing techniques (Bulk/Microbulk) have led to further improvement of the characteristics of the detector, such as uniformity, stability and energy resolution. These characteristics, the implementation of passive shielding and the improvement of the analysis algorithms have dramatically reduced the background level (2×10⁻⁷ keV⁻¹ cm⁻² s⁻¹), thereby improving the overall sensitivity of the experiment and opening new possibilities for future searches.

  10. NLSE: Parameter-Based Inversion Algorithm

    Science.gov (United States)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
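
    As a generic illustration of the Gauss-Newton iteration that NLSE builds on (not the NLSE code itself; the exponential-fit example is invented for demonstration):

        import numpy as np

        def gauss_newton(residual, jacobian, p0, n_iter=20, tol=1e-10):
            """Generic Gauss-Newton iteration for nonlinear least squares:
            p <- p - (J^T J)^{-1} J^T r, solved as a linear least-squares step."""
            p = np.asarray(p0, dtype=float)
            for _ in range(n_iter):
                r = residual(p)
                J = jacobian(p)
                step, *_ = np.linalg.lstsq(J, r, rcond=None)
                p = p - step
                if np.linalg.norm(step) < tol:
                    break
            return p

        # Example: fit y = a * exp(b * x) to synthetic data (xs, ys).
        xs = np.linspace(0, 1, 20)
        ys = 2.0 * np.exp(1.5 * xs)
        res = lambda p: p[0] * np.exp(p[1] * xs) - ys
        jac = lambda p: np.column_stack([np.exp(p[1] * xs),
                                         p[0] * xs * np.exp(p[1] * xs)])
        print(gauss_newton(res, jac, [1.0, 1.0]))    # ~[2.0, 1.5]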

  11. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications. Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh…

  12. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model an infrared image background, its Gaussian kernel lacks directional filtering ability, so edge information is poorly preserved after filtering; the difference image then contains many singular edge points, which increases the difficulty of target detection. To solve these problems, anisotropy is introduced in this paper, and an anisotropic Gaussian filter replaces the Gaussian kernel in the SUSAN filter operator. Firstly, an anisotropic gradient operator computes the horizontal and vertical gradients at each pixel to determine the long-axis direction of the filter. Secondly, the smoothness of the pixel's local neighborhood is used to compute the filter's long- and short-axis variances. Next, the first-order norm of the difference between the local gray levels and their mean determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and, at the same time, the difference between the background image and the original image is obtained. The experimental results, evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM) and local Signal-to-Noise Ratio Gain (GSNR), show that compared with the traditional filtering algorithm, the improved SUSAN filter achieves better background modeling: edge information in the image is effectively preserved, dim small targets are effectively enhanced in the difference image, and the false alarm rate is greatly reduced.

  13. Efficient motif finding algorithms for large-alphabet inputs

    Directory of Open Access Journals (Sweden)

    Pavlovic Vladimir

    2010-10-01

    Background: We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results: The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA) we observed reductions in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in Lipocalin, Zinc metallopeptidase, and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions: Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running-time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.

  14. On-orbit real-time robust cooperative target identification in complex background

    Directory of Open Access Journals (Sweden)

    Wen Zhuoman

    2015-10-01

    Cooperative target identification is the prerequisite for relative position and orientation measurement between a space robot arm and the object to be arrested. We propose an on-orbit real-time robust algorithm for cooperative target identification against complex backgrounds, using the features of a circle and lines. It first extracts only the edges of interest in the target image using an adaptive threshold and refines them to approximately single-pixel width with improved non-maximum suppression. Using a novel tracking approach, edge segments that change smoothly in the tangential direction are obtained; with a small amount of computation, large numbers of invalid edges are removed. From the few remaining edges, valid circular arcs are extracted and reassembled into circles according to a reliable criterion. Finally, the target is identified if a certain number of straight lines are found whose positions relative to the circle match the known target pattern. Experiments demonstrate that the proposed algorithm accurately identifies the cooperative target within the range of 0.3–1.5 m against complex backgrounds at a speed of 8 frames per second, regardless of lighting conditions and target attitude. Its robustness and small memory requirement make the proposed algorithm well suited to real-time visual measurement for a space robot arm.

  15. Surface functionalization of Cu-Ni alloys via grafting of a bactericidal polymer for inhibiting biocorrosion by Desulfovibrio desulfuricans in anaerobic seawater.

    Science.gov (United States)

    Yuan, S J; Liu, C K; Pehkonen, S O; Bai, R B; Neoh, K G; Ting, Y P; Kang, E T

    2009-01-01

    A novel surface modification technique was developed to provide a copper-nickel alloy (M) surface with bactericidal and anticorrosion properties for inhibiting biocorrosion. 4-(chloromethyl)phenyl trichlorosilane (CTS) was first coupled to the hydroxylated alloy surface to form a compact silane layer, as well as to furnish the surface with chloromethyl functional groups. The latter allowed the coupling of 4-vinylpyridine (4VP) to generate the M-CTS-4VP surface with biocidal functionality. Subsequent surface graft polymerization of 4VP, in the presence of a benzoyl peroxide (BPO) initiator, from the M-CTS-4VP surface produced the poly(4-vinylpyridine) (P(4VP))-grafted surface, or the M-CTS-P(4VP) surface. The pyridine nitrogen moieties on the M-CTS-P(4VP) surface were quaternized with hexyl bromide to produce a high concentration of quaternary ammonium groups. Each surface functionalization step was ascertained by X-ray photoelectron spectroscopy (XPS) and static water contact angle measurements. The alloy with surface-quaternized pyridinium cation groups (N+) exhibited good bactericidal efficiency in a Desulfovibrio desulfuricans-inoculated seawater-based modified Baar's medium, as indicated by viable cell counts and fluorescence microscopy (FM) images of the surface. The anticorrosion capability of the organic layers was verified by polarization curve and electrochemical impedance spectroscopy (EIS) measurements. In comparison, the pristine (surface-hydroxylated) Cu-Ni alloy was found to be readily susceptible to biocorrosion in the same environment.

  16. An AK-LDMeans algorithm based on image clustering

    Science.gov (United States)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for handling unlabeled data in value mining; its ultimate goal is to label unclassified data quickly and correctly. We use road maps from current image processing work as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the value of K by analysing the K-cost curve, and then selects the cluster centers with a long-distance, high-density method, replacing the traditional initial cluster-center selection; this further improves the efficiency and accuracy of the traditional K-means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision and data mining.
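
    The K-cost curve idea resembles the common elbow heuristic; a generic scikit-learn sketch follows, where the chord-distance knee rule is an assumption, not the authors' exact criterion.

        import numpy as np
        from sklearn.cluster import KMeans

        def elbow_k(X, k_max=10):
            """Pick K at the 'knee' of the within-cluster cost curve:
            normalise the curve, then take the K whose point lies farthest
            from the chord joining the first and last points."""
            ks = np.arange(1, k_max + 1)
            cost = np.array([KMeans(n_clusters=k, n_init=10).fit(X).inertia_
                             for k in ks])
            x = (ks - ks[0]) / (ks[-1] - ks[0])          # normalise to [0, 1]
            y = (cost - cost.min()) / (cost.max() - cost.min())
            # perpendicular distance of each point from the endpoint chord
            d = np.abs((x[-1] - x[0]) * (y[0] - y) - (x[0] - x) * (y[-1] - y[0])) \
                / np.hypot(x[-1] - x[0], y[-1] - y[0])
            return ks[np.argmax(d)]

        # Usage: k = elbow_k(X) for an (n_samples, n_features) array X.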

  17. [A new peak detection algorithm of Raman spectra].

    Science.gov (United States)

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors propose a new Raman peak recognition method named the bi-scale correlation algorithm. The algorithm uses a combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB, and then tested the algorithm on real Raman spectra. The results show that the average time to identify a Raman spectrum is 0.51 s with the proposed algorithm, versus 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature excellent signal-to-noise ratios), the recognition accuracy of the algorithm is higher than 99%, versus less than 84% for the continuous wavelet transform method. The mean and standard deviation of the peak-position identification error of the algorithm are both smaller than those of the continuous wavelet transform method. Simulation analysis and experimental verification show that the new algorithm possesses the following advantages: no need for human intervention, no need for de-noising or background-removal operations, higher recognition speed and higher recognition accuracy. The proposed algorithm is operable in Raman peak identification.
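
    For flavour, a generic sketch of SNR-gated peak picking with SciPy is shown below; this is not the bi-scale correlation algorithm itself, and the MAD noise estimate and window size are assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        def detect_peaks(spectrum, snr_min=6.0, window=50):
            """Generic peak picking: find local maxima, then keep only
            peaks whose height above the local median exceeds snr_min
            times the local noise level (median absolute deviation)."""
            peaks, _ = find_peaks(spectrum)
            kept = []
            for p in peaks:
                lo, hi = max(0, p - window), min(len(spectrum), p + window)
                local = spectrum[lo:hi]
                noise = 1.4826 * np.median(np.abs(local - np.median(local)))
                if noise > 0 and spectrum[p] - np.median(local) > snr_min * noise:
                    kept.append(p)
            return np.array(kept)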

  18. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
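
    A toy version of such an EM iteration follows, under simplifying assumptions: a single amplitude, a known shape, no additive background term (unlike the paper) and hard saturation at count c; all names are invented.

        import numpy as np
        from scipy.stats import poisson

        def em_amplitude(y, shape, c, n_iter=100):
            """Toy EM for the amplitude `a` of a Poisson intensity a*shape
            when counts are right-censored at c (y[i] == c means y[i] >= c).
            E-step: impute censored bins with E[Y | Y >= c] for Poisson(lam),
            which has the closed form lam * P(Y >= c-1) / P(Y >= c).
            M-step: the usual ratio of totals."""
            censored = y >= c
            a = y.sum() / shape.sum()              # initial guess
            for _ in range(n_iter):
                lam = a * shape
                y_full = y.astype(float).copy()
                # P(Y >= k) = poisson.sf(k - 1, lam)
                num = poisson.sf(c - 2, lam[censored])
                den = poisson.sf(c - 1, lam[censored])
                y_full[censored] = lam[censored] * num / den
                a = y_full.sum() / shape.sum()
            return a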

  19. Optimization of the Regularization in Background and Foreground Modeling

    Directory of Open Access Journals (Sweden)

    Si-Qi Wang

    2014-01-01

    Background and foreground modeling is a typical method in the application of computer vision. The current general “low-rank + sparse” model decomposes frames from video sequences into a low-rank background and a sparse foreground. But the sparsity assumption in such a model may not conform to reality, and the model cannot directly reflect the correlation between the background and foreground either. Thus, we present a novel model that decomposes the arranged data matrix D into a low-rank background L and a moving foreground M. Here, we only need the a priori assumption that the background is low-rank, and we let the foreground be separated from the background as much as possible. Based on this decomposition, we use a pair of dual norms, the nuclear norm and the spectral norm, to regularize the foreground and background, respectively. Furthermore, we use a reweighted function instead of the plain norm so as to obtain a better and faster approximation model. A detailed linear-algebra treatment of our two models is presented in this paper. From the experimental results, we observe that our model achieves better background modeling, and even simplified versions of our algorithms perform better than the mainstream techniques IALM and GoDec.

  20. Spatial dual-orthogonal (SDO) phase-shifting algorithm by pre-recomposing the interference fringe.

    Science.gov (United States)

    Wang, Yi; Li, Bingbo; Zhong, Liyun; Tian, Jindong; Lu, Xiaoxu

    2017-07-24

    When the phase distribution of an interferogram is nonuniform and the background/modulation amplitude changes rapidly, the current self-calibration algorithms with otherwise good performance, such as principal components analysis (PCA) and the advanced iterative algorithm (AIA), cannot work well. In this study, starting from three or more phase-shifting interferograms with unknown phase shifts, we propose a high-accuracy spatial dual-orthogonal (SDO) phase-shifting algorithm that uses the spatial orthogonality of interference fringes: a new sequence of fringe patterns with uniform phase distribution is constructed by pre-recomposing the original interferograms to determine their corresponding optimum combination coefficients, which are directly related to the phase shifts. Both simulation and experimental results show that, using the proposed SDO algorithm, we can retrieve an accurate phase from phase-shifting interferograms with nonuniform phase distribution, non-constant background and arbitrary phase shifts. In particular, the accuracy of phase retrieval with the proposed SDO algorithm is insensitive to variations in the fringe pattern, which guarantees high-accuracy phase measurement and application.

  1. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing machine learning frameworks are limited by the machine learning algorithms themselves, for example by the choice of parameters, interference from noise, and a high barrier to use. This paper introduces the research background of machine learning frameworks and, drawing on the random forest algorithm commonly used for classification in machine learning, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.
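
    As a generic illustration only, the sketch below uses scikit-learn's RandomForestClassifier with a parameter grid search standing in for the adaptive parameter choice; the ARF algorithm itself is not described in this record, and the data here are synthetic.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GridSearchCV

        # Let a grid search pick the forest's key parameters instead of
        # fixing them by hand (a stand-in for adaptive parameter selection).
        X, y = np.random.rand(200, 5), np.random.randint(0, 2, 200)
        search = GridSearchCV(
            RandomForestClassifier(random_state=0),
            param_grid={"n_estimators": [50, 100, 200],
                        "max_features": ["sqrt", "log2"]},
            cv=5)
        search.fit(X, y)
        print(search.best_params_, search.best_score_)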

  2. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in ²³⁵U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the ²³⁵U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the ²³⁵U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.

  3. A synergistic effort among geoscience, physics, computer science and mathematics at Hunter College of CUNY as a Catalyst for educating Earth scientists.

    Science.gov (United States)

    Salmun, H.; Buonaiuto, F. S.

    2016-12-01

    The Catalyst Scholarship Program at Hunter College of The City University of New York (CUNY) was established with a four-year award from the National Science Foundation (NSF) to fund scholarships for academically talented but financially disadvantaged students majoring in four disciplines of science, technology, engineering and mathematics (STEM). Led by Earth scientists, the Program awarded scholarships to students in their junior or senior years majoring in computer science, geosciences, mathematics and physics to create two cohorts of students that spent a total of four semesters in an interdisciplinary community. The program included mentoring of undergraduate students by faculty and graduate students (peer mentoring), a sequence of three semesters of a one-credit seminar course, and opportunities to engage in research activities, research seminars and other enriching academic experiences. Faculty and peer mentoring were integrated into all parts of the scholarship activities. The one-credit seminar course, although designed to expose scholars to the diversity of STEM disciplines and to highlight research options and careers in these disciplines, was thematically focused on geoscience, specifically on ocean and atmospheric science. The program resulted in increased retention rates relative to institutional averages. In this presentation we will discuss the process of establishing the program, from the original plans to its implementation, as well as the impact of this multidisciplinary approach to geoscience education at our institution and beyond. An overview of accomplishments, lessons learned and potential best practices will be presented.

  4. OccuPeak: ChIP-Seq peak calling based on internal background modelling

    NARCIS (Netherlands)

    de Boer, Bouke A.; van Duijvenboden, Karel; van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads in order to account for local sequencing and GC bias. However, the…

  5. Density-Based Clustering with Geographical Background Constraints Using a Semantic Expression Model

    Directory of Open Access Journals (Sweden)

    Qingyun Du

    2016-05-01

    A semantics-based method for density-based clustering with constraints imposed by geographical background knowledge is proposed. In this paper, we apply an ontological approach to the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm in the form of knowledge representation for constrained clustering. When used in the process of clustering geographic information, semantic reasoning based on a defined ontology and its relationships is primarily intended to overcome the lack of knowledge about the relevant geospatial data. Better constraints from geographical background knowledge yield more reasonable clustering results. This article uses an ontology to describe four types of semantic constraints for geographical backgrounds: “No Constraints”, “Constraints”, “Cannot-Link Constraints”, and “Must-Link Constraints”. This paper also reports the implementation of a prototype clustering program. Based on the proposed approach, DBSCAN can be applied with both obstacle and non-obstacle constraints as a semi-supervised clustering algorithm, and the clustering results are displayed on a digital map.
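
    A minimal scikit-learn illustration of the unconstrained DBSCAN core is given below; the semantic constraints of the paper would be layered on top of this, and the data are synthetic.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Toy geospatial points: two dense clusters plus scattered noise.
        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
                         rng.normal([2, 2], 0.1, (50, 2)),
                         rng.uniform(-1, 3, (20, 2))])
        labels = DBSCAN(eps=0.25, min_samples=5).fit_predict(pts)
        print(set(labels))   # cluster ids; -1 marks noise points
        # Cannot-link / must-link constraints would then be enforced on top,
        # e.g. by splitting or merging the returned clusters.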

  6. Leakage Detection and Estimation Algorithm for Loss Reduction in Water Piping Networks

    Directory of Open Access Journals (Sweden)

    Kazeem B. Adedeji

    2017-10-01

    Water loss through leaking pipes constitutes a major challenge to the operational service of water utilities. In recent years, increasing concern about the financial loss and environmental pollution caused by leaking pipes has been driving the development of efficient algorithms for detecting leakage in water piping networks. Water distribution networks (WDNs) are dispersed in nature, with numerous nodes and branches. Consequently, identifying the segment(s) of the network, and the exact leaking pipelines connected to these segment(s), where higher background leakage outflow occurs is a challenging task. Background leakage concerns the outflow from small cracks or deteriorated joints. In addition, because these flows are diffuse, they are not characterised by a quick pressure drop and are not detectable by measuring instruments. Consequently, they go unreported for long periods of time, posing a threat to the water loss volume. Most of the existing research focuses on the detection and localisation of burst-type leakages, which are characterised by a sudden pressure drop. In this work, an algorithm for detecting and estimating background leakage in water distribution networks is presented. The algorithm integrates a leakage model of the kind sketched below into a classical WDN hydraulic model for solving the network leakage flows. The applicability of the developed algorithm is demonstrated on two different water networks. The results for the tested networks are discussed, and the solutions obtained show the benefits of the proposed algorithm. A noteworthy finding is that the algorithm permits the detection of critical segments or pipes of the network experiencing higher leakage outflow and indicates the probable pipes of the network where pressure control can be performed. However, the possible position of pressure control elements along such critical pipes will be addressed in future work.
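
    A hedged sketch of a common pressure-driven background-leakage relation of the type such algorithms embed; the Germanopoulos-style formula and the coefficient values here are assumptions, not taken from the paper.

        import numpy as np

        def background_leakage(pressures, lengths, beta=1e-7, alpha=1.18):
            """Pressure-driven background-leakage model: each pipe leaks
            q = beta * L * p^alpha, with p the mean of the pressures at its
            two end nodes. `pressures` has shape (n_pipes, 2); beta/alpha
            are illustrative values only."""
            p_mean = pressures.mean(axis=1)
            q = beta * lengths * np.maximum(p_mean, 0.0) ** alpha
            return q                          # leakage outflow per pipe

        # Pipes with the largest q flag the critical segments where pressure
        # control is most likely to pay off.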

  7. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, teaching it requires explanations suited to students with different major backgrounds. By supposing sample sequences and using a simple storage system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
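
    For readers without a bioinformatics background, a textbook Needleman-Wunsch scoring pass looks like the following; the simple match/mismatch/gap scores are assumptions, and the paper's simple store system is not reproduced.

        import numpy as np

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            """Textbook Needleman-Wunsch global alignment score via dynamic
            programming over an (len(a)+1) x (len(b)+1) table."""
            n, m = len(a), len(b)
            F = np.zeros((n + 1, m + 1))
            F[:, 0] = gap * np.arange(n + 1)
            F[0, :] = gap * np.arange(m + 1)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    F[i, j] = max(F[i - 1, j - 1] + s,   # (mis)match
                                  F[i - 1, j] + gap,     # gap in b
                                  F[i, j - 1] + gap)     # gap in a
            return F[n, m]

        print(needleman_wunsch("GATTACA", "GCATGCU"))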

  8. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false-alarm-aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is extensible to any pair of detection algorithms that have different false alarm sources.

  9. Improved algorithms for approximate string matching (extended abstract

    Directory of Open Access Journals (Sweden)

    Papamichail Georgios

    2009-01-01

    Background: The problem of approximate string matching is important in many different areas, such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string, and calculation of the longest common subsequence that two strings share. Results: We designed an output-sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|) · min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion: We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings, with both theoretical and practical implications. Source code of our algorithm is available online.
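
    For contrast, the classical quadratic-time dynamic program that such output-sensitive algorithms improve on is sketched below (a textbook version, not the paper's implementation; banded variants evaluate only a diagonal strip of this table).

        def edit_distance(a, b):
            """Classical O(n*m) edit-distance DP with two rolling rows."""
            n, m = len(a), len(b)
            prev = list(range(m + 1))
            for i in range(1, n + 1):
                cur = [i] + [0] * m
                for j in range(1, m + 1):
                    cur[j] = min(prev[j] + 1,                       # deletion
                                 cur[j - 1] + 1,                    # insertion
                                 prev[j - 1] + (a[i - 1] != b[j - 1]))  # subst.
                prev = cur
            return prev[m]

        print(edit_distance("kitten", "sitting"))   # 3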

  10. Adaptation of Rejection Algorithms for a Radar Clutter

    Directory of Open Access Journals (Sweden)

    D. Popov

    2017-09-01

    In this paper, algorithms for adaptive rejection of radar clutter are synthesized for the case of a priori unknown spectral-correlation characteristics under wobbulation of the radar signal repetition period. The synthesis of algorithms for a non-recursive adaptive rejection filter (ARF) of a given order reduces to determining the vector of weighting coefficients that realizes the best effectiveness index for extracting radar signals of moving targets against the background of the received clutter. As the effectiveness criterion, we consider the improvement coefficient of the signal-to-clutter ratio (SCR), averaged over the Doppler phase shift of the signal. Based on the extremal properties of the characteristic numbers (eigenvalues) of matrices, the vector optimal according to the maximum of this criterion is defined as the eigenvector of the clutter correlation matrix corresponding to its minimal eigenvalue. The general form of the vector of optimal ARF weighting coefficients is determined, and specific adaptive algorithms depending on the ARF order are obtained, which in special cases reduce to known algorithms, confirming their validity. A comparative analysis of the synthesized and known algorithms is performed. The offered processing algorithms are shown to yield significant benefits in clutter rejection effectiveness compared to the known processing algorithms.
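
    In practice, the synthesis result translates to a one-liner, sketched generically below; R would be the estimated clutter correlation matrix, and the function name is invented.

        import numpy as np

        def rejection_weights(R):
            """Weight vector of the non-recursive rejection filter: the
            eigenvector of the clutter correlation matrix R belonging to
            its smallest eigenvalue (eigh sorts eigenvalues ascending)."""
            eigvals, eigvecs = np.linalg.eigh(R)
            return eigvecs[:, 0]

        # Usage: y[n] = w @ x[n] with w = rejection_weights(R) suppresses
        # the clutter component of the received sample vector x[n].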

  11. SWCD: a sliding window and self-regulated learning-based background updating method for change detection in videos

    Science.gov (United States)

    Işık, Şahin; Özkan, Kemal; Günal, Serkan; Gerek, Ömer Nezih

    2018-03-01

    Change detection via background subtraction remains an unresolved issue and attracts research interest due to the challenges encountered in static and dynamic scenes. The key challenge is how to update dynamically changing backgrounds from frames with an adaptive and self-regulated feedback mechanism. In order to achieve this, we present an effective change detection algorithm for pixelwise changes. A sliding window approach combined with dynamic control of the update parameters is introduced for updating background frames, which we call sliding-window-based change detection. Comprehensive experiments on related test videos show that the integrated algorithm yields good objective and subjective performance by overcoming illumination variations, camera jitter, and intermittent object motion. We argue that the obtained method is a fair alternative for most types of foreground extraction scenarios, unlike case-specific methods, which normally fail in scenarios they were not designed for.

  12. Polarization of Cosmic Microwave Background

    International Nuclear Information System (INIS)

    Buzzelli, A; Cabella, P; De Gasperis, G; Vittorio, N

    2016-01-01

    In this work we present an extension of the ROMA map-making code for data analysis of Cosmic Microwave Background polarization, with particular attention given to the inflationary polarization B-modes. The new algorithm takes into account a possible cross-correlated noise component among the different detectors of a CMB experiment. We tested the code on the observational data of the BOOMERanG (2003) experiment and show that it provides a better estimate of the power spectra; in particular, the error bars of the BB spectrum are smaller by up to 20% for low multipoles. We point out the general validity of the new method. A possible future application is the LSPE balloon experiment, devoted to the observation of polarization at large angular scales.

  13. Application of Monte Carlo algorithms to the Bayesian analysis of the Cosmic Microwave Background

    Science.gov (United States)

    Jewell, J.; Levin, S.; Anderson, C. H.

    2004-01-01

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; nonhomogeneous, correlated instrumental noise; and foreground emission are problems of central importance for the extraction of cosmological information from the cosmic microwave background (CMB).

  14. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    Science.gov (United States)

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and the uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.

  15. BLINCK: A diagnostic algorithm for skin cancer diagnosis combining clinical features with dermatoscopy findings

    OpenAIRE

    Bourne, Peter; Rosendahl, Cliff; Keir, Jeff; Cameron, Alan

    2012-01-01

    Background: Deciding whether a skin lesion requires biopsy to exclude skin cancer is often challenging for primary care clinicians in Australia. There are several published algorithms designed to assist with the diagnosis of skin cancer but, apart from the clinical ABCD rule, these algorithms only evaluate the dermatoscopic features of a lesion. Objectives: The BLINCK algorithm explores the effect of combining clinical history and examination with fundamental dermatoscopic assessment in primary care…

  16. Research on Statistical Flow of the Complex Background Based on Image Method

    Directory of Open Access Journals (Sweden)

    Yang Huanhai

    2014-06-01

    With the continuing acceleration of urbanization in China, pressure on urban road traffic systems keeps increasing, so the importance of intelligent transportation systems based on computer vision technology is becoming ever more significant. Using image processing technology for vehicle detection has become a hot topic in this research field: only vehicles accurately segmented from the background can be recognized and tracked. Applying video vehicle detection and image processing technology to identify the number, types and motion characteristics of vehicles within the same scene can therefore provide a real-time basis for intelligent traffic control. This paper first introduces the concept of intelligent transportation systems and the importance of image processing technology in vehicle recognition and traffic statistics, gives an overview of video vehicle detection methods and, comparing video detection with other detection technologies, points out the advantages of video detection. Finally, we design a real-time and reliable background subtraction method and an area-based vehicle recognition method using an information fusion algorithm, implemented with the MATLAB/GUI development tool on the Windows operating system platform. The algorithm is applied to frame-by-frame traffic flow images. The experimental results show that the algorithm's vehicle flow statistics are very good.
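
    A minimal sketch of the counting idea is shown below; the static background frame, global threshold and area gate are assumptions, and the paper's information fusion step is not reproduced.

        import numpy as np
        from scipy.ndimage import label

        def count_vehicles(frame, background, thresh=30, min_area=400):
            """Counting step on grayscale frames: subtract a background
            frame, keep pixels that changed by more than `thresh`, then
            count connected regions whose area exceeds `min_area`."""
            diff = np.abs(frame.astype(int) - background.astype(int))
            moving = diff > thresh
            labels, n = label(moving)
            areas = np.bincount(labels.ravel())[1:]   # skip background label 0
            return int((areas >= min_area).sum())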

  17. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

    Object tracking is one of the main fields within computer vision. Amongst various methods and approaches for object detection and tracking, the background subtraction approach makes the detection of objects easier; the proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard video data sets and user-defined data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search (MDS) pattern algorithm is proposed that uses a small diamond-shaped search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond-shaped pattern, based on the point with the minimum cost function; the algorithm ends with the small-shape pattern at the last step. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach, and finally the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computational time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
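
    For reference, the classic diamond search that MDS modifies can be sketched as follows; the MDS small-to-large pattern ordering is not reproduced here, and the block size and SAD cost are conventional choices.

        import numpy as np

        LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
                (-1, -1), (-1, 1), (1, -1), (1, 1)]          # large diamond
        SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]    # small diamond

        def sad(cur, ref, y, x, by, bx, B):
            """Sum of absolute differences between the current block at
            (y, x) and a candidate block at (by, bx) in the reference."""
            h, w = ref.shape
            if not (0 <= by <= h - B and 0 <= bx <= w - B):
                return np.inf
            return np.abs(cur[y:y+B, x:x+B].astype(int)
                          - ref[by:by+B, bx:bx+B].astype(int)).sum()

        def diamond_search(cur, ref, y, x, B=16):
            """Classic diamond search: walk the large diamond until the
            best match sits at its centre, then refine once with the
            small diamond. Returns the motion vector (dy, dx)."""
            cy, cx = y, x
            while True:
                costs = [sad(cur, ref, y, x, cy+dy, cx+dx, B) for dy, dx in LDSP]
                k = int(np.argmin(costs))
                if k == 0:                   # centre is best: switch to SDSP
                    break
                cy, cx = cy + LDSP[k][0], cx + LDSP[k][1]
            costs = [sad(cur, ref, y, x, cy+dy, cx+dx, B) for dy, dx in SDSP]
            k = int(np.argmin(costs))
            return cy + SDSP[k][0] - y, cx + SDSP[k][1] - x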

  18. Cosmic Microwave Background Mapmaking with a Messenger Field

    Science.gov (United States)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
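
    As a rough sketch of the messenger idea (after the Elsner & Wandelt messenger formulation this method builds on), the 1-D Wiener-filter toy below alternates a pixel-space update of the messenger field t with a harmonic-space update of the signal s, with a cooling factor lam multiplying the messenger covariance; the noise split N = Nbar + tau*I takes tau as the smallest pixel noise variance. The Fourier convention for the signal power Sk and the cooling schedule are assumptions of the sketch, not the paper's mapmaking operator.

      import numpy as np

      def messenger_wiener(d, Npix, Sk, lambdas=(64, 16, 4, 1, 1, 1)):
          # d: data vector; Npix: per-pixel noise variance; Sk: signal power
          # per rfft mode (assumed in the same units as the pixel variance)
          n = d.size
          tau = Npix.min()
          Nbar = Npix - tau                      # heterogeneous noise remainder, >= 0
          s = np.zeros(n)
          for lam in lambdas:                    # cooling: lam -> 1 recovers the solution
              T = lam * tau
              with np.errstate(divide='ignore', invalid='ignore'):
                  t = (d / Nbar + s / T) / (1.0 / Nbar + 1.0 / T)
              t = np.where(Nbar == 0.0, d, t)    # uniform-noise pixels: messenger = data
              tk = np.fft.rfft(t)
              sk = tk * Sk / (Sk + T)            # (S^-1 + T^-1)^-1 T^-1, diagonal in k
              s = np.fft.irfft(sk, n)
          return s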

  19. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The proposed spot segmentation algorithm uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image, and spots are automatically cropped from the microarray image using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, possibility fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images corrupted with different levels of additive white Gaussian noise (AWGN). The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normalized mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows improvements of 0.9% and 0.7% for high-noise and low-noise microarray images, respectively, compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.
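
    PFLICM folds typicality and local spatial information into the memberships; purely as a point of reference, the sketch below is a plain two-class fuzzy c-means on pixel intensities, the baseline family these algorithms extend, not the authors' PFLICM.

      import numpy as np

      def fcm_intensity(img, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
          # classic fuzzy c-means on the intensities of a cropped spot image
          x = img.ravel().astype(float)
          rng = np.random.default_rng(seed)
          centers = rng.choice(x, size=c, replace=False)
          for _ in range(iters):
              d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # (c, n) distances
              u = d ** (-2.0 / (m - 1.0))
              u /= u.sum(axis=0)                                  # fuzzy memberships
              new = (u**m @ x) / (u**m).sum(axis=1)               # weighted means
              if np.abs(new - centers).max() < tol:
                  centers = new
                  break
              centers = new
          labels = np.argmax(u, axis=0).reshape(img.shape)
          fg = labels == int(np.argmax(centers))   # brighter cluster = spot foreground
          return fg, centers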

  20. A new algorithmic approach for fingers detection and identification

    Science.gov (United States)

    Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad

    2013-03-01

    Gesture recognition is concerned with the goal of interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important constraints, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of a human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The approach identifies fingers in a real-time environment while keeping the memory and time costs as low as possible.
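
    A compact OpenCV sketch of the pre-processing stage described here: a dynamic threshold (Otsu's method is an assumption; the paper does not name one) eliminates the background, and connected-component labeling keeps the largest blob as the hand. The finger and MCP identification logic is the authors' and is not reproduced.

      import cv2
      import numpy as np

      def detect_hand(gray):
          # dynamic (Otsu) thresholding for background elimination
          blur = cv2.GaussianBlur(gray, (5, 5), 0)
          _, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          # connected-component labeling; assume the largest component is the hand
          n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
          if n < 2:
              return None
          hand = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
          return (labels == hand).astype(np.uint8) * 255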

  1. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

    Science.gov (United States)

    Petry, Frederick E.; And Others

    1993-01-01

    Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…

  2. Testing mapping algorithms of the cancer-specific EORTC QLQ-C30 onto EQ-5D in malignant mesothelioma

    NARCIS (Netherlands)

    D.T. Arnold (David); D. Rowen (Donna); M.M. Versteegh (Matthijs); A. Morley (Anna); C.E. Hooper (Clare); N.A. Maskell (Nicholas)

    2015-01-01

    Background: In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D using existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy in

  3. Stall Recovery Guidance Algorithms Based on Constrained Control Approaches

    Science.gov (United States)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana

    2016-01-01

    Aircraft loss-of-control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms that are capable of assisting pilots to identify the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which run in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop medium-fidelity simulation experiment.

  4. Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Johansen, Mette Dencker

    2014-01-01

    Background: The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. Subjects and Methods: CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. Results: The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD

  5. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    Science.gov (United States)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost based on uncertainty theory. In the portfolio selection problem, the returns of securities and the liquidity of assets are assumed to be uncertain variables because of unforeseen incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics in the presence of independent additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
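
    For orientation, the crisp classical counterpart of this model minimises the total variance of portfolio return plus background risk subject to a return target. The SciPy sketch below solves that deterministic analogue; the covariance inputs, the asset/background covariance vector cov_bg, and the no-short-selling bounds are illustrative assumptions, not the paper's uncertain-variable formulation.

      import numpy as np
      from scipy.optimize import minimize

      def mv_with_background(mu, Sigma, cov_bg, var_bg, r_target):
          # Var(w'R + Z) = w' Sigma w + 2 w' cov_bg + var_bg, Z = background risk
          n = len(mu)
          obj = lambda w: w @ Sigma @ w + 2.0 * (w @ cov_bg) + var_bg
          cons = ({'type': 'eq',   'fun': lambda w: w.sum() - 1.0},      # fully invested
                  {'type': 'ineq', 'fun': lambda w: w @ mu - r_target})  # return target
          res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                         constraints=cons, method='SLSQP')
          return res.x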

  6. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    International Nuclear Information System (INIS)

    Carter, Joshua A.; Agol, Eric

    2013-01-01

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance—"smearing"—as would be incurred with traditional algorithms; however, this is at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.
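
    The quasiperiodic constraint can be captured by a small dynamic program: choose transit epochs whose successive spacings lie within [dmin, dmax] so that the summed per-epoch detection statistic is maximal. The sketch below is that simplified kernel only; producing the per-epoch scores (e.g., by box-filtering the light curve) and calibrating their significance, as QATS does, are assumed to happen elsewhere.

      import numpy as np

      def quasiperiodic_chain(scores, dmin, dmax):
          # max-sum chain of indices with successive gaps in [dmin, dmax]
          n = len(scores)
          best = np.array(scores, dtype=float)   # best chain ending at each index
          prev = np.full(n, -1, dtype=int)
          for i in range(n):
              lo, hi = max(0, i - dmax), i - dmin
              if hi >= 0:
                  j = lo + int(np.argmax(best[lo:hi + 1]))
                  if best[j] + scores[i] > best[i]:
                      best[i] = best[j] + scores[i]
                      prev[i] = j
          # backtrack the best chain: these are the candidate transit epochs
          i = int(np.argmax(best))
          times = []
          while i != -1:
              times.append(i)
              i = prev[i]
          return times[::-1]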

  7. Artificial bee colony algorithm for single-trial electroencephalogram analysis.

    Science.gov (United States)

    Hsu, Wei-Yen; Hu, Ya-Ping

    2015-04-01

    In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model coefficients, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with results obtained without artifact removal or feature selection, and with feature selection by a genetic algorithm, on single-trial EEG data from 6 subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.

  8. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for the automatic localization of human eyes in grayscale or color still images with complex backgrounds. Firstly, a determination criterion for eye location is established from prior knowledge of geometrical facial features. Secondly, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) is estimated. Thirdly, with a progressive increase of the threshold by an appropriate step within that range, once two eye blocks appear in the segmented image, they are detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to verify that the two detected blocks are indeed eyes. To avoid background interference, skin color segmentation can be applied in order to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency of the algorithm and its correct localization rate.

  9. Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael L.; Rots, A. H.; Primini, F. A.; Evans, I. N.; Glotfelty, K. J.; Hain, R.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory will be used to generate the most extensive X-ray source catalog produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  10. Pole placement algorithm for control of civil structures subjected to earthquake excitation

    Directory of Open Access Journals (Sweden)

    Nikos Pnevmatikos

    2017-04-01

    Full Text Available In this paper the control algorithm for civil structures subjected to earthquake excitation is thoroughly investigated. The objective of this work is the control of structures by means of the pole placement algorithm, in order to improve their response against earthquake actions. Successful application of the algorithm requires judicious placement of the closed-loop eigenvalues by the designer. The pole placement algorithm has been widely applied to the control of mechanical systems. In this paper, a modification of the mathematical background of the algorithm that makes it suitable for fixed civil structures is first presented. The proposed approach is demonstrated by numerical simulations for the control of both single- and multi-degree-of-freedom systems subjected to seismic actions. Numerical results show that the control algorithm is efficient in reducing the response of building structures, with a small amount of required control force.
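
    As a minimal illustration of the underlying machinery (not the authors' modified formulation), the SciPy snippet below places the closed-loop poles of a single-degree-of-freedom shear-frame model by state feedback; the mass, damping, stiffness and target pole values are made up.

      import numpy as np
      from scipy.signal import place_poles

      # single-storey shear frame: m x'' + c x' + k x = u (illustrative values)
      m, c, k = 1e5, 5e3, 4e6                   # kg, N·s/m, N/m
      A = np.array([[0.0, 1.0], [-k / m, -c / m]])
      B = np.array([[0.0], [1.0 / m]])

      # move the lightly damped open-loop poles to well-damped targets
      target = np.array([-2 + 4j, -2 - 4j])
      K = place_poles(A, B, target).gain_matrix
      print(K)   # state-feedback gains; closed loop: x' = (A - B K) x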

  11. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history

    OpenAIRE

    Cherry, Joshua L.

    2017-01-01

    Background: Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Results: Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data....

  12. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then it calculates the variation in clustering performance after removing each feature. Finally, the degree of distinctiveness of each feature vector is evaluated according to this result; the effective feature vectors are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set, reducing the dimensionality of the features and thus the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
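
    A sketch of the leave-one-feature-out evaluation in Python with scikit-learn (k-means++ is the initializer the abstract names); the abstract does not specify the clustering-performance measure, so the silhouette score is used here as a stand-in, and the thresholding step is omitted.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      def cluster_quality(X, n_clusters=2, seed=0):
          labels = KMeans(n_clusters=n_clusters, init='k-means++',
                          n_init=10, random_state=seed).fit_predict(X)
          return silhouette_score(X, labels)

      def feature_distinctiveness(X, n_clusters=2, seed=0):
          # drop in clustering quality when feature j is removed: a large drop
          # means feature j helps separate attack traffic from background traffic
          base = cluster_quality(X, n_clusters, seed)
          return np.array([base - cluster_quality(np.delete(X, j, axis=1),
                                                  n_clusters, seed)
                           for j in range(X.shape[1])])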

  13. Introduction to genetic algorithms as a modeling tool

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    Genetic algorithms are search and classification techniques modeled on natural adaptive systems. This is an introduction to their use as a modeling tool with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in genetic algorithms and to recognize those which might impact on electric power engineering. Beginning with a discussion of genetic algorithms and their origin as a model of biological adaptation, their advantages and disadvantages are described in comparison with other modeling tools such as simulation and neural networks in order to provide guidance in selecting appropriate applications. In particular, their use is described for improving expert systems from actual data and they are suggested as an aid in building mathematical models. Using the Thermal Performance Advisor as an example, it is suggested how genetic algorithms might be used to make a conventional expert system and mathematical model of a power plant adapt automatically to changes in the plant's characteristics

  14. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  16. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete statement of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  17. The LabelHash algorithm for substructure matching

    Directory of Open Access Journals (Sweden)

    Bryant Drew H

    2010-11-01

    Full Text Available Background: There is an increasing number of proteins with known structure but unknown function. Determining their function would have a significant impact on understanding diseases and designing new therapeutics. However, experimental protein function determination is expensive and very time-consuming. Computational methods can facilitate function determination by identifying proteins that have high structural and chemical similarity. Results: We present LabelHash, a novel algorithm for matching substructural motifs to large collections of protein structures. The algorithm consists of two phases. In the first phase the proteins are preprocessed in a fashion that allows for instant lookup of partial matches to any motif. In the second phase, partial matches for a given motif are expanded to complete matches. The general applicability of the algorithm is demonstrated with three different case studies. First, we show that we can accurately identify members of the enolase superfamily with a single motif. Next, we demonstrate how LabelHash can complement SOIPPA, an algorithm for motif identification and pairwise substructure alignment. Finally, a large collection of Catalytic Site Atlas motifs is used to benchmark the performance of the algorithm. LabelHash runs very efficiently in parallel; matching a motif against all proteins in the 95% sequence identity filtered non-redundant Protein Data Bank typically takes no more than a few minutes. The LabelHash algorithm is available through a web server and as a suite of standalone programs at http://labelhash.kavrakilab.org. The output of the LabelHash algorithm can be further analyzed with Chimera through a plugin that we developed for this purpose. Conclusions: LabelHash is an efficient, versatile algorithm for large-scale substructure matching. When LabelHash is running in parallel, motifs can typically be matched against the entire PDB on the order of minutes. The algorithm is able to identify

  18. Search and optimization by metaheuristics techniques and algorithms inspired by nature

    CERN Document Server

    Du, Ke-Lin

    2016-01-01

    This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...

  19. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  20. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this study is to compare radioxenon beta–gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  2. Motif finding in DNA sequences based on skipping nonconserved positions in background Markov chains.

    Science.gov (United States)

    Zhao, Xiaoyan; Sze, Sing-Hoi

    2011-05-01

    One strategy to identify transcription factor binding sites is through motif finding in upstream DNA sequences of potentially co-regulated genes. Despite extensive efforts, none of the existing algorithms perform very well. We consider a string representation that allows arbitrary ignored positions within the nonconserved portion of single motifs, and use O(2^l) Markov chains to model the background distributions of motifs of length l while skipping these positions within each Markov chain. By focusing initially on positions that have fixed nucleotides to define core occurrences, we develop an algorithm to identify motifs of moderate lengths. We compare the performance of our algorithm to other motif finding algorithms on a few benchmark data sets, and show that significant improvement in accuracy can be obtained when the sites are sufficiently conserved within a given sample, while comparable performance is obtained when the site conservation rate is low. A software program (PosMotif) and detailed results are available online at http://faculty.cse.tamu.edu/shsze/posmotif.
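
    A minimal sketch of the background-model side only: train a k-th order Markov chain on the upstream sequences and score a candidate word against it, so that a low background log-probability flags over-representation. The paper's key ingredient, skipping nonconserved positions inside each chain, is not reproduced, and the add-one smoothing is an assumption.

      import math
      from collections import defaultdict

      def train_background(seqs, k=2):
          # counts[context][base] over all (k+1)-mers in the background sequences
          counts = defaultdict(lambda: defaultdict(int))
          for s in seqs:
              for i in range(k, len(s)):
                  counts[s[i - k:i]][s[i]] += 1
          return counts

      def background_logprob(counts, k, word):
          # log P(word | k-th order background chain), add-one smoothed over ACGT
          lp = 0.0
          for i in range(k, len(word)):
              ctx = counts.get(word[i - k:i], {})
              total = sum(ctx.values())
              lp += math.log((ctx.get(word[i], 0) + 1) / (total + 4))
          return lp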

  3. Developing the Many-Sided Background of the Preschool Children Learning Activities by means of Algorismic Skills Development

    Directory of Open Access Journals (Sweden)

    L. V. Voronina

    2013-01-01

    Full Text Available The paper deals with a current problem of modern education: developing the many-sided background for preschool children's learning activities. At the present stage it is necessary to develop algorithmic skills: the capability of, and readiness for, solving different kinds of problems by performing a strict sequence of operations according to given patterns. Such algorithmic skills have a meta-disciplinary character and can be developed both in class and at home. The paper highlights the components of algorithmic skills (personal, regulatory, cognitive and communicative) and the key indicators of their formation. A method for developing the algorithmic skills of preschool children is given, comprising three age-related stages: the ability to perform linear algorithms (middle group), working with branched and cyclic algorithms (senior group), and mastering the acquired skills with the ability to perform some tasks independently (preparatory group). The paper is addressed to specialists working in the preschool educational sphere: preschool teachers, methodologists, psychologists, and directors of kindergartens.

  4. Clinical algorithms to aid osteoarthritis guideline dissemination

    DEFF Research Database (Denmark)

    Meneses, S. R. F.; Goode, A. P.; Nelson, A. E

    2016-01-01

    Background: Numerous scientific organisations have developed evidence-based recommendations aiming to optimise the management of osteoarthritis (OA). Uptake, however, has been suboptimal. The purpose of this exercise was to harmonize the recent recommendations and develop a user-friendly treatment...... algorithm to facilitate translation of evidence into practice. Methods: We updated a previous systematic review on clinical practice guidelines (CPGs) for OA management. The guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation for quality and the standards for developing...... to facilitate the implementation of guidelines in clinical practice are necessary. The algorithms proposed are examples of how to apply recommendations in the clinical context, helping the clinician to visualise the patient flow and timing of different treatment modalities. (C) 2016 Osteoarthritis Research...

  5. A novel approach to background subtraction in contrast-enhanced dual-energy digital mammography with commercially available mammography devices: Noise minimization

    International Nuclear Information System (INIS)

    Contillo, Adriano; Di Domenico, Giovanni; Cardarelli, Paolo; Gambaccini, Mauro; Taibi, Angelo

    2016-01-01

    Purpose: Dual-energy image subtraction represents a useful tool to improve the detectability of small lesions, especially in dense breasts. A feature it shares with all x-ray imaging techniques is the appearance of fluctuations in the texture of the background, which can obscure the visibility of interesting details. The aim of the work is to investigate the main noise sources, in order to create a better performing subtraction mechanism. In particular, the structural noise cancellation was achieved by means of a suitable extension of the dual-energy algorithm. Methods: The effect of the cancellation procedure was tested on an analytical simulation of a target with varying structural composition. Subsequently, the subtraction algorithm was also applied to a set of actual radiographs of a breast phantom exhibiting a nonuniform background pattern. The background power spectra of the outcomes were computed and compared to the ones obtained from a standard subtraction algorithm. Results: The comparison between the standard and the proposed cancellations showed an overall suppression of the magnitudes of the spectra, as well as a flattening of the frequency dependence of the structural component of the noise. Conclusions: The proposed subtraction procedure provides an effective cancellation of the residual background fluctuations. When combined with the polychromatic correction already described in a companion publication, it results in a high performing dual-energy subtraction scheme for commercial mammography units.

  7. Kepler Planet Detection Metrics: Automatic Detection of Background Objects Using the Centroid Robovetter

    Science.gov (United States)

    Mullally, Fergal

    2017-01-01

    We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.

  8. An efficient background modeling approach based on vehicle detection

    Science.gov (United States)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The existing Gaussian mixture model (GMM), which is widely used in vehicle detection, is inefficient at detecting the foreground during the modeling phase, because it needs quite a long time to blend shadows into the background. In order to overcome this problem, an improved method is proposed in this paper. First of all, each frame is divided into several areas (A, B, C and D), determined by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per pixel to decrease the total number of distributions and save memory space effectively. With this method, different threshold values and different numbers of Gaussian distributions are adopted for different areas. The results show that the learning speed and model accuracy of the proposed algorithm surpass the traditional GMM: by about the 50th frame, interference from vehicles has essentially been eliminated, the number of model components is only 35% to 43% of the standard approach, and the per-frame processing speed increases by approximately 20%. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection; it can promote the development of intelligent transportation, and it is also meaningful for other background modeling methods.
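
    For reference, the stock OpenCV Gaussian-mixture subtractor that this work improves on runs in a few lines; the file name and all parameter values below are placeholders, and the per-area learning rates of the proposed method are not reproduced.

      import cv2

      cap = cv2.VideoCapture('traffic.mp4')       # hypothetical input video
      mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                               detectShadows=True)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          fg = mog.apply(frame, learningRate=0.01)   # larger rate absorbs shadows faster
          fg = cv2.medianBlur(fg, 5)
          # MOG2 marks shadows as 127; keep only confident foreground (255)
          _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
          cv2.imshow('foreground', fg)
          if cv2.waitKey(1) == 27:
              break
      cap.release()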

  9. e-DMDAV: A new privacy preserving algorithm for wearable enterprise information systems

    Science.gov (United States)

    Zhang, Zhenjiang; Wang, Xiaoni; Uden, Lorna; Zhang, Peng; Zhao, Yingsi

    2018-04-01

    Wearable devices have been widely used in many fields to improve the quality of people's lives. More and more data on individuals and businesses are collected by statistical organizations through those devices. Almost all of this data holds confidential information. Statistical Disclosure Control (SDC) seeks to protect statistical data in such a way that it can be released without giving away confidential information that can be linked to specific individuals or entities. The MDAV (Maximum Distance to Average Vector) algorithm is an efficient micro-aggregation algorithm belonging to SDC. However, the MDAV algorithm cannot survive homogeneity and background knowledge attacks because it was designed for static numerical data. This paper proposes a systematic dynamic-updating anonymity algorithm based on MDAV, called the e-DMDAV algorithm. This algorithm introduces a new parameter and a table to ensure that each cluster of k records spans no fewer than e distinct values, for both numerical and non-numerical datasets. The new algorithm has been evaluated and compared with the MDAV algorithm. The simulation results show that the new algorithm outperforms MDAV in terms of minimizing distortion and disclosure risk with a similar computational cost.
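
    For background, the baseline MDAV heuristic that e-DMDAV extends can be sketched as follows (a simplified variant: the final remainder is kept as a single group, and every group is replaced by its centroid); the paper's extra parameter e and distinct-value table are not reproduced.

      import numpy as np

      def mdav(X, k):
          # simplified MDAV: repeatedly group the record furthest from the global
          # centroid (and the record furthest from that one) with their k-1 nearest
          # neighbours; the remainder (k..3k-1 records) stays as one final group
          idx = list(range(len(X)))
          groups = []
          def carve(seed_row):
              d = np.linalg.norm(X[idx] - seed_row, axis=1)
              grp = [idx[i] for i in np.argsort(d)[:k]]
              for g in grp:
                  idx.remove(g)
              groups.append(grp)
          while len(idx) >= 3 * k:
              centroid = X[idx].mean(axis=0)
              r = idx[int(np.argmax(np.linalg.norm(X[idx] - centroid, axis=1)))]
              s = idx[int(np.argmax(np.linalg.norm(X[idx] - X[r], axis=1)))]
              carve(X[r])
              if s in idx:
                  carve(X[s])            # s may already sit in r's group
          groups.append(idx)
          Xa = X.astype(float).copy()
          for g in groups:
              Xa[g] = X[g].mean(axis=0)  # micro-aggregation: group -> centroid
          return Xa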

  10. Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology

    OpenAIRE

    Daryoush Shahbazi-Gahrouei; Mohsen Saeb; Shahram Monadi; Iraj Jabbari

    2017-01-01

    Background: Performing audits plays an important role in the quality assurance program in radiation oncology. Among different algorithms, TiGRT is one of the common software applications for dose calculation. This study aimed to assess the clinical implications of the TiGRT algorithm by measuring dose and comparing it with the calculated dose delivered to the patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. Materials and Methods: Nonhomogeneous phantom as quality dose veri...

  11. Efficient Algorithms for Electrostatic Interactions Including Dielectric Contrasts

    Directory of Open Access Journals (Sweden)

    Christian Holm

    2013-10-01

    Full Text Available Coarse-grained models of soft matter are usually combined with implicit solvent models that take the electrostatic polarizability into account via a dielectric background. In biophysical or nanoscale simulations that include water, this constant can vary greatly within the system. Performing molecular dynamics or other simulations that need to compute exact electrostatic interactions between charges in those systems is computationally demanding. We review here several algorithms developed by us that perform exactly this task. For planar dielectric surfaces in partial periodic boundary conditions, the arising image charges can be either treated with the MMM2D algorithm in a very efficient and accurate way or with the electrostatic layer correction term, which enables the user to use his favorite 3D periodic Coulomb solver. Arbitrarily-shaped interfaces can be dealt with using induced surface charges with the induced charge calculation (ICC*) algorithm. Finally, the local electrostatics algorithm MEMD (Maxwell Equations Molecular Dynamics) even allows one to employ a smoothly varying dielectric constant in the systems. We introduce the concepts of these three algorithms and an extension for the inclusion of boundaries that are to be held fixed at a constant potential (metal conditions). For each method, we present a showcase application to highlight the importance of dielectric interfaces.

  12. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    Science.gov (United States)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or malicious tampering. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the perceptual features of speech: each speech component is obtained by wavelet packet decomposition, and the LPCC, LSP and ISP features of each component are extracted to constitute the speech feature tensor. Speech authentication is done by generating hash values through quantification of the feature matrix using its median. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and it is able to resist the attack of common background noise. Also, the algorithm is highly efficient in terms of arithmetic, able to meet the real-time requirements of speech communication and complete speech authentication quickly.
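
    Setting the tensor decomposition aside, the hash-and-match step the abstract describes (quantise a frame-by-coefficient feature matrix against its median, then compare bit error rates) can be sketched as below; the BER threshold is an assumption.

      import numpy as np

      def perceptual_hash(features):
          # features: (frames, coeffs) matrix, e.g. per-frame LPCC/LSP/ISP coefficients
          med = np.median(features, axis=0)
          return (features > med).astype(np.uint8).ravel()   # one hash bit per entry

      def same_content(h1, h2, ber_threshold=0.25):
          ber = np.mean(h1 != h2)        # bit error rate between the two hashes
          return ber < ber_threshold     # small BER => perceptually identical speech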

  13. BALL - biochemical algorithms library 1.3

    Directory of Open Access Journals (Sweden)

    Stöckel Daniel

    2010-10-01

    Full Text Available Background: The Biochemical Algorithms Library (BALL) is a comprehensive rapid application development framework for structural bioinformatics. It provides an extensive C++ class library of data structures and algorithms for molecular modeling and structural bioinformatics. Using BALL as a programming toolbox not only greatly reduces application development times but also helps ensure stability and correctness by avoiding the error-prone reimplementation of complex algorithms, replacing them with calls into a library that has been well-tested by a large number of developers. In the ten years since its original publication, BALL has seen a substantial increase in functionality and numerous other improvements. Results: Here, we discuss BALL's current functionality and highlight the key additions and improvements: support for additional file formats, molecular edit-functionality, new molecular mechanics force fields, novel energy minimization techniques, docking algorithms, and support for cheminformatics. Conclusions: BALL is available for all major operating systems, including Linux, Windows, and MacOS X. It is available free of charge under the GNU Lesser General Public License (LGPL). Parts of the code are distributed under the GNU General Public License (GPL). BALL is available as source code and binary packages from the project web site at http://www.ball-project.org. Recently, it has been accepted into the Debian project; integration into further distributions is currently pursued.

  14. Two-step phase retrieval algorithm based on the quotient of inner products of phase-shifting interferograms

    International Nuclear Information System (INIS)

    Niu, Wenhu; Zhong, Liyun; Sun, Peng; Zhang, Wangping; Lu, Xiaoxu

    2015-01-01

    Based on the quotient of inner products, a simple and rapid algorithm is proposed to retrieve the measured phase from two-frame phase-shifting interferograms with an unknown phase shift. First, we filter the background of the interferograms with a Gaussian high-pass filter. Second, we calculate the inner products of the background-filtered interferograms. Third, we extract the phase shift from the quotient of the inner products and then calculate the measured phase with an arctangent function. Finally, we test the performance of the proposed algorithm through simulation and through an experiment on a vortex phase plate. Both the simulation and the experimental results show that the phase shift and the measured phase can be obtained rapidly, conveniently and with high accuracy by the proposed algorithm. (paper)
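
    Under the usual two-frame model I1 = A + B cos(phi) and I2 = A + B cos(phi + delta), the abstract's three steps translate directly into code: high-pass filtering removes A, the normalised inner product of the filtered frames estimates cos(delta), and an arctangent gives phi. The sketch below follows that reading; the filter width is an assumption.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def two_step_phase(I1, I2, sigma=20.0):
          # 1) Gaussian high-pass filtering removes the slowly varying background A
          f1 = I1.astype(float); f1 -= gaussian_filter(f1, sigma)
          f2 = I2.astype(float); f2 -= gaussian_filter(f2, sigma)
          # 2) normalised inner product of the filtered frames estimates cos(delta)
          cosd = (f1 * f2).sum() / np.sqrt((f1 * f1).sum() * (f2 * f2).sum())
          delta = np.arccos(np.clip(cosd, -1.0, 1.0))
          # 3) with f1 = B cos(phi), f2 = B cos(phi + delta):
          #    f1*cos(delta) - f2 = B sin(phi) sin(delta); f1*sin(delta) = B cos(phi) sin(delta)
          phi = np.arctan2(f1 * np.cos(delta) - f2, f1 * np.sin(delta))
          return phi, delta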

  15. Multilevel processor-sharing algorithm for M/G/1 systems with priorities

    Energy Technology Data Exchange (ETDEWEB)

    Yassouridis, A.; Koller, R.

    1983-01-01

    The well-known multilevel processor-sharing algorithm for M/G/1 systems without priorities is extended to M/G/1 systems with priority classes. The average response time t_j(x) and the average waiting time w_j(x) for a j-class job, which requires a total service of x sec, are analytically calculated. Some figures demonstrate how the priority classes and the total number of different levels affect the behaviour of the functions t_j(x) and w_j(x). In addition, the foreground-background algorithm with priorities, which is not yet covered in the literature, is treated as a special case of the multilevel processor-sharing algorithm. 8 references.

  16. MRS algorithm: a new method for searching myocardial region in SPECT myocardial perfusion images.

    Science.gov (United States)

    He, Yuan-Lie; Tian, Lian-Fang; Chen, Ping; Li, Bin; Mao, Zhong-Yuan

    2005-10-01

    First, the necessity of automatically segmenting the myocardium from myocardial SPECT images is discussed in Section 1. To eliminate the influence of the background, the optimal threshold segmentation method modified for the MRS algorithm is explained in Section 2. Then, the image erosion structure is applied to identify the myocardium region and the liver region. The contour tracing method is introduced to extract the myocardial contour. To locate the centroid of the myocardium, the myocardial centroid searching method is developed. The protocol of the MRS algorithm is summarized in Section 6. The performance of the MRS algorithm is investigated and the conclusion is drawn in Section 7. Finally, the importance of the MRS algorithm and its possible improvements are discussed.

  17. Jet algorithms performance in 13 TeV data

    CERN Document Server

    CMS Collaboration

    2017-01-01

    The performance of jet algorithms with data collected by the CMS detector at the LHC in 2015 with a center-of-mass energy of 13 TeV, corresponding to 2.3 fb$^{-1}$ of integrated luminosity, is reported. The criteria used to reject jets originating from detector noise are discussed and the efficiency and noise jet rejection rate are measured. A likelihood discriminant designed to differentiate jets initiated by light-quark partons from jets initiated from gluons is studied. A multivariate discriminator is built to distinguish jets initiated by a single high $p_{\\mathrm{T}}$ quark or gluon from jets originating from the overlap of multiple low $p_{\\mathrm{T}}$ particles from non-primary vertices (pileup jets). Algorithms used to identify large radius jets reconstructed from the decay products of highly Lorentz boosted W bosons and top quarks are discussed, and the efficiency and background rejection rates of these algorithms are measured.

  18. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people in video surveillance has found growing application in computer vision. This paper proposes a new approach to modeling the human body in a 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, achieved by a frame differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow and knee points, are extracted. This work represents the body model in three different ways: a stick figure model, a patch model and a rectangle body model. Human activities are analyzed with the help of the 2D model for pre-defined poses in monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
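
    A compact sketch of the first two stages, frame differencing plus thinning, with endpoint/branch detection as a stand-in for the paper's thirteen feature points; the threshold and the libraries used are assumptions.

      import numpy as np
      from scipy.ndimage import convolve
      from skimage.morphology import skeletonize

      def body_skeleton(frame, background, thresh=25):
          # background subtraction by frame differencing, then thinning
          silhouette = np.abs(frame.astype(int) - background.astype(int)) > thresh
          return skeletonize(silhouette)

      def skeleton_points(skel):
          # endpoints have exactly one skeleton neighbour; branch points have >= 3
          nbrs = convolve(skel.astype(int), np.ones((3, 3), int), mode='constant') - skel
          return skel & (nbrs == 1), skel & (nbrs >= 3)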

  19. Correlation between model observers in uniform background and human observers in patient liver background for a low-contrast detection task in CT

    Science.gov (United States)

    Gong, Hao; Yu, Lifeng; Leng, Shuai; Dilger, Samantha; Zhou, Wei; Ren, Liqiang; McCollough, Cynthia H.

    2018-03-01

    The channelized Hotelling observer (CHO) has demonstrated strong correlation with the human observer (HO) in both single-slice and multi-slice viewing modes in low-contrast detection tasks with a uniform background. However, it remains unknown whether the simplest single-slice CHO in a uniform background can be used to predict human observer performance in more realistic tasks that involve patient anatomical background and a multi-slice viewing mode. In this study, we aim to investigate the correlation between CHO performance in a uniform water background and human observer performance in a multi-slice viewing mode on patient liver background for a low-contrast lesion detection task. The human observer study was performed on CT images from 7 abdominal CT exams. A noise insertion tool was employed to synthesize CT scans at two additional dose levels. A validated lesion insertion tool was used to numerically insert metastatic liver lesions of various sizes and contrasts into both phantom and patient images. We selected 12 conditions out of 72 possible experimental conditions to evaluate the correlation at various radiation doses, lesion sizes, lesion contrasts and reconstruction algorithms. CHO results in both single-slice and multi-slice viewing modes were strongly correlated with HO performance. The corresponding Pearson's correlation coefficient was 0.982 (with 95% confidence interval (CI) [0.936, 0.995]) and 0.989 (with 95% CI of [0.960, 0.997]) in multi-slice and single-slice viewing modes, respectively. Therefore, this study demonstrated the potential to use the simplest single-slice CHO to assess image quality for more realistic clinically relevant CT detection tasks.

  20. Reconciling taxonomy and phylogenetic inference: formalism and algorithms for describing discord and inferring taxonomic roots

    Directory of Open Access Journals (Sweden)

    Matsen Frederick A

    2012-05-01

    Full Text Available Background: Although taxonomy is often used informally to evaluate the results of phylogenetic inference and the root of phylogenetic trees, algorithmic methods to do so are lacking. Results: In this paper we formalize these procedures and develop algorithms to solve the relevant problems. In particular, we introduce a new algorithm that solves a "subcoloring" problem to express the difference between a taxonomy and a phylogeny at a given rank. This algorithm improves upon the current best algorithm in terms of asymptotic complexity for the parameter regime of interest; we also describe a branch-and-bound algorithm that saves orders of magnitude in computation on real data sets. We also develop a formalism and an algorithm for rooting phylogenetic trees according to a taxonomy. Conclusions: The algorithms in this paper, and the associated freely-available software, will help biologists better use and understand taxonomically labeled phylogenetic trees.

  1. New time-saving predictor algorithm for multiple breath washout in adolescents

    DEFF Research Database (Denmark)

    Grønbæk, Jonathan; Hallas, Henrik Wegener; Arianto, Lambang

    2016-01-01

    BACKGROUND: Multiple breath washout (MBW) is an informative but time-consuming test. This study evaluates the uncertainty of a time-saving predictor algorithm in adolescents. METHODS: Adolescents were recruited from the Copenhagen Prospective Study on Asthma in Childhood (COPSAC2000) birth cohort...

  2. Algorithmic modeling of the irrelevant sound effect (ISE) by the hearing sensation fluctuation strength.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weissgerber, Tobias; Kerber, Stefan; Fastl, Hugo; Hellbrück, Jürgen

    2012-01-01

    Background sounds, such as narration, music with prominent staccato passages, and office noise impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds that are characterized by a distinct temporal structure with varying successive auditory-perceptive tokens. However, because of the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now, but necessitated behavioral testing. Our database for parametric modeling of the ISE included approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data that was collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength was chosen to model the ISE; it describes the percept of fluctuations when listening to slowly modulated sounds (f_mod …). For the background sounds, the algorithm estimated behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds that were constituted by a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.

  4. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making.

  5. Design, implementation and evaluation of a practical pseudoknot folding algorithm based on thermodynamics

    Directory of Open Access Journals (Sweden)

    Giegerich Robert

    2004-08-01

    Full Text Available Abstract Background The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial-time algorithms have been designed, of which the O(n^6) time and O(n^4) space algorithm by Rivas and Eddy is currently the best available program. Results We introduce the class of canonical simple recursive pseudoknots and present an algorithm that requires O(n^4) time and O(n^2) space to predict the energetically optimal structure of an RNA sequence, possibly containing such pseudoknots. Evaluation against a large collection of known pseudoknotted structures shows the adequacy of the canonization approach and our algorithm. Conclusions RNA pseudoknots of medium size can now be predicted reliably as well as efficiently by the new algorithm.

  6. The concept of ageing in evolutionary algorithms: Discussion and inspirations for human ageing.

    Science.gov (United States)

    Dimopoulos, Christos; Papageorgis, Panagiotis; Boustras, George; Efstathiades, Christodoulos

    2017-04-01

    This paper discusses the concept of ageing as this applies to the operation of Evolutionary Algorithms, and examines its relationship to the concept of ageing as this is understood for human beings. Evolutionary Algorithms constitute a family of search algorithms which base their operation on an analogy from the evolution of species in nature. The paper initially provides the necessary knowledge on the operation of Evolutionary Algorithms, focusing on the use of ageing strategies during the implementation of the evolutionary process. Background knowledge on the concept of ageing, as this is defined scientifically for biological systems, is subsequently presented. Based on this information, the paper provides a comparison between the two ageing concepts, and discusses the philosophical inspirations which can be drawn for human ageing based on the operation of Evolutionary Algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
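
    As a concrete illustration of the ageing strategies discussed above, the following minimal sketch (hypothetical names and parameters, not any specific published EA) evolves a population in which every individual carries an age and is removed once it exceeds a maximum age, regardless of fitness.

```python
import random

def mutate(x, sigma=0.1):
    """Gaussian mutation of a real-valued genome."""
    return x + random.gauss(0.0, sigma)

def evolve(fitness, init, pop_size=30, max_age=5, gens=100):
    """Toy EA with an age-based survivor-selection strategy."""
    pop = [{"x": init(), "age": 0} for _ in range(pop_size)]
    for _ in range(gens):
        for ind in pop:
            ind["age"] += 1
        # best half reproduce; offspring start with age zero
        parents = sorted(pop, key=lambda i: fitness(i["x"]), reverse=True)
        children = [{"x": mutate(p["x"]), "age": 0} for p in parents[:pop_size // 2]]
        # ageing step: individuals past max_age die even if they are fit
        survivors = [i for i in pop if i["age"] <= max_age]
        pop = sorted(survivors + children, key=lambda i: fitness(i["x"]),
                     reverse=True)[:pop_size]
    return max(pop, key=lambda i: fitness(i["x"]))

# usage: maximize -(x - 2)^2 starting from random points
best = evolve(lambda x: -(x - 2) ** 2, lambda: random.uniform(-10, 10))
```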

  7. Evaluating ortholog prediction algorithms in a yeast model clade.

    Directory of Open Access Journals (Sweden)

    Leonidas Salichos

    Full Text Available BACKGROUND: Accurate identification of orthologs is crucial for evolutionary studies and for functional annotation. Several algorithms have been developed for ortholog delineation, but so far, manually curated genome-scale biological databases of orthologous genes for algorithm evaluation have been lacking. We evaluated four popular ortholog prediction algorithms (MultiParanoid, OrthoMCL, RBH: Reciprocal Best Hit, and RSD: Reciprocal Smallest Distance; the last two extended into the clustering algorithms cRBH and cRSD, respectively, so that they can predict orthologs across multiple taxa) against a set of 2,723 groups of high-quality curated orthologs from 6 Saccharomycete yeasts in the Yeast Gene Order Browser. RESULTS: Examination of sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], and accuracy [(TP+TN)/(TP+TN+FP+FN)] across a broad parameter range showed that cRBH was the most accurate and specific algorithm, whereas OrthoMCL was the most sensitive. Evaluation of the algorithms across a varying number of species showed that cRBH had the highest accuracy and lowest false discovery rate [FP/(FP+TP)], followed by cRSD. Of the six species in our set, three descended from an ancestor that underwent whole-genome duplication. Subsequent differential duplicate loss events in the three descendants resulted in distinct classes of gene loss patterns, including cases where the genes retained in the three descendants are paralogs, constituting 'traps' for ortholog prediction algorithms. We found that the false discovery rate of all algorithms dramatically increased in these traps. CONCLUSIONS: These results suggest that simple algorithms, like cRBH, may be better ortholog predictors than more complex ones (e.g., OrthoMCL and MultiParanoid) for evolutionary and functional genomics studies where the objective is the accurate inference of single-copy orthologs (e.g., molecular phylogenetics), but that all algorithms fail to accurately predict orthologs when paralogy is extensive.
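
    The evaluation criteria quoted above are simple to compute once true/false positives and negatives are counted. A minimal sketch (hypothetical function names; predictions and curated orthologs represented as sets of gene pairs drawn from a fixed universe of candidate pairs):

```python
def confusion_counts(predicted, curated, universe):
    """Count TP/FP/TN/FN for predicted vs. curated ortholog pairs (sets)."""
    tp = len(predicted & curated)
    fp = len(predicted - curated)
    fn = len(curated - predicted)
    tn = len(universe) - tp - fp - fn
    return tp, fp, tn, fn

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                  # TP/(TP+FN)
    specificity = tn / (tn + fp)                  # TN/(TN+FP)
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # (TP+TN)/(TP+TN+FP+FN)
    fdr = fp / (fp + tp)                          # FP/(FP+TP)
    return sensitivity, specificity, accuracy, fdr
```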

  8. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm that offers performance competitive with state-of-the-art methods at lower computation time; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick but has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive with state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. The experimental evaluation demonstrated the effectiveness of controlling constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner for boosting.
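
    The constraint-checking data assignment described above, where a point may only join the nearest cluster that violates no constraint, can be sketched as follows (a COP-k-means-style sketch under assumed must-link/cannot-link pair lists, not the boosted version proposed in the article):

```python
import numpy as np

def violates(idx, cluster, assign, must_link, cannot_link):
    """Would assigning point idx to `cluster` break any constraint?"""
    for a, b in must_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign[other] not in (-1, cluster):
            return True
    for a, b in cannot_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign[other] == cluster:
            return True
    return False

def constrained_kmeans(X, k, must_link, cannot_link, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.full(len(X), -1)
        for i in np.argsort(rng.random(len(X))):      # random visit order
            # try clusters from nearest to farthest, keep first feasible one
            for c in np.argsort(((X[i] - centers) ** 2).sum(axis=1)):
                if not violates(i, c, assign, must_link, cannot_link):
                    assign[i] = c
                    break
        for c in range(k):                             # centroid update
            if (assign == c).any():
                centers[c] = X[assign == c].mean(axis=0)
    return assign, centers
```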

  9. Extracting Optical Fiber Background from Surface-Enhanced Raman Spectroscopy Spectra Based on Bi-Objective Optimization Modeling.

    Science.gov (United States)

    Huang, Jie; Shi, Tielin; Tang, Zirong; Zhu, Wei; Liao, Guanglan; Li, Xiaoping; Gong, Bo; Zhou, Tengyuan

    2017-08-01

    We propose a bi-objective optimization model for extracting optical fiber background from the measured surface-enhanced Raman spectroscopy (SERS) spectrum of the target sample in the application of fiber optic SERS. The model is built using curve fitting to resolve the SERS spectrum into several individual bands, and simultaneously matching some resolved bands with the measured background spectrum. The Pearson correlation coefficient is selected as the similarity index and its maximum value is pursued during the spectral matching process. An algorithm is proposed, programmed, and demonstrated successfully in extracting optical fiber background or fluorescence background from the measured SERS spectra of rhodamine 6G (R6G) and crystal violet (CV). The proposed model not only can be applied to remove optical fiber background or fluorescence background for SERS spectra, but also can be transferred to conventional Raman spectra recorded using fiber optic instrumentation.
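
    A minimal sketch of the two ingredients named above, band resolution by curve fitting and Pearson-correlation matching against the measured fiber spectrum (array names, initial guesses, and the choice of Gaussian band shapes are assumptions, not the authors' exact model):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def gaussians(x, *p):
    """Sum of Gaussian bands; p = (amp, center, width) repeated per band."""
    y = np.zeros_like(x, dtype=float)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y = y + a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

def match_background(shift, sers, fiber_bg, p0, bg_band_idx):
    """Fit bands to the measured SERS spectrum, then score how well the
    bands flagged as background reproduce the measured fiber spectrum."""
    popt, _ = curve_fit(gaussians, shift, sers, p0=p0, maxfev=20000)
    bands = np.array(popt).reshape(-1, 3)
    bg = gaussians(shift, *bands[bg_band_idx].ravel())
    r, _ = pearsonr(bg, fiber_bg)   # similarity index from the abstract
    return popt, bg, r
```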

  10. Detecting microsatellites within genomes: significant variation among algorithms

    Directory of Open Access Journals (Sweden)

    Rivals Eric

    2007-04-01

    Full Text Available Abstract Background Microsatellites are short, tandemly-repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Results Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared by fixing parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences for all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Conclusion Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.

  11. Timing of Pulsed Prompt Gamma Rays for Background Discrimination

    International Nuclear Information System (INIS)

    Hueso-Gonzalez, F.; Golnik, C.; Berthel, M.; Dreyer, A.; Kormoll, T.; Rohling, H.; Pausch, G.; Enghardt, W.; Fiedler, F.; Heidel, K.; Schoene, S.; Schwengner, R.; Wagner, A.

    2013-06-01

    In the context of particle therapy, particle range verification is a major challenge for the quality assurance of the treatment. One approach is the measurement of the prompt gamma rays resulting from the tissue irradiation. A Compton camera based on several planes of position sensitive gamma ray detectors, together with an imaging algorithm, is expected to reconstruct the prompt gamma ray emission density profile, which is correlated with the dose distribution. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and OncoRay, a camera prototype has been developed consisting of two scatter planes (CdZnTe cross strip detectors) and an absorber plane (Lu2SiO5 block detector). The data acquisition is based on VME electronics and handled by software developed on the ROOT platform. The prototype was tested at the linear electron accelerator ELBE at HZDR, which was set up to produce bunched bremsstrahlung photons. Their spectrum has similarities with the one expected from prompt gamma rays in the clinical case, and these are also bunched with the accelerator frequency. The time correlation between the pulsed prompt photons and the measured signals was used for background discrimination, achieving a time resolution of 3 ns (2 ns) FWHM for the CZT (LSO) detector. A time-walk correction was applied for the LSO detector and improved its resolution to 1 ns. In conclusion, the detectors are suitable for time-resolved background discrimination in pulsed clinical particle accelerators. Ongoing tasks are the test of the imaging algorithms and the quantitative comparison with simulations. Further experiments will be performed at proton accelerators. (authors)

  12. Resolution recovery for Compton camera using origin ensemble algorithm.

    Science.gov (United States)

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions.

  13. Resolution recovery for Compton camera using origin ensemble algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Andreyev, A. [Philips Healthcare, Highland Heights, Ohio 44143 (United States); Celler, A. [Medical Imaging Research Group, University of British Columbia and Vancouver Coastal Health Research Institute, Vancouver, BC V5Z 1M9 (Canada); Ozsahin, I.; Sitek, A., E-mail: sarkadiu@gmail.com [Gordon Center for Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 and Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2016-08-15

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions.

  14. Resolution recovery for Compton camera using origin ensemble algorithm

    International Nuclear Information System (INIS)

    Andreyev, A.; Celler, A.; Ozsahin, I.; Sitek, A.

    2016-01-01

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions.

  15. Gamma-Ray Background Variability in Mobile Detectors

    Science.gov (United States)

    Aucott, Timothy John

    This is accomplished by making many hours of background measurements with a truck-mounted system, which utilizes high-purity germanium detectors for spectroscopy and sodium iodide detectors for coded aperture imaging. This system also utilizes various peripheral sensors, such as panoramic cameras, laser ranging systems, global positioning systems, and a weather station, to provide context for the gamma-ray data. About three hundred hours of data were taken in the San Francisco Bay Area, covering a wide variety of environments that might be encountered in operational scenarios. These measurements were used in a source injection study to evaluate the sensitivity of different algorithms (imaging and spectroscopy) and hardware (sodium iodide and high-purity germanium detectors). These measurements confirm that background distributions in large, mobile detector systems are dominated by systematic, not statistical, variations, and both spectroscopy and imaging were found to substantially reduce this variability. Spectroscopy performed better than the coded aperture for the given scintillator array (one square meter of sodium iodide) for a variety of sources and geometries. By modeling the statistical and systematic uncertainties of the background, the data can be sampled to simulate the performance of a detector array of arbitrary size and resolution. With a larger array or lower-resolution detectors, however, imaging was better able to compensate for background variability.

  16. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different ... in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment ...

  17. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    Science.gov (United States)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of the background concentration of PM2.5 is important for understanding the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques for estimating PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of background PM concentration to ambient pollution was at a comparable level to the contribution obtained from the previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m³ for the whole year, (34 ± 4) μg/m³ for the heating period (winter), (21 ± 2) μg/m³ for the non-heating period (summer), and (25 ± 2) μg/m³ for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m³, (40 ± 4) μg/m³, (24 ± 5) μg/m³, and (26 ± 2) μg/m³, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
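
    For illustration, the Lyne-Hollick one-parameter recursive digital filter named above can be sketched as follows (a generic multi-pass form with an assumed filter parameter; the study calibrates its parameters against background-site data, which this sketch omits):

```python
import numpy as np

def lyne_hollick_baseline(y, alpha=0.925, passes=3):
    """Separate a slowly varying baseline (background) from a PM2.5
    time series with the Lyne-Hollick recursive digital filter,
    originally used for hydrograph baseflow separation."""
    y = np.asarray(y, dtype=float)
    for p in range(passes):
        if p % 2 == 1:            # alternate forward/backward passes
            y = y[::-1]
        q = np.zeros_like(y)      # high-frequency ("local") component
        for t in range(1, len(y)):
            q[t] = alpha * q[t - 1] + 0.5 * (1 + alpha) * (y[t] - y[t - 1])
            q[t] = min(max(q[t], 0.0), y[t])   # keep baseline in [0, y]
        y = y - q                 # running baseline estimate
        if p % 2 == 1:
            y = y[::-1]
    return y                      # baseline (background) series
```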

  18. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we proposed a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  19. A sub-cubic time algorithm for computing the quartet distance between two general trees

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background When inferring phylogenetic trees different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...

  20. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program.

  1. Developing and evaluating a target-background similarity metric for camouflage detection.

    Directory of Open Access Journals (Sweden)

    Chiuhsiang Joe Lin

    Full Text Available BACKGROUND: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could potentially serve as a camouflage assessment tool. METHODOLOGY: In this study, we quantify the relationship between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze their strengths and weaknesses. SIGNIFICANCE: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
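
    For reference, the Universal Image Quality Index mentioned above combines correlation, luminance, and contrast terms in a single expression; a minimal global-window sketch follows (in practice the index is computed in sliding windows and averaged, which this omits):

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik) for two equally
    sized grayscale patches, e.g. a target region vs. its background."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Q = 4*cov*mx*my / ((vx + vy) * (mx^2 + my^2)), in [-1, 1]
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```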

  2. Making maps of the cosmic microwave background: The MAXIMA example

    Science.gov (United States)

    Stompor, Radek; Balbi, Amedeo; Borrill, Julian D.; Ferreira, Pedro G.; Hanany, Shaul; Jaffe, Andrew H.; Lee, Adrian T.; Oh, Sang; Rabii, Bahman; Richards, Paul L.; Smoot, George F.; Winant, Celeste D.; Wu, Jiun-Huei Proty

    2002-01-01

    This work describes cosmic microwave background (CMB) data analysis algorithms and their implementations, developed to produce a pixelized map of the sky and a corresponding pixel-pixel noise correlation matrix from time ordered data for a CMB mapping experiment. We discuss in turn algorithms for estimating noise properties from the time ordered data, techniques for manipulating the time ordered data, and a number of variants of the maximum likelihood map-making procedure. We pay particular attention to issues pertinent to real CMB data, and present ways of incorporating them within the framework of maximum likelihood map making. Making a map of the sky is shown to be not only an intermediate step rendering an image of the sky, but also an important diagnostic stage, when tests for and/or removal of systematic effects can efficiently be performed. The case under study is the MAXIMA-I data set. However, the methods discussed are expected to be applicable to the analysis of other current and forthcoming CMB experiments.
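
    The core of the maximum likelihood map-making step described above reduces, for time-ordered data d = A m + n with pointing matrix A and noise covariance N, to solving the normal equations m = (AᵀN⁻¹A)⁻¹AᵀN⁻¹d. A dense toy sketch (real pipelines exploit sparse pointing and stationary noise, which this ignores):

```python
import numpy as np

def ml_map(A, Ninv, d):
    """Maximum likelihood map m = (A^T N^-1 A)^-1 A^T N^-1 d for
    pointing matrix A, inverse noise covariance Ninv, and
    time-ordered data d. Returns the map and its pixel-pixel
    noise covariance."""
    lhs = A.T @ Ninv @ A          # pixel-pixel information matrix
    rhs = A.T @ Ninv @ d
    m = np.linalg.solve(lhs, rhs)
    cov = np.linalg.inv(lhs)      # pixel-pixel noise correlation matrix
    return m, cov
```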

  3. Latent variable method for automatic adaptation to background states in motor imagery BCI

    Science.gov (United States)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variabilities in background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method which is capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model's parameters, we suggest using the expectation maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects, with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by posterior probabilities of background states at the prediction stage.

  4. Training nuclei detection algorithms with simple annotations

    Directory of Open Access Journals (Sweden)

    Henning Kost

    2017-01-01

    Full Text Available Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.

  5. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  6. Operator-independent method for background subtraction in adrenal-uptake measurements: concise communication

    International Nuclear Information System (INIS)

    Koral, K.F.; Sarkar, S.D.

    1977-01-01

    A new computer program for adrenal-uptake measurements is presented in which the algorithm identifies the adrenal and background regions automatically after being given a starting point in the image. Adrenal uptakes and results of reproducibility tests are given for patients injected with [131I]6β-iodomethyl-19-norcholesterol. The data to date indicate no overlap in the percent-of-dose uptakes for normal patients and patients with Cushing's disease and Cushing's syndrome.

  7. Uniform background assumption produces misleading lung EIT images.

    Science.gov (United States)

    Grychtol, Bartłomiej; Adler, Andy

    2013-06-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes.

  8. Uniform background assumption produces misleading lung EIT images

    International Nuclear Information System (INIS)

    Grychtol, Bartłomiej; Adler, Andy

    2013-01-01

    Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of if and when EIT images can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes. (paper)

  9. TUnfold, an algorithm for correcting migration effects in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Schmitt, Stefan

    2012-07-15

    TUnfold is a tool for correcting migration and background effects in multi-dimensional distributions measured in high energy physics. It is based on a least square fit with Tikhonov regularisation and an optional area constraint. For determining the strength of the regularisation, the L-curve method and scans of global correlation coefficients are implemented. The algorithm supports background subtraction and error propagation of statistical and systematic uncertainties, in particular those originating from limited knowledge of the response matrix. The program is interfaced to the ROOT analysis framework.
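
    The least-squares fit with Tikhonov regularisation at the heart of the method has a closed form; a minimal sketch of that idea (generic numpy, not the TUnfold/ROOT interface, and without the area constraint or L-curve scan):

```python
import numpy as np

def tikhonov_unfold(A, y, Vinv, tau, L=None, x0=None):
    """Minimise (y - Ax)^T Vinv (y - Ax) + tau^2 ||L (x - x0)||^2 for
    response matrix A, measured spectrum y, inverse covariance Vinv,
    regularisation strength tau, regularisation matrix L, and bias x0."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    x0 = np.zeros(n) if x0 is None else x0
    LtL = L.T @ L
    lhs = A.T @ Vinv @ A + tau**2 * LtL
    rhs = A.T @ Vinv @ y + tau**2 * LtL @ x0
    return np.linalg.solve(lhs, rhs)   # unfolded (truth-level) spectrum
```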

  10. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    Energy Technology Data Exchange (ETDEWEB)

    2016-08-22

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: It performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
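
    A single-process sketch of the alternating NLS iteration described above (the paper's contribution is the distributed-memory, communication-optimal MPI version, which this does not attempt):

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(A, k, iters=50, seed=0):
    """Alternating nonnegative least squares for A ~= W @ H."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = np.zeros((k, n))
    for _ in range(iters):
        for j in range(n):        # fix W, solve min ||W h - A[:, j]||, h >= 0
            H[:, j], _ = nnls(W, A[:, j])
        for i in range(m):        # fix H, solve min ||H^T w - A[i, :]||, w >= 0
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H
```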

  11. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS

    Directory of Open Access Journals (Sweden)

    Qiang Su

    2017-01-01

    Full Text Available Background. To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test, and then uses CFS to select genes from those selected by the K-S test. Results. We adopted support vector machines (SVMs) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms for 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. Conclusions. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
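
    The two-stage idea, a K-S filter followed by a correlation-based redundancy check, can be sketched as follows (a simplified pairwise-correlation stand-in for the CFS merit function; names and thresholds are assumptions):

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_cfs_select(X, y, n_ks=200, max_corr=0.8):
    """Two-stage gene filter: rank genes by the two-sample K-S statistic
    between the two classes, then greedily drop genes highly correlated
    with an already-selected gene. X: samples x genes, y: binary labels."""
    y = np.asarray(y)
    classes = np.unique(y)                       # assumes two classes
    stats = [ks_2samp(X[y == classes[0], g], X[y == classes[1], g]).statistic
             for g in range(X.shape[1])]
    ranked = np.argsort(stats)[::-1][:n_ks]      # K-S stage
    selected = []
    for g in ranked:                             # redundancy stage
        if all(abs(np.corrcoef(X[:, g], X[:, s])[0, 1]) < max_corr
               for s in selected):
            selected.append(g)
    return selected
```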

  12. Catalytic synthesis of alcoholic fuels for transportation from syngas

    DEFF Research Database (Denmark)

    Wu, Qiongxiao

    This work has investigated the catalytic conversion of syngas into methanol and higher alcohols. Based on input from computational catalyst screening, an experimental investigation of promising catalyst candidates for methanol synthesis from syngas has been carried out. Cu-Ni alloys of different composition have been identified as potential candidates for methanol synthesis. These Cu-Ni alloy catalysts have been synthesized and tested in a fixed-bed continuous-flow reactor for CO hydrogenation. The metal area based activity for a Cu-Ni/SiO2 catalyst is at the same level as a Cu/ZnO/Al2O3 model catalyst. The high activity and selectivity of silica supported Cu-Ni alloy catalysts agrees with the fact that the DFT calculations identified Cu-Ni alloys as highly active and selective catalysts for the hydrogenation of CO to form methanol. This work has also provided a systematic study of Cu...

  13. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas put forward by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications are evaluated together with an analysis of these applications.

  14. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  15. Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images

    NARCIS (Netherlands)

    Hertem, van T.; Alchanatis, V.; Antler, A.; Maltz, E.; Halachmi, I.; Schlageter Tello, A.A.; Lokhorst, C.; Viazzi, S.; Romanini, C.E.B.; Pluk, A.; Bahr, C.; Berckmans, D.

    2013-01-01

    Computer vision techniques are a means to extract individual animal information such as weight, activity and calving time in intensive farming. Automatic detection requires adequate image pre-processing such as segmentation to precisely distinguish the animal from its background. For some analyses

  16. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    Energy Technology Data Exchange (ETDEWEB)

    Soufi, M [Shahid Beheshti University, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters in random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise and therefore tumor delineation in these images is a challenging task. Random walk algorithm, a graph based image segmentation technique, has reliable image noise robustness. Also its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB codes. The validation and verification of the algorithm have been done by 4D-NCAT phantom with spherical lung lesions in different diameters from 20 to 90 mm (with incremental steps of 10 mm) and different tumor to background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) has been applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, 5% increment) SUVmax for background seeds. Also, for investigation of algorithm performance on clinical data, 19 patients with lung tumor were studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. The mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and mean Hausdorff Distance of 1 (2) pixels have been obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
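
    The seed-localization rule found to work best above (foreground seeds at or above 70% SUVmax, background seeds at or below 30% SUVmax) maps directly onto off-the-shelf random walk implementations; a sketch using scikit-image's random_walker (the study used its own MATLAB code, so this is an analogous reimplementation, not theirs):

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_pet(volume, fg_frac=0.70, bg_frac=0.30, beta=130):
    """Random walk segmentation of a PET volume with SUVmax-based seeds."""
    suv_max = volume.max()
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[volume >= fg_frac * suv_max] = 1   # tumor (foreground) seeds
    labels[volume <= bg_frac * suv_max] = 2   # background seeds
    # unlabeled voxels (label 0) are assigned by the random walker
    return random_walker(volume, labels, beta=beta) == 1
```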

  17. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    International Nuclear Information System (INIS)

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to find the best seed localization parameters in random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise and therefore tumor delineation in these images is a challenging task. Random walk algorithm, a graph based image segmentation technique, has reliable image noise robustness. Also its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB codes. The validation and verification of the algorithm have been done by 4D-NCAT phantom with spherical lung lesions in different diameters from 20 to 90 mm (with incremental steps of 10 mm) and different tumor to background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) has been applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, 5% increment) SUVmax for background seeds. Also, for investigation of algorithm performance on clinical data, 19 patients with lung tumor were studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. The mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and mean Hausdorff Distance of 1 (2) pixels have been obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and

  18. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  19. Comparison of low-contrast detectability between two CT reconstruction algorithms using voxel-based 3D printed textured phantoms.

    Science.gov (United States)

    Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan

    2016-12-01

    To use novel voxel-based 3D printed textured phantoms in order to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered-backprojection) and SAFIRE (sinogram affirmed iterative reconstruction), and to determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures that were reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform, 165 mm in diameter, and 30 mm height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Objet Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare) and observer model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. Also, fitted curves of detectability (averaged across contrast levels

  20. A simulation study comparing aberration detection algorithms for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Painter Ian

    2007-03-01

    Full Text Available Abstract Background The usefulness of syndromic surveillance for early outbreak detection depends in part on effective statistical aberration detection. However, few published studies have compared different detection algorithms on identical data. In the largest simulation study conducted to date, we compared the performance of six aberration detection algorithms on simulated outbreaks superimposed on authentic syndromic surveillance data. Methods We compared three control-chart-based statistics, two exponential weighted moving averages, and a generalized linear model. We simulated 310 unique outbreak signals, and added these to actual daily counts of four syndromes monitored by Public Health – Seattle and King County's syndromic surveillance system. We compared the sensitivity of the six algorithms at detecting these simulated outbreaks at a fixed alert rate of 0.01. Results Stratified by baseline or by outbreak distribution, duration, or size, the generalized linear model was more sensitive than the other algorithms and detected 54% (95% CI = 52%–56% of the simulated epidemics when run at an alert rate of 0.01. However, all of the algorithms had poor sensitivity, particularly for outbreaks that did not begin with a surge of cases. Conclusion When tested on county-level data aggregated across age groups, these algorithms often did not perform well in detecting signals other than large, rapid increases in case counts relative to baseline levels.
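
    Of the algorithm families compared above, the exponentially weighted moving average chart is the easiest to sketch; a minimal version with assumed smoothing and threshold parameters follows (the study's actual implementations and baselines differ):

```python
import numpy as np

def ewma_alerts(counts, lam=0.4, k=3.0, train=28):
    """EWMA control chart for daily syndrome counts: alert when the
    smoothed count exceeds the training-period mean by k adjusted
    standard deviations."""
    counts = np.asarray(counts, dtype=float)
    mu, sd = counts[:train].mean(), counts[:train].std(ddof=1)
    sigma_z = sd * np.sqrt(lam / (2 - lam))   # asymptotic EWMA std. dev.
    z, alerts = mu, []
    for t in range(train, len(counts)):
        z = lam * counts[t] + (1 - lam) * z   # exponential smoothing
        alerts.append(z > mu + k * sigma_z)
    return alerts
```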

  1. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to a reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and variations in rotation, scale, and other 3D orientations of the target are not known, thus complicating detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, DCCF, called the clutter rejection module; once the target coordinates are detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  2. Detection algorithm of infrared small target based on improved SUSAN operator

    Science.gov (United States)

    Liu, Xingmiao; Wang, Shicheng; Zhao, Jing

    2010-10-01

    Methods for detecting small moving targets in infrared image sequences that contain moving nuisance objects and background noise are analyzed in this paper, and a novel infrared small-target detection algorithm based on an improved SUSAN operator is put forward. The algorithm selects two templates for detection: one larger than the small target and one equal to the small target in size. First, the algorithm uses the big template to calculate the USAN value of each pixel in the image, detecting small targets together with image edges and isolated noise pixels. Then it uses the second template to recalculate the USAN value of the pixels detected in the first step, with the SUSAN decision rule modified according to the characteristics of small targets, so that only small targets are detected and the result is insensitive to edge pixels and isolated noise pixels. The interference from image edges and isolated noise points is thus removed, and the candidate target points can be identified. Finally, the target is confirmed by exploiting the continuity and consistency of target motion. The experimental results indicate that the improved SUSAN detection algorithm can quickly and effectively detect infrared small targets.
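
    A rough sketch of the double-template idea follows; the template radii, brightness threshold t, and the USAN cut-offs are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def usan_area(img, radius, t=25):
    """USAN area for every pixel: number of neighbours inside a circular
    template whose grey level is within `t` of the centre pixel."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys**2 + xs**2) <= radius**2
    offsets = np.argwhere(mask) - radius
    area = np.zeros((h, w), dtype=int)
    pad = np.pad(img.astype(int), radius, mode='edge')
    for dy, dx in offsets:
        shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        area += (np.abs(shifted - img.astype(int)) <= t)
    return area

def detect_candidates(img, target_radius=2, t=25):
    # the big template flags targets, edges and isolated noise alike ...
    big = usan_area(img, 3 * target_radius, t)
    # ... the target-sized template keeps only compact blob-like responses
    small = usan_area(img, target_radius, t)
    big_thresh = 0.5 * (2 * 3 * target_radius + 1)**2   # below ~half the template
    small_full = 0.8 * np.pi * target_radius**2          # near-full small template
    return (big < big_thresh) & (small >= small_full)
```

    A bright target fills the small template but only a tiny fraction of the big one; an edge half-fills both, and an isolated noise pixel fills neither, so only targets survive the combined test.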

  3. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Full Text Available Abstract Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols that connect, and require, multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis, employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets, while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, with modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
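
    The paper's implementation is Java with a transactional-memory design; as a language-agnostic illustration of the same idea — parallelizing the assignment step across cores and reducing per-chunk sums — here is a much-simplified Python sketch using `multiprocessing` (wrap the call in an `if __name__ == '__main__':` guard on platforms that spawn processes):

```python
import numpy as np
from multiprocessing import Pool

def _partial_sums(args):
    """Assignment step for one chunk: per-cluster coordinate sums and counts."""
    chunk, centers = args
    labels = np.argmin(((chunk[:, None, :] - centers[None, :, :])**2).sum(-1), axis=1)
    k, d = centers.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        members = chunk[labels == j]
        sums[j] = members.sum(axis=0)
        counts[j] = len(members)
    return sums, counts

def parallel_kmeans(data, k=5, iters=20, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    chunks = np.array_split(data, workers)
    with Pool(workers) as pool:
        for _ in range(iters):
            results = pool.map(_partial_sums, [(c, centers) for c in chunks])
            sums = sum(r[0] for r in results)
            counts = sum(r[1] for r in results)
            centers = sums / np.maximum(counts, 1)[:, None]  # update step
    return centers
```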

  4. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures

  5. Room-temperature synthesis of three-dimensional porous ZnO@CuNi hybrid magnetic layers with photoluminescent and photocatalytic properties

    Science.gov (United States)

    Guerrero, Miguel; Zhang, Jin; Altube, Ainhoa; García-Lecina, Eva; Roldan, Mònica; Baró, Maria Dolors; Pellicer, Eva; Sort, Jordi

    2016-01-01

    Abstract A facile synthetic approach to prepare porous ZnO@CuNi hybrid films is presented. Initially, magnetic CuNi porous layers (consisting of phase separated CuNi alloys) are successfully grown by electrodeposition at different current densities using H2 bubbles as a dynamic template to generate the porosity. The porous CuNi alloys serve as parent scaffolds to be subsequently filled with a solution containing ZnO nanoparticles previously synthesized by sol-gel. The dispersed nanoparticles are deposited dropwise onto the CuNi frameworks and the solvent is left to evaporate while the nanoparticles impregnate the interior of the pores, rendering ZnO-coated CuNi 3D porous structures. No thermal annealing is required to obtain the porous films. The synthesized hybrid porous layers exhibit an interesting combination of tunable ferromagnetic and photoluminescent properties. In addition, the aqueous photocatalytic activity of the composite is studied under UV−visible light irradiation for the degradation of Rhodamine B. The proposed method represents a fast and inexpensive approach towards the implementation of devices based on metal-semiconductor porous systems, avoiding the use of post-synthesis heat treatment steps which could cause deleterious oxidation of the metallic counterpart, as well as collapse of the porous structure and loss of the ferromagnetic properties. PMID:27877868

  6. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects

    Directory of Open Access Journals (Sweden)

    Min Se Dong

    2011-06-01

    Full Text Available Abstract Background Numerous studies have been conducted on heartbeat classification algorithms over the past several decades. Many algorithms have also been studied to achieve robust performance, because biosignals vary greatly among individuals. Various methods have been proposed to reduce the differences arising from personal characteristics, but these expand the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.

  7. Lining seam elimination algorithm and surface crack detection in concrete tunnel lining

    Science.gov (United States)

    Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling

    2016-11-01

    Due to the particular character of concrete tunnel lining surfaces and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected from the PSM by an accelerated percolation algorithm, so that fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seams and performing percolation denoising. Experimental results show that the proposed algorithm can detect real surface cracks accurately, quickly, and effectively, and that its lining seam removal fills a gap in existing concrete tunnel lining surface crack detection.
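
    The percolation step can be illustrated with a much-simplified sketch: grow a region from a seed (dark) pixel by repeatedly absorbing the darkest boundary neighbour, then use the region's elongation to separate crack-like shapes from blobs. The thresholds and the elongation measure below are illustrative choices, not the paper's:

```python
import numpy as np

def percolate(img, seed, max_size=200, t0=10):
    """Grow a dark region from `seed` by repeatedly absorbing the darkest
    boundary neighbour; a crack percolates into a long, thin region."""
    region = {seed}
    thresh = int(img[seed]) + t0
    while len(region) < max_size:
        # 4-connected boundary neighbours of the current region
        nbrs = {(y + dy, x + dx) for y, x in region
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < img.shape[0] and 0 <= x + dx < img.shape[1]}
        nbrs -= region
        best = min(nbrs, key=lambda p: img[p])
        if img[best] > thresh:           # no dark pixel left to absorb
            break
        region.add(best)
    ys, xs = zip(*region)
    # bounding-box extent relative to area: large for thin, crack-like shapes
    elongation = (max(ys) - min(ys) + max(xs) - min(xs) + 2) / np.sqrt(len(region))
    return region, elongation
```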

  8. Backtracking algorithm for lepton reconstruction with HADES

    International Nuclear Information System (INIS)

    Sellheim, P

    2015-01-01

    The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April-May 2012 HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u, with the highest multiplicities measured so far. Track reconstruction and particle identification in this high-track-density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector. Its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. In order to improve lepton identification, a backtracking algorithm was developed. In this contribution we show the results of the algorithm compared to the currently applied method for e± identification. Efficiency and purity of a reconstructed e± sample are discussed as well. (paper)

  9. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising more efficient sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential techniques of algorithm design are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one well-known algorithm that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm, and the results were promising.

  10. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition, explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data structures

  11. Background field method in gauge theories and nonlinear sigma models

    International Nuclear Information System (INIS)

    van de Ven, A.E.M.

    1986-01-01

    This dissertation constitutes a study of the ultraviolet behavior of gauge theories and two-dimensional nonlinear sigma-models by means of the background field method. After a general introduction in chapter 1, chapter 2 presents algorithms which generate the divergent terms in the effective action at one loop for arbitrary quantum field theories in flat spacetime of dimension d ≤ 11. It is demonstrated that global N = 1 supersymmetric Yang-Mills theory in six dimensions is one-loop UV-finite. Chapter 3 presents an algorithm which produces the divergent terms in the effective action at two loops for renormalizable quantum field theories in a curved four-dimensional background spacetime. Chapter 4 presents a study of the two-loop UV-behavior of two-dimensional bosonic and supersymmetric non-linear sigma-models which include a Wess-Zumino-Witten term. It is found that, to this order, supersymmetric models on quasi-Ricci flat spaces are UV-finite and the β-functions for the bosonic model depend only on torsionful curvatures. Chapter 5 summarizes a superspace calculation of the four-loop β-function for two-dimensional N = 1 and N = 2 supersymmetric non-linear sigma-models. It is found that besides the one-loop contribution, which vanishes on Ricci-flat spaces, the β-function receives four-loop contributions which do not vanish in the Ricci-flat case. Implications for superstrings are discussed. Chapters 6 and 7 treat the details of these calculations.

  12. Trigger Algorithms and Electronics for the ATLAS Muon NSW Upgrade

    CERN Document Server

    Guan, Liang; The ATLAS collaboration

    2015-01-01

    The ATLAS New Small Wheel (NSW), comprising MicroMegas (MMs) and small-strip Thin Gap Chambers (sTGCs), will upgrade the ATLAS muon system for a high-background environment. In particular, the NSW trigger will reduce the rate of fake triggers coming from background tracks in the endcap. We present an overview of the FPGA-based trigger processor for the NSW and the trigger algorithms for the sTGC and MicroMegas detector subsystems. In addition, we present the development of the NSW trigger electronics, in particular the sTGC Trigger Data Serializer (TDS) ASIC, the sTGC Pad Trigger board, the sTGC data packet router, and the L1 Data Driver Card. Finally, we detail the challenges of meeting the low-latency requirements of the trigger system and coping with the high background rates of the HL-LHC.

  13. Enhancement and evaluation of an algorithm for atmospheric profiling continuity from Aqua to Suomi-NPP

    Science.gov (United States)

    Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.

    2017-12-01

    We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.

  14. 2010 International consensus algorithm for the diagnosis, therapy and management of hereditary angioedema

    DEFF Research Database (Denmark)

    Bowen, Tom; Cicardi, Marco; Farkas, Henriette

    2010-01-01

    ABSTRACT: BACKGROUND: We published the Canadian 2003 International Consensus Algorithm for the Diagnosis, Therapy, and Management of Hereditary Angioedema (HAE; C1 inhibitor [C1-INH] deficiency) and updated this as Hereditary angioedema: a current state-of-the-art review: Canadian Hungarian 2007 ...

  15. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One such method is cryptography, which secures a file by transforming it into a hidden code covering the original content, so that anyone not in possession of the key cannot decrypt the code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem: a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters of plaintext.
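
    The symmetric half of the hybrid scheme, the TEA block cipher, is small enough to show in full; the sketch below implements the standard 32-cycle TEA encryption of one block (the LUC public-key wrapping of the TEA key, the other half of the hybrid scheme, is omitted):

```python
def tea_encrypt_block(v, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
    (four 32-bit words) using the standard 32-cycle TEA schedule."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, total, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):
        total = (total + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

# example: encrypt one block with a toy key
print([hex(w) for w in tea_encrypt_block((0x01234567, 0x89ABCDEF),
                                         (0xA56BABCD, 0x00000000,
                                          0xFFFFFFFF, 0xABCDEF01))])
```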

  16. Bayesian Noise Estimation for Non-ideal Cosmic Microwave Background Experiments

    Science.gov (United States)

    Wehus, I. K.; Næss, S. K.; Eriksen, H. K.

    2012-03-01

    We describe a Bayesian framework for estimating the time-domain noise covariance of cosmic microwave background (CMB) observations, typically parameterized in terms of a 1/f frequency profile. This framework is based on the Gibbs sampling algorithm, which allows for exact marginalization over nuisance parameters through conditional probability distributions. In this paper, we implement support for gaps in the data streams and marginalization over fixed time-domain templates, and also outline how to marginalize over confusion from CMB fluctuations, which may be important for high signal-to-noise experiments. As a by-product of the method, we obtain proper constrained realizations, which themselves can be useful for map making. To validate the algorithm, we demonstrate that the reconstructed noise parameters and corresponding uncertainties are unbiased using simulated data. The CPU time required to process a single data stream of 100,000 samples with 1000 samples removed by gaps is 3 s if only the maximum posterior parameters are required, and 21 s if one also wants to obtain the corresponding uncertainties by Gibbs sampling.
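
    A toy version of the Gibbs structure, reduced to white noise with a single variance parameter and zero mean, illustrates how gap filling (the constrained realization) and noise estimation alternate; the full framework additionally samples 1/f spectral parameters and marginalizes over time-domain templates:

```python
import numpy as np

def gibbs_noise(data, gap_mask, n_iter=1000, seed=0):
    """Toy Gibbs sampler: alternately draw (i) the missing samples given the
    noise variance and (ii) the variance given the completed stream.
    White noise only; a 1/f profile would add spectral parameters."""
    rng = np.random.default_rng(seed)
    d = data.copy()
    sigma2 = np.var(data[~gap_mask])
    trace = []
    n = len(d)
    for _ in range(n_iter):
        # 1) constrained realization: fill gaps from the current noise model
        d[gap_mask] = rng.normal(0.0, np.sqrt(sigma2), gap_mask.sum())
        # 2) conditional for the variance: inverse-gamma under a Jeffreys prior
        shape, scale = n / 2.0, (d**2).sum() / 2.0
        sigma2 = scale / rng.gamma(shape)
        trace.append(sigma2)
    return np.array(trace)   # posterior samples of the noise variance
```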

  17. Evaluation of the performance of existing non-laboratory based cardiovascular risk assessment algorithms

    Science.gov (United States)

    2013-01-01

    Background The high burden and rising incidence of cardiovascular disease (CVD) in resource constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enable absolute risk assessment in resource constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202

  18. A hand tracking algorithm with particle filter and improved GVF snake model

    Science.gov (United States)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To solve the problem that accurate hand information cannot be obtained by a particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color-adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm can reduce the root mean square error of hand tracking by 53% and improve the accuracy of hand tracking against complex and moving backgrounds, even with a large range of occlusion.

  1. Hanford Site background: Part 1, Soil background for nonradioactive analytes

    International Nuclear Information System (INIS)

    1993-04-01

    Volume two contains the following appendices: description of soil sampling sites; sampling narrative; raw soil background data; background data analysis; sitewide background soil sampling plan; and use of soil background data for the detection of contamination at waste management units on the Hanford Site.

  2. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  3. Ship detection using STFT sea background statistical modeling for large-scale oceansat remote sensing image

    Science.gov (United States)

    Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan

    2018-03-01

    Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The short-time Fourier transform (STFT) is a suitable analysis tool for such time-varying, non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, revealing a clear difference in characteristics between sea background and non-sea background. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low missing rate.
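
    A minimal sketch of the block-spectrum idea, with a simple per-frequency Gaussian sea model standing in for the paper's statistical model, might look like this (the block size and the anomaly score are assumptions made for the example):

```python
import numpy as np

def block_spectra(img, block=32):
    """Windowed magnitude spectra of non-overlapping blocks
    (a crude 2-D analogue of the STFT)."""
    h, w = img.shape
    win = np.hanning(block)[:, None] * np.hanning(block)[None, :]
    blocks = (img[y:y + block, x:x + block]
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block))
    return np.array([np.abs(np.fft.fft2(b * win)) for b in blocks])

def ship_score(spectra):
    """Mean absolute z-score of each block spectrum against the sea model
    (per-frequency mean and spread over all blocks)."""
    flat = spectra.reshape(len(spectra), -1)
    mu, sd = flat.mean(0), flat.std(0) + 1e-9
    return np.abs((flat - mu) / sd).mean(1)   # large score -> candidate ship
```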

  4. [Quantitative Analysis of Heavy Metals in Water with LIBS Based on Signal-to-Background Ratio].

    Science.gov (United States)

    Hu, Li; Zhao, Nan-jing; Liu, Wen-qing; Fang, Li; Zhang, Da-hai; Wang, Yin; Meng, De Shuo; Yu, Yang; Ma, Ming-jun

    2015-07-01

    Many factors influence the precision and accuracy of quantitative analysis with LIBS technology. In-depth analysis shows that the background spectrum and the characteristic spectrum follow approximately the same trend as the temperature changes, so signal-to-background ratio (S/B) measurement combined with regression analysis can compensate for the spectral line intensity changes caused by system parameters such as laser power and spectral receiving efficiency. Because the measurement data were limited and nonlinear, support vector machine (SVM) regression was used as the regression algorithm. The experimental results showed that the method could improve the stability and the accuracy of quantitative LIBS analysis: the relative standard deviation and average relative error of the test set were 4.7% and 9.5%, respectively. A data fitting method based on the signal-to-background ratio (S/B) is less susceptible to matrix elements and background spectrum variations, and provides a data-processing reference for real-time, online quantitative LIBS analysis.
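
    As a sketch of the regression step, support vector regression can map S/B readings to concentrations once a calibration set exists; the calibration values and hyperparameters below are hypothetical, not the paper's measurements:

```python
import numpy as np
from sklearn.svm import SVR

# hypothetical calibration set: signal-to-background ratios vs. known
# concentrations (mg/L) of a heavy metal in water
sb_ratio = np.array([[0.8], [1.4], [2.1], [2.9], [3.8], [4.5]])
conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])

# RBF-kernel support vector regression fitted to the calibration curve
model = SVR(kernel='rbf', C=100.0, epsilon=0.5).fit(sb_ratio, conc)
print(model.predict([[2.5]]))   # concentration estimate for a new S/B reading
```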

  5. The Combined Effect of Signal Strength and Background Traffic Load on Speech Quality in IEEE 802.11 WLAN

    Directory of Open Access Journals (Sweden)

    P. Pocta

    2011-04-01

    Full Text Available This paper deals with measurements of the combined effect of signal strength and background traffic load on speech quality in IEEE 802.11 WLAN. The ITU-T G.729AB encoding scheme is deployed in this study and the Distributed Internet Traffic Generator (D-ITG is used for the purpose of background traffic generation. The speech quality and background traffic load are assessed by means of the accomplished PESQ algorithm and Wireshark network analyzer, respectively. The results show that background traffic load has a bit higher impact on speech quality than signal strength when both effects are available together. Moreover, background traffic load also partially masks the impact of signal strength. The reasons for those findings are particularly discussed. The results also suggest some implications for designers of wireless networks providing VoIP service.

  6. Quantitative analysis of planar technetium-99m-sestamibi myocardial perfusion images using modified background subtraction

    International Nuclear Information System (INIS)

    Koster, K.; Wackers, F.J.; Mattera, J.A.; Fetterman, R.C.

    1990-01-01

    Standard interpolative background subtraction, as used for thallium-201 ( 201 Tl), may create artifacts when applied to planar technetium-99m-Sestamibi ( 99m Tc-Sestamibi) images, apparently because of the oversubtraction of relatively high extra-cardiac activity. A modified background subtraction algorithm was developed and compared to standard background subtraction in 16 patients who had both exercise-delayed 201 Tl and exercise-rest 99m Tc-Sestamibi imaging. Furthermore, a new normal data base was generated. Normal 99m Tc-Sestamibi distribution was slightly different compared to 201 Tl. Using standard background subtraction, mean defect reversibility was significantly underestimated by 99m Tc-Sestamibi compared to 201 Tl (2.8 +/- 4.9 versus -1.8 +/- 8.4, p less than 0.05). Using the modified background subtraction, mean defect reversibility on 201 Tl and 99m Tc-Sestamibi images was comparable (2.8 +/- 4.9 versus 1.7 +/- 5.2, p = NS). We conclude, that for quantification of 99m Tc-Sestamibi images a new normal data base, as well as a modification of the interpolative background subtraction method should be employed to obtain quantitative results comparable to those with 201 Tl

  7. Genetic Algorithm and its Application in Optimal Sensor Layout

    Directory of Open Access Journals (Sweden)

    Xiang-Yang Chen

    2015-05-01

    Full Text Available This paper addresses the problem of multi-sensor station placement, taking multi-sensor systems of different types as the research object. After analyzing the different application backgrounds and performance requirements of the various sensor types, station placement models are formulated subject to the relevant constraints, and a genetic algorithm is applied as the optimization tool for the objective functions of these models. The result is an optimal station placement plan for each type of multi-sensor system, improving system performance. For concrete problems in the application fields of radar, trajectory measurement instruments, satellites, and passive positioning equipment of various types, mathematical models relating the key performance indicators to the station geometry are built, and the genetic algorithm yields optimized station placements, providing useful help for a variety of practical problems. This also demonstrates the applicability and effectiveness of the improved genetic algorithm for optimizing multi-sensor station placement in electronic weapon systems; finally, the optimized station placements were applied to training exercise tasks with good effect.
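
    As a generic illustration of a genetic algorithm for sensor placement (coverage of target points by a fixed number of sensors; the fitness function, operators, and all parameters are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
TARGETS = rng.uniform(0, 100, size=(40, 2))   # points that must be covered
N_SENSORS, RADIUS = 5, 25.0

def coverage(layout):
    """Fraction of targets within RADIUS of at least one sensor."""
    sensors = layout.reshape(N_SENSORS, 2)
    d = np.linalg.norm(TARGETS[:, None, :] - sensors[None, :, :], axis=2)
    return (d.min(axis=1) <= RADIUS).mean()

def ga(pop_size=60, gens=200, sigma=5.0):
    pop = rng.uniform(0, 100, size=(pop_size, N_SENSORS * 2))
    for _ in range(gens):
        fit = np.array([coverage(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]      # truncation selection
        cut = rng.integers(1, N_SENSORS * 2, size=pop_size // 2)
        kids = np.array([np.concatenate((parents[rng.integers(len(parents))][:c],
                                         parents[rng.integers(len(parents))][c:]))
                         for c in cut])                       # one-point crossover
        kids += rng.normal(0, sigma, kids.shape)              # Gaussian mutation
        pop = np.vstack((parents, np.clip(kids, 0, 100)))
    best = pop[np.argmax([coverage(ind) for ind in pop])]
    return best.reshape(N_SENSORS, 2), coverage(best)

print(ga())   # optimized sensor coordinates and the coverage they achieve
```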

  8. ADAPTIVE BACKGROUND DENGAN METODE GAUSSIAN MIXTURE MODELS UNTUK REAL-TIME TRACKING

    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih

    2008-01-01

    Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. A method to separate the background from the tracked object is required for motion tracking. It is not hard to develop such an application if the tracking is performed against a static background, but it is difficult if the tracked object is in a place with a non-static background, because the changing parts of the background can be mistaken for the tracking area. To handle this problem, an application can be built to separate the background in a way that adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data on the basis of pixel color values. After the clusters are formed, the dominant distributions are chosen as the background distributions. The application was built using Microsoft Visual C 6.0. The results of this research show that the GMM algorithm can produce an adaptive background satisfactorily, as proven by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive background generation process.
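
    The same GMM background idea is available off the shelf today; a minimal OpenCV sketch (not the paper's Visual C implementation, and with a hypothetical input file name) looks like this:

```python
import cv2

# MOG2 maintains a per-pixel Gaussian mixture and relearns the background
# continuously, so gradual scene changes are absorbed into the model
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture('traffic.avi')    # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)       # 255 = foreground, 127 = shadow
    cv2.imshow('foreground', mask)
    if cv2.waitKey(30) & 0xFF == 27:     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```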

  9. Segmentation of Mushroom and Cap width Measurement using Modified K-Means Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Eser Sert

    2014-01-01

    Full Text Available Mushrooms are among the commonly consumed foods. Image processing is an effective way to examine visual features and detect the size of a mushroom. We developed software to segment a mushroom in a picture and to measure the width of the mushroom cap. The K-Means clustering method is used for the segmentation. K-Means is one of the most successful clustering methods. In our study we customized the algorithm to get the best result and tested it. In the system, the mushroom picture is first filtered and its histogram equalized, and after that segmentation is performed. The results showed that the customized algorithm performed better segmentation than the classical K-Means algorithm. Tests performed on the designed software showed that segmentation of pictures with complex backgrounds is performed with high accuracy, and 20 mushroom caps were measured with 2.281% relative error.

  10. The track finding algorithm of the Belle II vertex detectors

    Directory of Open Access Journals (Sweden)

    Bilka Tadeas

    2017-01-01

    Full Text Available The Belle II experiment is a high-energy, multi-purpose particle detector operated at the asymmetric e+e− collider SuperKEKB in Tsukuba (Japan). In this work we describe the algorithm performing the pattern recognition for the inner tracking detector, which consists of two layers of pixel detectors and four layers of double-sided silicon strip detectors arranged around the interaction region. The track finding algorithm will be used both during the on-line track reconstruction in the High Level Trigger and during the off-line full reconstruction. It must provide good efficiency down to momenta as low as 50 MeV/c, where material effects are sizeable even in an extremely thin detector such as the VXD. In addition, it has to cope with the high occupancy of the Belle II detectors due to background. The underlying concept of the track finding algorithm, as well as details of the implementation, are outlined. The algorithm is shown to run with good performance on simulated ϒ(4S) → BB̄ events, with an efficiency for reconstructing tracks of above 90% over a wide range of momentum.

  11. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    International Nuclear Information System (INIS)

    Du, Hongchu

    2015-01-01

    Noise reduction of micrographs is often an essential task in high-resolution (scanning) transmission electron microscopy (HR(S)TEM), either for higher visual quality or for more accurate quantification. Since HR(S)TEM studies often aim at resolving periodic atomistic columns and their non-periodic deviations at defects, it is important to develop a noise reduction algorithm that can properly handle both periodic and non-periodic features at the same time. In this work, a nonlinear filtering algorithm is developed based on the widely used techniques of low-pass and Wiener filtering; it efficiently reduces noise without noticeable artifacts, even in HR(S)TEM micrographs with varying background contrast and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam-sensitive samples, in situ analyses, and atomic-resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam-sensitive samples, in situ analyses, and atomic-resolution EFTEM
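
    The two ingredients named here, low-pass and Wiener filtering, can be chained in a few lines; this sketch shows only that simple combination, not the paper's full nonlinear algorithm with its explicit handling of periodic lattice contrast:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener

def denoise_micrograph(img, sigma=1.0, wiener_window=5):
    """Two-stage sketch: a mild Gaussian low-pass suppresses high-frequency
    shot noise, then an adaptive Wiener filter attenuates the remainder
    according to the local variance of the image."""
    img = img.astype(float)
    lowpassed = gaussian_filter(img, sigma=sigma)
    return wiener(lowpassed, mysize=wiener_window)
```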

  12. A statistical background noise correction sensitive to the steadiness of background noise.

    Science.gov (United States)

    Oppenheimer, Charles H

    2016-10-01

    A statistical background noise correction is developed for removing background noise contributions from measured source levels, producing a background noise-corrected source level. Like the standard background noise corrections of ISO 3741, ISO 3744, ISO 3745, and ISO 11201, the statistical background correction increases as the background level approaches the measured source level, decreasing the background noise-corrected source level. Unlike the standard corrections, the statistical background correction increases with steadiness of the background and is excluded from use when background fluctuation could be responsible for measured differences between the source and background noise levels. The statistical background noise correction has several advantages over the standard correction: (1) enveloping the true source with known confidence, (2) assuring physical source descriptions when measuring sources in fluctuating backgrounds, (3) reducing background corrected source descriptions by 1 to 8 dB for sources in steady backgrounds, and (4) providing a means to replace standardized background correction caps that incentivize against high precision grade methods.
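
    For context, the deterministic core shared by the ISO corrections is plain energy subtraction of the background's share of the mean-square pressure; the sketch below shows that part only (the statistical correction described here additionally weights by background steadiness, and the 0.5 dB guard value is an illustrative assumption, not a standardized limit):

```python
import numpy as np

def background_corrected_level(l_total, l_background):
    """Energy subtraction of a steady background from a measured level (dB).
    Valid only when the source-plus-background level clearly exceeds the
    background alone."""
    if l_total - l_background < 0.5:     # fluctuation could explain the gap
        raise ValueError("difference too small for a reliable correction")
    return 10 * np.log10(10**(l_total / 10) - 10**(l_background / 10))

print(background_corrected_level(62.0, 57.0))  # ~60.3 dB
```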

  13. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Kuzmin, Dmitri; Turek, Stefan

    2005-01-01

    Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...

  14. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  15. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Science.gov (United States)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the
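
    A minimal sketch of Marr-wavelet correlation detection follows; note that WAVDETECT derives its thresholds from the Poisson sampling distribution given a background map, whereas this sketch substitutes a crude Gaussian (median/MAD) null model for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def wavelet_detect(image, scale=2.0, nsigma=5.0):
    """Correlate the image with a Marr ('Mexican hat') wavelet and flag
    pixels whose coefficient is improbably large for pure background.
    The negated Laplacian-of-Gaussian is proportional to the Marr wavelet."""
    coeff = -gaussian_laplace(image.astype(float), sigma=scale)
    # crude null distribution from the (assumed source-free) bulk of pixels
    med = np.median(coeff)
    mad = np.median(np.abs(coeff - med)) * 1.4826   # robust sigma estimate
    return coeff > med + nsigma * mad               # boolean map of source pixels
```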

  16. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

    Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived, and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
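
    The two estimators compared here can be sketched side by side; sub-pixel refinement of the correlation peak (e.g., parabolic interpolation around the maximum) is omitted for brevity:

```python
import numpy as np
from scipy.signal import fftconvolve

def centroid(spot):
    """Centre-of-mass estimate; its gain depends on spot size and it
    propagates read noise from every pixel in the subaperture."""
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (ys * spot).sum() / total, (xs * spot).sum() / total

def correlation_peak(spot, reference):
    """Cross-correlate with an ideal reference spot and locate the peak;
    flipping the reference turns convolution into correlation."""
    corr = fftconvolve(spot, reference[::-1, ::-1], mode='same')
    return np.unravel_index(np.argmax(corr), corr.shape)
```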

  17. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing the recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them in the unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers the recursive estimation of dynamic mixtures, which are free of iterative processes and close to analytical solutions as much as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. Codes are implemented in the open source platform for engineering computations. The program codes given serve to illustrate the theory and demonstrate the work of the included algorithms.

  18. An adaptive tensor voting algorithm combined with texture spectrum

    Science.gov (United States)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain an adaptive scale parameter for the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm creates more significant and correct structures in the original image, in line with human visual perception. At the same time, the proposed method improves edge extraction quality, efficiently reducing flocculent regions and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature threshold procedure, and the resulting image displays the faint crack signals submerged in the complicated background efficiently and clearly.

  19. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  20. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced by becoming detached from the structure of the reactor coolant system (RCS) or by entering the RCS from outside during test operation, refueling, and overhaul periods. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for estimating the impact point and mass of a loose part is developed. The developed algorithm was tested with impact test data from Yonggwang-3. The impact point estimated using the proposed algorithm had a 5 percent error relative to the real test data. The estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor, because this frequency affects the estimation of impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. Through this comparison, the integrity of the sensors and the monitoring system could be checked as well. (author)

  1. [Algorithm for taking into account the average annual background of air pollution in the assessment of health risks].

    Science.gov (United States)

    Fokin, M V

    2013-01-01

    The assessment of health risks from air pollution due to emissions from industrial facilities does not meet sanitary legislation if it ignores the average annual background level of air pollution. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official certificates only for the limited number of areas covered by the full observation program at stationary monitoring points. Approaches to accounting for the average annual background of air pollution in the evaluation of health risks from exposure to emissions from industrial facilities are considered.

  2. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  3. GenClust: A genetic algorithm for clustering gene expression data

    Directory of Open Access Journals (Sweden)

    Raimondi Alessandra

    2005-12-01

    Full Text Available Abstract Background Clustering is a key step in the analysis of gene expression data, and in fact, many classical clustering algorithms are used, or more innovative ones have been designed and validated for the task. Despite the widespread use of artificial intelligence techniques in bioinformatics and, more generally, data analysis, there are very few clustering algorithms based on the genetic paradigm, yet that paradigm has great potential in finding good heuristic solutions to a difficult optimization problem such as clustering. Results GenClust is a new genetic algorithm for clustering gene expression data. It has two key features: (a) a novel coding of the search space that is simple, compact, and easy to update; (b) it can be used naturally in conjunction with data-driven internal validation methods. We have experimented with the FOM methodology, specifically conceived for validating clusters of gene expression data. The validity of GenClust has been assessed experimentally on real data sets, both with the use of validation measures and in comparison with other algorithms, i.e., Average Link, Cast, Click and K-means. Conclusion Experiments show that none of the algorithms we have used is markedly superior to the others across data sets and validation measures; i.e., in many cases the observed differences between the worst and best performing algorithm may be statistically insignificant and they could be considered equivalent. However, there are cases in which an algorithm may be better than others and therefore worthwhile. In particular, experiments for GenClust show that, although simple in its data representation, it converges very rapidly to a local optimum and that its ability to identify meaningful clusters is comparable, and sometimes superior, to that of more sophisticated algorithms. In addition, it is well suited for use in conjunction with data-driven internal validation measures and, in particular, the FOM methodology.

  4. CSA: An efficient algorithm to improve circular DNA multiple alignment

    Directory of Open Access Journals (Sweden)

    Pereira Luísa

    2009-07-01

    Full Text Available Abstract Background The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis, and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot directly handle circular DNA. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process before they can use multiple sequence alignment tools. Results In this paper we propose an efficient algorithm that identifies the most suitable region at which to cut circular genomes, in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well-known multiple alignment algorithms, the CLUSTALW and MAVID tools, and also the visualization tool SinicView. Conclusion The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment algorithms.
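
    The pre-processing idea — rotate circular sequences into a common frame before linear alignment — can be illustrated with a toy anchor-k-mer approach; this is not the CSA algorithm itself, which instead chains non-repeated longest common subsequences:

```python
def rotate_to_anchor(sequences, k=12):
    """Pick a k-mer present in every circular sequence (here simply the
    first shared one found) and rotate each sequence to start there, so a
    linear multiple-aligner sees homologous regions in the same frame."""
    doubled = [s + s for s in sequences]           # makes all rotations visible
    first = sequences[0]
    for i in range(len(first)):
        kmer = (first + first)[i:i + k]
        if all(kmer in d for d in doubled):
            return [d[d.index(kmer):d.index(kmer) + len(s)]
                    for d, s in zip(doubled, sequences)]
    raise ValueError("no shared %d-mer; try a smaller k" % k)

# two rotations of the same circular sequence align after rotation
print(rotate_to_anchor(["GATTACAGGC", "CAGGCGATTA"], k=5))
```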

  5. Development of a data-driven algorithm to determine the W+jets background in t anti t events in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Mehlhase, Sascha

    2010-07-12

    The physics of the top quark is one of the key components in the physics programme of the ATLAS experiment at the Large Hadron Collider at CERN. In this thesis, general studies of the jet trigger performance for top quark events using fully simulated Monte Carlo samples are presented and two data-driven techniques to estimate the multi-jet trigger efficiency and the W+Jets background in top pair events are introduced to the ATLAS experiment. In a tag-and-probe based method, using a simple and common event selection and a high transverse momentum lepton as tag object, the possibility to estimate the multijet trigger efficiency from data in ATLAS is investigated and it is shown that the method is capable of estimating the efficiency without introducing any significant bias by the given tag selection. In the second data-driven analysis a new method to estimate the W+Jets background in a top-pair event selection is introduced to ATLAS. By defining signal and background dominated regions by means of the jet multiplicity and the pseudo-rapidity distribution of the lepton in the event, the W+Jets contribution is extrapolated from the background dominated into the signal dominated region. The method is found to estimate the given background contribution as a function of the jet multiplicity with an accuracy of about 25% for most of the top dominated region with an integrated luminosity of above 100 pb⁻¹ at √(s) = 10 TeV. This thesis also covers a study summarising the thermal behaviour and expected performance of the Pixel Detector of ATLAS. All measurements performed during the commissioning phase of 2008/09 yield results within the specification of the system and the performance is expected to stay within those even after several years of running under LHC conditions. (orig.)

  7. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    Science.gov (United States)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
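
    As a rough sketch of this kind of test (not the authors' exact statistic): under additive white Gaussian background noise, the likelihood ratio test for a known object template reduces to thresholding a matched-filter statistic, which the Fourier domain computes efficiently for every pixel position at once.

    ```python
    import numpy as np

    def fourier_lrt_detect(image, template, noise_sigma, threshold):
        # FFT of the data and of the object template (zero-padded to the
        # image size); their product in Fourier space is equivalent to a
        # circular cross-correlation in the spatial domain.
        X = np.fft.fft2(image)
        S = np.fft.fft2(template, s=image.shape)
        stat = np.real(np.fft.ifft2(X * np.conj(S))) / noise_sigma ** 2
        return stat > threshold  # per-pixel detection decisions
    ```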

  8. Going Tobacco-Free on 24 New York City University Campuses: A Public Health Agency's Partnership with a Large Urban Public University System

    Science.gov (United States)

    Bresnahan, Marie P.; Sacks, Rachel; Farley, Shannon M.; Mandel-Ricci, Jenna; Patterson, Ty; Lamberson, Patti

    2016-01-01

    The New York City Department of Health and Mental Hygiene partnered with the nation's largest university system, the City University of New York (CUNY), to provide technical assistance and resources to support the development and implementation of a system-wide tobacco-free policy. This effort formed one component of "Healthy CUNY"--a…

  9. The Experiment

    Science.gov (United States)

    Miranda, Maria Eugenia

    2009-01-01

    The City University of New York (CUNY) Graduate School of Journalism is testing a new model for journalism with an innovative hyperlocal news project with The New York Times. CUNY is not only providing interactive media instruction to budding professional journalists, but some of its students are helping train citizen journalists to contribute…

  10. Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low Observable Targets

    Science.gov (United States)

    Tartakovsky, A.; Brown, A.; Brown, J.

    The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly-improved capabilities for finding, fixing, and tracking multiple ballistic and flying low observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied -- the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear to be dim, relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud

  11. Which clustering algorithm is better for predicting protein complexes?

    Directory of Open Access Journals (Sweden)

    Moschopoulos Charalampos N

    2011-12-01

    Full Text Available Abstract Background Protein-Protein interactions (PPI) play a key role in determining the outcome of most cellular processes. The correct identification and characterization of protein interactions and the networks which they comprise is critical for understanding the molecular mechanisms within the cell. Large-scale techniques such as pull-down assays and tandem affinity purification are used in order to detect protein interactions in an organism. Today, relatively new high-throughput methods like yeast two hybrid, mass spectrometry, microarrays, and phage display are also used to reveal protein interaction networks. Results In this paper we evaluated four different clustering algorithms using six different interaction datasets. We parameterized the MCL, Spectral, RNSC and Affinity Propagation algorithms and applied them to six PPI datasets produced experimentally by Yeast 2 Hybrid (Y2H) and Tandem Affinity Purification (TAP) methods. The predicted clusters, so-called protein complexes, were then compared and benchmarked with already known complexes stored in published databases. Conclusions While results may differ upon parameterization, the MCL and RNSC algorithms seem to be more promising and more accurate at predicting PPI complexes. Moreover, they predict more complexes than other reviewed algorithms in absolute numbers. On the other hand, the spectral clustering algorithm achieves the highest valid prediction rate in our experiments. However, it is nearly always outperformed by both RNSC and MCL in terms of geometrical accuracy, and it generates the fewest valid clusters of any reviewed algorithm. This article demonstrates various metrics to evaluate the accuracy of such predictions, as presented in the text below. Supplementary material can be found at: http://www.bioacademy.gr/bioinformatics/projects/ppireview.htm

  12. LHCb - Novel Muon Identification Algorithms for the LHCb Upgrade

    CERN Multimedia

    Cogoni, Violetta

    2016-01-01

    The present LHCb Muon Identification procedure was optimised to guarantee high muon detection efficiency at the instantaneous luminosity $\mathcal{L}$ of $2\cdot10^{32}$ cm$^{-2}$ s$^{-1}$. In the current data taking conditions, the luminosity is higher than foreseen and the low energy background contribution to the visible rate in the muon system is larger than expected. A worse situation is expected for Run III, when LHCb will operate at $\mathcal{L} = 2\cdot10^{33}$ cm$^{-2}$ s$^{-1}$, causing the high particle fluxes to deteriorate the muon detection efficiency, because of the increased dead time of the electronics, and in particular to worsen the muon identification capabilities, due to the increased contribution of the background, with deleterious consequences especially for the analyses requiring high purity signal. In this context, possible new algorithms for the muon identification will be illustrated. In particular, the performance on combinatorial background rejection will be shown, together with the ...

  13. A speedup technique for (l, d)-motif finding algorithms

    Directory of Open Access Journals (Sweden)

    Dinh Hieu

    2011-03-01

    Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS), (l, d)-motif search (or Planted Motif Search, PMS), and Edit-distance-based Motif Search (EMS). In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm always identifies the motifs and an approximate algorithm may fail to identify some or all of the motifs. The exact version of the PMS problem has been shown to be NP-hard. Exact algorithms proposed in the literature for PMS take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. These experimental results show that our speedup technique is indeed very
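
    For reference, the (l, d) condition itself is compact in code: a candidate is a planted motif if every input sequence contains some length-l window within Hamming distance d of it. The brute-force check below (plain Python, not the paper's speedup technique) is the predicate that exact PMS algorithms must effectively decide over a huge candidate space.

    ```python
    def hamming(a, b):
        # Number of mismatching positions between two equal-length strings.
        return sum(x != y for x, y in zip(a, b))

    def is_planted_motif(motif, sequences, d):
        # (l, d) condition: every sequence has a length-l window within
        # Hamming distance d of the candidate motif.
        l = len(motif)
        return all(
            any(hamming(motif, s[i:i + l]) <= d for i in range(len(s) - l + 1))
            for s in sequences
        )
    ```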

  14. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  15. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  16. A multilevel layout algorithm for visualizing physical and genetic interaction networks, with emphasis on their modular organization

    OpenAIRE

    Tuikkala, Johannes; Vähämaa, Heidi; Salmela, Pekka; Nevalainen, Olli S; Aittokallio, Tero

    2012-01-01

    Abstract Background Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, suc...

  17. Efficient generation of image chips for training deep learning algorithms

    Science.gov (United States)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with

  18. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  19. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
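
    A generic MUSIC pseudospectrum computation looks roughly as follows; this NumPy sketch takes the measured response matrix and a `steering` function standing in for the background-medium Green function (the half-disc closed form mentioned above would be plugged in there). Peaks of the returned spectrum indicate scatterer locations.

    ```python
    import numpy as np

    def music_pseudospectrum(K, steering, grid_points, n_scatterers):
        # Noise subspace: singular vectors beyond the n_scatterers
        # strongest ones of the multistatic response matrix K.
        _, _, Vh = np.linalg.svd(K)
        noise = Vh[n_scatterers:].conj().T
        spectrum = []
        for z in grid_points:
            g = steering(z)
            g = g / np.linalg.norm(g)
            proj = noise.conj().T @ g  # component of g in noise subspace
            # Small projection -> large pseudospectrum value -> scatterer.
            spectrum.append(1.0 / (np.linalg.norm(proj) ** 2 + 1e-12))
        return np.asarray(spectrum)
    ```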

  20. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    Science.gov (United States)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is transferred to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded based on a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated by classification experiments via the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of the other common algorithms. The classification results show that this improved classification algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
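
    The feature-building step can be sketched with the PyWavelets package: decompose the signal into a full wavelet packet tree and compute one Shannon entropy per terminal node. Background-map subtraction, feature selection, and the RBF classifier are omitted, and the wavelet and depth below are illustrative choices, not the paper's.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wp_entropy_features(signal, wavelet="db4", level=4):
        # Full wavelet packet decomposition down to `level`.
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        feats = []
        for node in wp.get_level(level, order="natural"):
            c = np.asarray(node.data)
            p = c ** 2 / np.sum(c ** 2)          # normalized energy distribution
            p = p[p > 0]
            feats.append(-np.sum(p * np.log(p)))  # Shannon entropy of the node
        return np.asarray(feats)
    ```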

  1. On-board event processing algorithms for a CCD-based space borne X-ray spectrometer

    International Nuclear Information System (INIS)

    Chun, H.J.; Bowles, J.A.; Branduardi-Raymont, G.; Gowen, R.A.

    1996-01-01

    This paper describes two alternative algorithms which are applied to reduce the telemetry requirements for a Charge Coupled Device (CCD) based, space-borne, X-ray spectrometer by on-board reconstruction of the X-ray events split over two or more adjacent pixels. The algorithms have been developed for the Reflection Grating Spectrometer (RGS) on the X-ray Multi-Mirror (XMM) mission, the second cornerstone project in the European Space Agency's Horizon 2000 programme. The overall instrument and some criteria which provide the background of the development of the algorithms, implemented in Tartan Ada on an MA31750 microprocessor, are described. The on-board processing constraints and requirements are discussed, and the performances of the algorithms are compared. Test results are presented which show that the recursive implementation is faster and has a smaller executable file although it uses more memory because of its stack requirements. (orig.)
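
    A minimal illustration of split-event reconstruction (not the RGS flight code): pixels above a split threshold that touch one another are grouped, and their charge is summed back into a single X-ray event, so only one energy value per event needs to be telemetered.

    ```python
    def reconstruct_split_events(frame, split_threshold):
        # Group 4-connected pixels above threshold and sum their charge
        # into one event per group (simple flood fill).
        visited = set()
        events = []
        rows, cols = len(frame), len(frame[0])
        for r in range(rows):
            for c in range(cols):
                if frame[r][c] > split_threshold and (r, c) not in visited:
                    stack, charge = [(r, c)], 0
                    while stack:
                        y, x = stack.pop()
                        if (y, x) in visited:
                            continue
                        visited.add((y, x))
                        charge += frame[y][x]
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and frame[ny][nx] > split_threshold):
                                stack.append((ny, nx))
                    events.append(((r, c), charge))  # seed pixel, total charge
        return events
    ```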

  2. Evaluation Of Spatial Filters For Background Suppression In Infrared Mosaic Sensor Systems

    Science.gov (United States)

    Bergen, T. L.; Mazaika, P. K.

    1982-12-01

    Spaceborne infrared mosaic sensors have been proposed for future surveillance systems. Because these systems will generate a large volume of data, background suppression will require algorithms which use innovative architectures and minimal storage. This paper analyzes the implementation and performance of candidate temporal and spatial filters. Spatial filters are attractive because they require far less memory, can effectively exploit a parallel, pipelined architecture, and are relatively insensitive to target speed. However, the performance of spatial filtering is substantially worse than that of temporal filtering when the sensor has good line-of-sight stability.

  3. Algorithm for evaluation of parameters of ionization chamber signals from the flash-ADC data

    International Nuclear Information System (INIS)

    Baturin, V.N.; Balin, D.V.; Maev, E.M.; Petrov, G.E.; Semenchuk, G.G.

    1991-01-01

    An algorithm for the evaluation of parameters of pulses obtained from the ionization chamber (IC) and digitized by a Flash-ADC is described. It was designed to determine the energies and arrival times of charged particles in dtμ-catalyzed fusion occurring in the IC sensitive volume, in order to measure directly the probability of muon sticking. The algorithm provides the extraction of weak pulses of sloped muons with 50% efficiency, the measurement of fusion energy, especially for long and low-amplitude pulses, and the recognition of pulse pileups using a special shape-analysis procedure. The algorithm was tuned with special electronic hardware that supplied sequences of pulses with specified amplitudes, durations and shapes, and a simulation of the tritium-noise background. 6 refs.; 7 figs.; 1 tab

  4. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a backpropagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we use GA-BPN for image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  5. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  6. Salient Region Detection by Fusing Foreground and Background Cues Extracted from Single Image

    Directory of Open Access Journals (Sweden)

    Qiangqiang Zhou

    2016-01-01

    Full Text Available Saliency detection is an important preprocessing step in many application fields, such as computer vision, robotics, and graphics, used to reduce computational cost by focusing on significant positions and neglecting the nonsignificant ones in the scene. Unlike most previous methods, which mainly utilize the contrast of low-level features and fuse various feature maps in a simple linear weighting form, in this paper we propose a novel salient object detection algorithm that takes both background and foreground cues into consideration and integrates a bottom-up coarse salient-region extraction and a top-down background measure via boundary-label propagation into a unified optimization framework to acquire a refined saliency detection result. The coarse saliency map is itself fused from three components: the first is a local contrast map, which is in better accordance with psychological laws; the second is a global frequency prior map; and the third is a global color distribution map. During the formation of the background map, we first construct an affinity matrix and select some nodes that lie on the border as labels to represent the background, and then carry out a propagation to generate the regional background map. The evaluation of the proposed model has been implemented on four datasets. As demonstrated in the experiments, our proposed method outperforms most existing saliency detection models with a robust performance.

  7. Mina P. Shaughnessy: Her Life and Work.

    Science.gov (United States)

    Maher, Jane

    This book is intended to be both a biography of an extraordinary woman and a historical account of events leading to Open Admissions within the City University of New York (CUNY) in 1970, wherein every graduate of a New York City high school was guaranteed a place within the CUNY system. The book profiles Mina Shaughnessy, who devoted her…

  8. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  9. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  11. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  12. Current algorithm for the surgical treatment of facial pain

    Directory of Open Access Journals (Sweden)

    Munawar Naureen

    2007-07-01

    Full Text Available Background Facial pain may be divided into several distinct categories, each requiring a specific treatment approach. In some cases, however, such categorization is difficult and treatment is ineffective. We reviewed our extensive clinical experience and designed an algorithmic approach to the treatment of medically intractable facial pain that can be treated through surgical intervention. Methods Our treatment algorithm is based on taking into account underlying pathological processes, the anatomical distribution of pain, pain characteristics, the patient's age and medical condition, associated medical problems, the history of previous surgical interventions, and, in some cases, the results of psychological evaluation. The treatment modalities involved in this algorithm include diagnostic blocks, peripheral denervation procedures, craniotomy for microvascular decompression of cranial nerves, percutaneous rhizotomies using radiofrequency ablation, glycerol injection, balloon compression, peripheral nerve stimulation procedures, stereotactic radiosurgery, percutaneous trigeminal tractotomy, and motor cortex stimulation. We recommend that some patients not receive surgery at all, but rather be referred for other medical or psychological treatment. Results Our algorithmic approach was used in more than 100 consecutive patients with medically intractable facial pain. Clinical evaluations and diagnostic workups were followed in each case by the systematic choice of the appropriate intervention. The algorithm has proved easy to follow, and the recommendations include the identification of the optimal surgery for each patient, with other options reserved for failures or recurrences. Our overall success rate in eliminating facial pain presently reaches 96%, which is higher than that observed in most clinical series reported to date. Conclusion This treatment algorithm for intractable facial pain appears to be effective for patients with a wide variety

  13. Hanford Site background: Part 1, Soil background for nonradioactive analytes

    International Nuclear Information System (INIS)

    1993-04-01

    The determination of soil background is one of the most important activities supporting environmental restoration and waste management on the Hanford Site. Background compositions serve as the basis for identifying soil contamination, and also as a baseline in risk assessment processes used to determine soil cleanup and treatment levels. These uses of soil background require an understanding of the extent to which analytes of concern occur naturally in the soils. This report documents the results of sampling and analysis activities designed to characterize the composition of soil background at the Hanford Site, and to evaluate the feasibility of its use as Sitewide background. The compositions of naturally occurring soils in the vadose zone have been determined for nonradioactive inorganic and organic analytes and related physical properties. These results confirm that a Sitewide approach to the characterization of soil background is technically sound and is a viable alternative to the determination and use of numerous local or area backgrounds that yield inconsistent definitions of contamination. Sitewide soil background consists of several types of data and is appropriate for use in identifying contamination in all soils in the vadose zone on the Hanford Site. The natural concentrations of nearly every inorganic analyte extend to levels that exceed calculated health-based cleanup limits. The levels of most inorganic analytes, however, are well below these health-based limits. The highest measured background concentrations occur in three volumetrically minor soil types, the most important of which are topsoils adjacent to the Columbia River that are rich in organic carbon. No organic analyte levels above detection were found in any of the soil samples

  14. Gravitation field algorithm and its application in gene cluster

    Directory of Open Access Journals (Sweden)

    Zheng Ming

    2010-09-01

    Full Text Available Abstract Background Searching for optima is one of the most challenging tasks in clustering genes from available experimental data or given functions. SA, GA, PSO and other similar efficient global optimization methods are used by biotechnologists. All these algorithms are based on the imitation of natural phenomena. Results This paper proposes a novel searching optimization algorithm called the Gravitation Field Algorithm (GFA), which is derived from the famous astronomy theory, the Solar Nebular Disk Model (SNDM) of planetary formation. GFA simulates the gravitation field and outperforms GA and SA on some multimodal function optimization problems. GFA can also be used on unimodal functions, and it clusters datasets from the Gene Expression Omnibus well. Conclusions The mathematical proof demonstrates that GFA converges to the global optimum with probability 1 under three conditions for mass functions of one independent variable. In addition to these results, the fundamental optimization concept in this paper is used to analyze how SA and GA affect the global search and the inherent defects in SA and GA. Some results and source code (in Matlab) are publicly available at http://ccst.jlu.edu.cn/CSBG/GFA.

  15. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”, Gandola et al., 2016 [1]. This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample

  16. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    Science.gov (United States)

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, and/or are not available for large scale networks, e.g. more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n²) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673

  17. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.

  18. Multiagency Urban Search Experiment Detector and Algorithm Test Bed

    Science.gov (United States)

    Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.

    2017-07-01

    In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.

  19. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    OpenAIRE

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-01-01

    Abstract Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. Background There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real...

  20. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Full Text Available Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.

  1. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic process in which data is transformed into ciphertext, something unreadable and meaningless that cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logical XOR operation. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
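
    The two stages compose naturally in code. The sketch below is a minimal Python illustration; the keyword-based construction of the substitution table is assumed to have already produced `key_map`.

    ```python
    def mono_encrypt(plaintext, key_map):
        # Stage 1: monoalphabetic substitution (key_map: letter -> letter);
        # characters outside the table pass through unchanged.
        return "".join(key_map.get(ch, ch) for ch in plaintext)

    def xor_bytes(data, key):
        # Stage 2: XOR each byte with a repeating key; XOR is its own inverse.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def super_encrypt(plaintext, key_map, xor_key):
        return xor_bytes(mono_encrypt(plaintext, key_map).encode(), xor_key)
    ```

    Decryption simply reverses the stages: XOR with the same key, then apply the inverted substitution table, restoring the original plaintext.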

  2. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
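
    At the core of any such scheme is the standard statistical-optimization (optimal linear combination) update; the paper's contribution lies in how the covariance matrices are estimated dynamically, which this NumPy sketch does not reproduce.

    ```python
    import numpy as np

    def statistically_optimize(obs, bkg, O, B):
        # Optimally combine an observed bending-angle profile with a
        # background profile, weighted by their error covariances O and B.
        gain = B @ np.linalg.inv(B + O)
        return bkg + gain @ (obs - bkg)
    ```

    With realistic, geographically varying O and B, as the abstract describes, the gain matrix shifts weight toward the observation where it is accurate and toward the background at high altitudes where observation noise dominates.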

  3. Just health: meeting health needs fairly

    National Research Council Canada - National Science Library

    Daniels, Norman

    2008-01-01


  4. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.

  5. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    Science.gov (United States)

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition spread thin film. These datasets have different types of background signals which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
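
    SciPy ships a generic continuous-wavelet-transform peak finder that illustrates the principle, though it is not the authors' algorithm; the synthetic diffraction pattern below is made up. Peaks must persist across a range of wavelet widths to be reported, which suppresses both noise spikes and the slowly varying background.

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    rng = np.random.default_rng(0)
    two_theta = np.linspace(10, 80, 2000)
    pattern = (np.exp(-((two_theta - 30) / 0.3) ** 2)          # Bragg peak 1
               + 0.5 * np.exp(-((two_theta - 45) / 0.3) ** 2)  # Bragg peak 2
               + 0.001 * two_theta                             # smooth background
               + 0.02 * rng.normal(size=two_theta.size))       # noise

    # Keep only features that appear consistently over many wavelet widths.
    peak_idx = find_peaks_cwt(pattern, widths=np.arange(3, 30))
    print(two_theta[peak_idx])
    ```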

  6. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the trigonometric sine function. In the algorithm, random individuals, one for each search agent, are created with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the areas expected to give good results are scanned, instead of the whole solution space. In the tests performed, Gold-SA yields better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, increasing the importance of this method by providing faster convergence.

  7. Algorithm for Fast and Efficient Detection and Reaction to Angle Instability Conditions Using Phasor Measurement Unit Data

    Directory of Open Access Journals (Sweden)

    Igor Ivanković

    2018-03-01

    Full Text Available In wide area monitoring, protection, and control (WAMPAC) systems, the angle stability of the transmission network is monitored using data from phasor measurement units (PMU) placed on transmission lines. Based on this PMU data stream, an advanced algorithm for out-of-step condition detection and early-warning issuing is developed. The algorithm, whose theoretical background is described in this paper, is backed up by data and results from corresponding simulations done in the Matlab environment. The presented results aim to provide insight into the potential benefits this algorithm can bring to power system protection, such as fast and efficient detection of and reaction to angle instability. Accordingly, a suggestion is given as to how the developed algorithm can be implemented in the protection segments of WAMPAC systems in transmission system operator control centers.
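
    A toy version of such detection logic: track the voltage phase-angle difference between two PMU-monitored buses and raise an early warning when both the angle and its rate of change (slip) exceed limits. The thresholds below are illustrative placeholders, not values from the paper.

    ```python
    def out_of_step_alarm(angle_samples, dt, angle_limit_deg=120.0,
                          slip_limit_deg_s=5.0):
        # angle_samples: phase-angle difference (degrees) between two buses,
        # sampled from the PMU stream at interval dt seconds.
        alarms = []
        for k in range(1, len(angle_samples)):
            delta = angle_samples[k]
            slip = (angle_samples[k] - angle_samples[k - 1]) / dt  # deg/s
            alarms.append(abs(delta) > angle_limit_deg and
                          abs(slip) > slip_limit_deg_s)
        return alarms
    ```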

  8. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when the standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  9. IMPROVED SIMULATION OF NON-GAUSSIAN TEMPERATURE AND POLARIZATION COSMIC MICROWAVE BACKGROUND MAPS

    International Nuclear Information System (INIS)

    Elsner, Franz; Wandelt, Benjamin D.

    2009-01-01

    We describe an algorithm to generate temperature and polarization maps of the cosmic microwave background (CMB) radiation containing non-Gaussianity of arbitrary local type. We apply an optimized quadrature scheme that allows us to predict and control integration accuracy, speed up the calculations, and reduce memory consumption by an order of magnitude. We generate 1000 non-Gaussian CMB temperature and polarization maps up to a multipole moment of l_max = 1024. We validate the method and code using the power spectrum and the fast cubic (bispectrum) estimator and find consistent results. The simulations are provided to the community.
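
    Local-type non-Gaussianity has a simple pointwise form in the primordial potential, Φ = φ_G + f_NL(φ_G² − ⟨φ_G²⟩), which is the standard definition such simulations start from. The sketch below shows only that definition, not the authors' optimized quadrature or the transfer-function convolution that turns Φ into temperature and polarization maps.

    ```python
    import numpy as np

    def add_local_nongaussianity(phi_gaussian, f_nl):
        # Phi = phi_G + f_NL * (phi_G**2 - <phi_G**2>): the standard
        # local-type definition applied pointwise to a Gaussian field.
        return phi_gaussian + f_nl * (phi_gaussian ** 2
                                      - np.mean(phi_gaussian ** 2))
    ```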

  10. PCA-based approach for subtracting thermal background emission in high-contrast imaging data

    Science.gov (United States)

    Hunziker, S.; Quanz, S. P.; Amara, A.; Meyer, M. R.

    2018-03-01

    Aims. Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, telescope and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than 3 μm. The main purpose of this work is to analyse this background emission in infrared high-contrast imaging data as illustrative of the problem, show how it can be modelled and subtracted and demonstrate that it can improve the detection of faint sources, such as exoplanets. Methods: We used principal component analysis (PCA) to model and subtract the thermal background emission in three archival high-contrast angular differential imaging datasets in the M' and L' filters. We used an M' dataset of β Pic to describe in detail how the algorithm works and explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme applied to the same dataset. Finally, both methods for background subtraction are compared by performing complete data reductions. We analysed the results from the M' dataset of HD 100546 only qualitatively. For the M' band dataset of β Pic and the L' band dataset of HD 169142, which was obtained with an annular groove phase mask vortex vector coronagraph, we also calculated and analysed the achieved signal-to-noise ratio (S/N). Results: We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the point spread function. In the complete data reductions, we find at least qualitative improvements for HD 100546 and HD 169142, however, we fail to find a significant increase in S/N of β Pic b. We discuss these findings and argue that in particular datasets with strongly varying observing conditions or
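
    The core of the approach — modelling each frame's thermal background as a projection onto the leading principal components of the image stack — can be sketched in a few lines of NumPy; masking of the stellar PSF region and the subsequent angular-differential-imaging processing are omitted.

    ```python
    import numpy as np

    def pca_background_subtract(frames, n_modes):
        # frames: (n_frames, height, width) stack from the same sequence.
        X = frames.reshape(frames.shape[0], -1)
        mean = X.mean(axis=0)
        Xc = X - mean
        # Principal background modes from an SVD of the centred stack.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        basis = Vt[:n_modes]
        # Per-frame background model: projection onto the modes plus mean.
        model = Xc @ basis.T @ basis + mean
        return (X - model).reshape(frames.shape)
    ```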

  11. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Yonggang

    2018-05-07

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance, to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software, GARS, has limited automated functions, such as scene-change detection, black-image detection and missing-scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA’s IRAP (Integrated Review and Analysis Program).

  12. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and lower bounds on parallel sorting problems. The text also presents twenty different algorithms, for machines such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the

  13. Spinning projectile's attitude measurement with LW infrared radiation under sea-sky background

    Science.gov (United States)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-05-01

    With the further development of research on infrared radiation in the sea-sky background and the requirements of spinning projectile attitude measurement, the sea-sky infrared radiation field is used to determine the spinning projectile's attitude angle in place of inertial sensors. Firstly, the generation mechanism of sea-sky infrared radiation is analysed, and a mathematical model of the sea-sky infrared radiation in the LW (long-wave) infrared 8 ∼ 14 μm band is derived by calculating the sea-surface and sky infrared radiation. Secondly, according to the movement characteristics of a spinning projectile, an attitude measurement model for infrared sensors on the projectile's three axes is established, and the feasibility of the model is analysed by simulation. Finally, a projectile attitude calculation algorithm is designed to improve the attitude angle estimation accuracy. The results of semi-physical experiments show that the estimation error of the segmented interactive algorithm for pitch and roll angles is within ±1.5°. The attitude measurement method is effective and feasible, and provides an accurate measurement basis for the guidance of spinning projectiles.

  14. Development of a Crosstalk Suppression Algorithm for KID Readout

    Science.gov (United States)

    Lee, Kyungmin; Ishitsuka, H.; Oguri, S.; Suzuki, J.; Tajima, O.; Tomita, N.; Won, Eunil; Yoshida, M.

    2018-06-01

    The GroundBIRD telescope aims to detect the B-mode polarization of the cosmic microwave background radiation using a kinetic inductance detector array as a polarimeter. For the readout of the signal from the detector array, we have developed a frequency-division multiplexing readout system based on a digital down converter method. Such techniques generally suffer from leakage caused by crosstalk between channels. A window function was applied in the field-programmable gate arrays to mitigate this effect, and it was tested at the algorithm level.
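
    A toy sketch of one windowed digital-down-converter channel, assuming a Hann window and illustrative sample rates and tone frequencies; the real system implements this in fixed-point logic on the FPGA.

```python
import numpy as np

def ddc_channel(adc_samples, f_tone, fs, n_samp=4096):
    """One windowed channel of a digital down converter: mix the tone of
    interest to DC, apply a Hann window, and average.  The window lowers
    the sidelobes of the channel response, which is what suppresses
    leakage (crosstalk) from neighbouring readout tones."""
    n = np.arange(n_samp)
    w = np.hanning(n_samp)
    lo = np.exp(-2j * np.pi * f_tone / fs * n)      # local oscillator
    return np.sum(adc_samples[:n_samp] * lo * w) / w.sum()

# Toy input: two nearby tones; the 100 kHz channel should reject the other
fs = 2e6
t = np.arange(8192) / fs
sig = np.exp(2j * np.pi * 100e3 * t) + np.exp(2j * np.pi * 103e3 * t)
print(abs(ddc_channel(sig, 100e3, fs)))
```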

  15. A new LMS algorithm for analysis of atrial fibrillation signals

    OpenAIRE

    Ciaccio Edward J; Biviano Angelo B; Whang William; Garan Hasan

    2012-01-01

    Abstract Background A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Method Equations for normalization of x-axis and y-axis shift and scale are first derived. The algori...

  16. Focusing light through strongly scattering media using genetic algorithm with SBR discriminant

    Science.gov (United States)

    Zhang, Bin; Zhang, Zhenfeng; Feng, Qi; Liu, Zhipeng; Lin, Chengyou; Ding, Yingchun

    2018-02-01

    In this paper, we have experimentally demonstrated light focusing through strongly scattering media by performing binary amplitude optimization with a genetic algorithm. In the experiments, we control the 160 000 mirrors of a digital micromirror device to modulate and optimize the light transmission paths in the strongly scattering media. We replace the universal target-position-intensity (TPI) discriminant with a signal-to-background ratio (SBR) discriminant in the genetic algorithm. With 400 incident segments, a relative enhancement value of 17.5% with a ground glass diffuser is achieved, which is higher than the theoretical value of 1/(2π) ≈ 15.9% for binary amplitude optimization. According to our repeated experiments, we conclude that, with the same segment number, the enhancement for the SBR discriminant is always higher than that for the TPI discriminant, which results from the background-weakening effect of the SBR discriminant. In addition, with the SBR discriminant, the diameter of the focus can be varied from 7 to 70 μm at arbitrary positions, and multiple foci with high enhancement are obtained. Our work provides a meaningful reference for the study of binary amplitude optimization in the wavefront shaping field.
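
    A sketch of the SBR discriminant and one generation of a plain binary GA over DMD segment masks; the selection, crossover, and mutation operators below are generic stand-ins, not the authors' exact scheme, and `measure` is a hypothetical camera-readout function.

```python
import numpy as np

def sbr_fitness(image, target_pixel, bg_mask):
    """SBR discriminant: focus intensity divided by the mean intensity of
    a background region, instead of the raw target-position intensity."""
    return image[target_pixel] / image[bg_mask].mean()

def evolve(pop, fitness, rng, elite_frac=0.1, p_mut=0.01):
    """One generation of a plain binary GA (boolean segment masks)."""
    pop = pop[np.argsort(fitness)[::-1]]          # best individuals first
    n, n_genes = pop.shape
    n_elite = max(1, int(elite_frac * n))
    children = [pop[:n_elite]]                    # elitism
    while sum(c.shape[0] for c in children) < n:
        i, j = rng.integers(0, n_elite + n // 2, size=2)  # bias to the fit
        cut = rng.integers(1, n_genes)
        child = np.concatenate([pop[i, :cut], pop[j, cut:]])
        child ^= rng.random(n_genes) < p_mut      # bit-flip mutation
        children.append(child[None, :])
    return np.concatenate(children)[:n]

# usage sketch (measure() is a hypothetical DMD upload + camera readout):
# pop = rng.random((30, 400)) < 0.5
# fit = np.array([sbr_fitness(measure(m), target, bg_mask) for m in pop])
# pop = evolve(pop, fit, rng)
```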

  17. A street rubbish detection algorithm based on Sift and RCNN

    Science.gov (United States)

    Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting

    2018-02-01

    This paper presents a street rubbish detection algorithm based on image registration with SIFT features and an RCNN. First, rubbish region proposals are obtained on the real-time street image, and a CNN (convolutional neural network) is trained on a sample set consisting of rubbish and non-rubbish images. Second, for every clean street image, SIFT features are extracted and used to register it with the real-time street image, yielding a differential image that filters out most of the background information; the region proposals where rubbish may appear are then obtained on the differential image with the selective search algorithm. The CNN model is then applied to the pixel data in each region proposal on the real-time street image, and the output vector of the CNN determines whether the proposal contains rubbish; if it does, the proposal is marked on the real-time street image. Because the CNN is applied only within region proposals that may contain rubbish, rather than over the whole image, the algorithm avoids a large number of false detections. Unlike traditional region-proposal-based object detection algorithms, the proposals are generated on the differential image rather than on the whole real-time street image, which greatly reduces the number of invalid proposals. The algorithm achieves a high mean average precision (mAP).
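
    A sketch of the registration-and-differencing stage with OpenCV, assuming grayscale images and an OpenCV build that includes SIFT; contour bounding boxes stand in for the selective-search proposals, and the CNN classification step is omitted.

```python
import cv2
import numpy as np

def rubbish_proposals(clean_img, live_img):
    """Register the clean reference street image onto the live image with
    SIFT + homography, then use the difference image to restrict region
    proposals to areas where something changed."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(clean_img, None)
    kp2, des2 = sift.detectAndCompute(live_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(clean_img, H, live_img.shape[1::-1])
    diff = cv2.absdiff(live_img, warped)         # background largely cancels
    _, binary = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # candidate rects
```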

  18. Hyperspectral imaging simulation of object under sea-sky background

    Science.gov (United States)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development, and hyperspectral imaging is valuable in marine monitoring and search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object in a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions, and sensor, it is possible to examine the influence of wind speed, atmospheric conditions, and other environmental factors on spectral image quality in complex sea scenes. Firstly, the sea scattering model is established based on the Phillips sea spectrum model, rough-surface scattering theory, and the volume scattering characteristics of water. The measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor, and the atmospheric backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering of the object and sea surface and the spectral image. Finally, the object spectral image is obtained through spatial transformation, radiometric degradation, and noise addition. The model connects the spectral image with the environmental, object, and sensor parameters, providing a tool for payload demonstration and algorithm development.

  19. Catalytic synthesis of alcoholic fuels for transportation from syngas

    Energy Technology Data Exchange (ETDEWEB)

    Qiongxiao Wu

    2012-12-15

    Based on input from computational catalyst screening, an experimental investigation of promising catalyst candidates for methanol synthesis from syngas has been carried out. Cu-Ni alloys of different composition have been identified as potential candidates for methanol synthesis. These Cu-Ni alloy catalysts have been synthesized and tested in a fixed-bed continuous-flow reactor for CO hydrogenation. The metal area based activity for a Cu-Ni/SiO2 catalyst is at the same level as a Cu/ZnO/Al2O3 model catalyst. The high activity and selectivity of silica supported Cu-Ni alloy catalysts agrees with the fact that the DFT calculations identified Cu-Ni alloys as highly active and selective catalysts for the hydrogenation of CO to form methanol. This work has also provided a systematic study of Cu-Ni catalysts for methanol synthesis from syngas. The following observations have been made: (1) Cu-Ni catalysts (Cu/Ni molar ratio equal to 1) supported on SiO2, ZrO2, {gamma}-Al2O3, and carbon nanotubes exhibit very different selectivities during CO hydrogenation. However, the metal area based CO conversion rates of all supported Cu-Ni catalysts are at the same level. Carbon nanotubes and SiO2 supported Cu-Ni catalysts show high activity and selectivity for methanol synthesis. The Cu-Ni/ZrO2 catalyst exhibits high methanol selectivity at lower temperatures (250 deg. C), but the selectivity shifts to hydrocarbons and dimethyl ether at higher temperatures (> 275 deg. C). It seems likely that the Cu-Ni alloys always produce methanol, but that some supports further convert methanol to different products. (2) Cu-Ni/SiO2 catalysts have been prepared with different calcination and reduction procedures and tested in the synthesis of methanol from H2/CO. The calcination of the impregnated catalysts (with/without calcination step) and different reduction procedures with varying hydrogen concentration have significant influence on Cu-Ni alloy formation and the alloy particle size and

  20. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control

    Directory of Open Access Journals (Sweden)

    Logsdon Benjamin A

    2012-04-01

    Full Text Available Abstract Background We propose a novel variational Bayes network reconstruction algorithm to extract the most relevant disease factors from high-throughput genomic datasets. Our algorithm is the only scalable method for regularized network recovery that employs Bayesian model averaging and that can internally estimate an appropriate level of sparsity to ensure that few false positives enter the model, without the need for cross-validation or a model selection criterion. We use our algorithm to characterize the effect of genetic markers and liver gene expression traits on mouse obesity related phenotypes, including weight, cholesterol, glucose, and free fatty acid levels, in an experiment previously used for discovery and validation of network connections: an F2 intercross between the C57BL/6J and C3H/HeJ mouse strains on an apolipoprotein E null background. Results We identified eleven genes, Gch1, Zfp69, Dlgap1, Gna14, Yy1, Gabarapl1, Folr2, Fdft1, Cnr2, Slc24a3, and Ccl19, and a quantitative trait locus directly connected to weight, glucose, cholesterol, or free fatty acid levels in our network. None of these genes were identified by other network analyses of this mouse intercross dataset, but all have been previously associated with obesity or related pathologies in independent studies. In addition, through both simulations and data analysis we demonstrate that our algorithm achieves superior performance in terms of power and type I error control compared to other network recovery algorithms that use the lasso and have bounds on type I error control. Conclusions Our final network contains 118 previously associated and novel genes affecting weight, cholesterol, glucose, and free fatty acid levels that are excellent obesity risk candidates.

  1. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  2. Algorithms for optimizing drug therapy

    Directory of Open Access Journals (Sweden)

    Martin Lene

    2004-07-01

    Full Text Available Abstract Background Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet, drug use is reported to be suboptimal in several aspects, such as dosage, patients' adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. Methods One hundred and ten officially endorsed text documents, published between 1996 and 2004, containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and relationships between these factors. This information was used to construct algorithms for identifying optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that still could accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). Results Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment. This is adequate in situations where only one treatment exists. The second type, a more elaborate model, is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics. An easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present. In these cases the implementation of an established decision model based on fuzzy sets was required. Computer programs

  3. Aluminum as a source of background in low background experiments

    Energy Technology Data Exchange (ETDEWEB)

    Majorovits, B., E-mail: bela@mppmu.mpg.de [MPI fuer Physik, Foehringer Ring 6, 80805 Munich (Germany); Abt, I. [MPI fuer Physik, Foehringer Ring 6, 80805 Munich (Germany); Laubenstein, M. [Laboratori Nazionali del Gran Sasso, INFN, S.S.17/bis, km 18 plus 910, I-67100 Assergi (Italy); Volynets, O. [MPI fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2011-08-11

    Neutrinoless double beta decay would be a key to understanding the nature of neutrino masses. The next generation of High Purity Germanium experiments will have to be operated with a background rate of better than 10{sup -5} counts/(kg y keV) in the region of interest around the Q-value of the decay. Therefore, background sources that have so far been irrelevant have to be considered. The metalization of the surface of germanium detectors is in general done with aluminum. The background from the decays of {sup 22}Na, {sup 26}Al, {sup 226}Ra and {sup 228}Th introduced by this metalization is discussed. It is shown that only a special selection of aluminum can keep these background contributions acceptable.

  4. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for the development of appropriate model formalisms is established.

  5. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    Science.gov (United States)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time-domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional techniques of spectral and Cross-Spectral Matrix subtraction for improving SNR. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach, using the center array microphone as the noise reference, was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
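
    A minimal time-domain LMS canceller of the kind described, with illustrative tap count and step size; the reference channel is assumed to carry background noise only, so the residual error approximates the primary source signal.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=64, mu=1e-3):
    """Time-domain LMS adaptive noise cancellation: an FIR filter adapts
    so that the filtered reference (background) channel best predicts the
    contamination in the primary channel; the residual error is the
    recovered source signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # estimate of the background
        e = primary[n] - y                  # error = signal estimate
        w += 2 * mu * e * x                 # LMS weight update
        out[n] = e
    return out
```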

  6. Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting

    Science.gov (United States)

    Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.

    2017-01-01

    Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need for additional guidance, monitoring, or treatment changes. PMID:28890908

  7. Impact of detector efficiency and energy resolution on gamma-ray background rejection in mobile spectroscopy and imaging systems

    Energy Technology Data Exchange (ETDEWEB)

    Aucott, Timothy J., E-mail: Timothy.Aucott@SRS.gov [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Bandstra, Mark S. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Negut, Victor; Curtis, Joseph C. [University of California, Berkeley, Department of Nuclear Engineering, Berkeley, CA (United States); Meyer, Ross E.; Chivers, Daniel H. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Vetter, Kai [University of California, Berkeley, Department of Nuclear Engineering, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States)

    2015-07-21

    The presence of gamma-ray background significantly reduces detection sensitivity when searching for radioactive sources in the field, and the systematic variability in the background will limit the size and energy resolution of systems that can be used effectively. An extensive survey of the background was performed using both sodium iodide and high-purity germanium. By using a bivariate negative binomial model for the measured counts, these measurements can be resampled to simulate the performance of a detector array of arbitrary size and resolution. The response of the system as it moved past a stationary source was modeled for spectroscopic and coded aperture imaging algorithms and used for source injection into the background. The performance of both techniques is shown for various sizes and resolutions, as well as the relative performance for sodium iodide and germanium. It was found that at smaller detector sizes or better energy resolution, spectroscopy has higher detection sensitivity than imaging, while imaging is better suited to larger or poorer resolution detectors.

  8. Accurate Prediction of Coronary Artery Disease Using Bioinformatics Algorithms

    Directory of Open Access Journals (Sweden)

    Hajar Shafiee

    2016-06-01

    Full Text Available Background and Objectives: Cardiovascular disease is one of the main causes of death in developed and Third World countries. According to the World Health Organization, it is predicted that deaths due to heart disease will rise to 23 million by 2030. According to the latest statistics reported by Iran's Ministry of Health, 3.39% of all deaths are attributed to cardiovascular diseases and 19.5% are related to myocardial infarction. The aim of this study was to predict coronary artery disease using data mining algorithms. Methods: In this study, various bioinformatics algorithms, such as decision trees, neural networks, support vector machines, and clustering, were used to predict coronary heart disease. The data used in this study were taken from several valid databases (including 14 data. Results: This research shows that data mining techniques can be effectively used to diagnose different diseases, including coronary artery disease. Also, for the first time, a prediction system based on a support vector machine with the best possible accuracy was introduced. Conclusion: The results showed that, among the features, the thallium scan variable is the most important feature in the diagnosis of heart disease. Machine prediction models such as the support vector machine learning algorithm can differentiate between sick and healthy individuals with 100% accuracy.

  9. A Weight-Aware Recommendation Algorithm for Mobile Multimedia Systems

    Directory of Open Access Journals (Sweden)

    Pedro M. P. Rosa

    2013-01-01

    Full Text Available In recent years, information overload has become a common reality: the general user, confronted with thousands of potentially interesting items, has great difficulty identifying the best ones to guide his or her daily choices, such as concerts, restaurants, sports gatherings, or cultural events. The current growth of mobile smartphones and tablets with embedded GPS receiver, Internet access, camera, and accelerometer offers new opportunities for mobile ubiquitous multimedia applications that help gather the best information from an ever-growing list of candidates. This paper presents a mobile recommendation system for events, based on a few weighted context-aware data-fusion algorithms that combine several multimedia sources. A demonstrative deployment used relevance sources such as location data, user habits, and user sharing statistics, together with data-fusion algorithms such as the classical CombSUM and CombMNZ, in both simple and weighted forms. Still, the developed methodology is generic and can be extended to other relevance sources, both direct (background noise volume) and indirect (local temperature obtained from GPS coordinates via a Web service), and to other data-fusion techniques. To experiment with, demonstrate, and evaluate the performance of the different algorithms, the proposed system was deployed as a working mobile application providing real-time awareness-based information on local events and news.
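
    CombSUM and CombMNZ are standard score-fusion rules; the sketch below shows both (the weighted variants simply scale each source's scores before fusing, and the item names are made up for illustration).

```python
def comb_sum(score_lists):
    """CombSUM: sum each item's (normalized) scores across all sources."""
    fused = {}
    for scores in score_lists:
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + s
    return fused

def comb_mnz(score_lists):
    """CombMNZ: CombSUM multiplied by the number of sources that returned
    a non-zero score for the item, rewarding agreement between sources."""
    fused, hits = {}, {}
    for scores in score_lists:
        for item, s in scores.items():
            if s > 0:
                fused[item] = fused.get(item, 0.0) + s
                hits[item] = hits.get(item, 0) + 1
    return {item: fused[item] * hits[item] for item in fused}

# e.g. fusing location-based and habit-based relevance scores for events
location_scores = {"concert": 0.9, "museum": 0.2}
habit_scores = {"concert": 0.4, "museum": 0.7}
print(comb_mnz([location_scores, habit_scores]))
```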

  10. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  11. Background Radiance Estimation for Gas Plume Quantification for Airborne Hyperspectral Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Ramzi Idoughi

    2016-01-01

    Full Text Available Hyperspectral imaging in the long-wave infrared (LWIR) is a means that is proving its worth in the characterization of gaseous effluents. Indeed, the spectral and spatial resolution of acquisition instruments is steadily improving, making gas characterization increasingly easy in the LWIR domain. The majority of algorithms in the literature exploit the plume's contribution to the radiance, that is, the difference in radiance between plume-present and plume-absent pixels. Nevertheless, the off-plume radiance is unobservable from a single image. In this paper, we propose a new method to retrieve trace gas concentrations from airborne infrared hyperspectral data. In particular, the outlined method improves the existing background-radiance estimation approach to deal with heterogeneous scenes, such as industrial scenes. It consists of classifying the scene and then applying a method based on principal component analysis to estimate the background radiance on each cluster stemming from the classification. In order to determine the contribution of the classification to the background-radiance estimation, we compared the two approaches on synthetic data and on a Telops Fourier Transform Spectrometer (FTS) Imaging Hyper-Cam LW airborne acquisition above an ethylene release. We finally show the retrieved ethylene concentration map and estimate the flow rate of the release.

  12. GPU implementations of online track finding algorithms at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas; Stockmanns, Tobias; Ritman, James [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH (Germany); Adinetz, Andrew; Pleiter, Dirk [Juelich Supercomputing Centre, Forschungszentrum Juelich GmbH (Germany); Kraus, Jiri [NVIDIA GmbH (Germany); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment is a hadron physics experiment that will investigate antiproton annihilation in the charm quark mass region. The experiment is now being constructed as one of the main parts of the FAIR facility. At an event rate of 2·10{sup 7}/s, a data rate of 200 GB/s is expected. A reduction of three orders of magnitude is required in order to save the data for further offline analysis. Since signal and background processes at PANDA have similar signatures, no hardware-level trigger is foreseen for the experiment; instead, a fast online event filter substitutes for this element. We investigate the possibility of using graphics processing units (GPUs) for the online tracking part of this task. The researched algorithms are a Hough transform, a track finder involving Riemann surfaces, and the novel, PANDA-specific Triplet Finder. This talk shows selected advances in the implementations as well as performance evaluations of the GPU tracking algorithms to be used at the PANDA experiment.

  13. Comparison of frequency difference reconstruction algorithms for the detection of acute stroke using EIT in a realistic head-shaped tank

    International Nuclear Information System (INIS)

    Packham, B; Koo, H; Romsauerova, A; Holder, D S; Ahn, S; Jun, S C; McEwan, A

    2012-01-01

    Imaging of acute stroke might be possible using multi-frequency electrical impedance tomography (MFEIT), but this requires absolute or frequency difference imaging. Simple linear frequency difference reconstruction has been shown to be ineffective in imaging with a frequency-dependent background conductivity; this has been overcome with a weighted frequency difference approach with correction for the background, but that approach has only been validated for cylindrical and hemispherical tanks. The feasibility of MFEIT for imaging of acute stroke in a realistic head geometry was examined by imaging a potato perturbation against a saline background and a carrot-saline frequency-dependent background conductivity, in a head-shaped tank with the UCLH Mk2.5 MFEIT system. Reconstruction was performed with time difference (TD), frequency difference (FD), FD adjacent (FDA), weighted FD (WFD) and weighted FDA (WFDA) linear algorithms. The perturbation in reconstructed images corresponded to the true position to <9.5% of image diameter, with an image SNR of >5.4, for all algorithms in saline, but only for TD, WFDA and WFD in the carrot-saline background. No reliable imaging was possible with FD and FDA. This indicates that the WFD approach is also effective for a realistic head geometry and supports its use for human imaging in the future. (paper)

  14. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of data association in simultaneous localization and mapping (SLAM) algorithms for determining the route of unmanned aerial vehicles (UAVs). Such vehicles are already widely used, but they are mainly controlled by a remote operator; an urgent task is to develop a control system that allows for autonomous flight. The SLAM algorithm, which predicts the location, speed, and flight parameters of the vehicle as well as the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. Data association for SLAM is meant to establish a matching set between observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, which makes it suitable for solving the data association problem in SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations when updating the global pheromone helps avoid local optima, and imposing limits on the pheromone along a route increases the search space while keeping the amount of computation needed to find the optimal route reasonable. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm identifies targets in the matching space and the observed landmarks that can be associated according to the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  15. Camouflaging in a complex environment--octopuses use specific features of their surroundings for background matching.

    Directory of Open Access Journals (Sweden)

    Noam Josef

    Full Text Available Living under intense predation pressure, octopuses evolved an effective and impressive camouflaging ability that exploits features of their surroundings to enable them to "blend in." To achieve such background matching, an animal may use general resemblance and reproduce characteristics of its entire surroundings, or it may imitate a specific object in its immediate environment. Using image analysis algorithms, we examined correlations between octopuses and their backgrounds. Field experiments show that when camouflaging, Octopus cyanea and O. vulgaris base their body patterns on selected features of nearby objects rather than attempting to match a large field of view. Such an approach enables the octopus to camouflage in partly occluded environments and to solve the problem of differences in appearance as a function of the viewing inclination of the observer.

  16. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  17. Research on correction algorithm of laser positioning system based on four quadrant detector

    Science.gov (United States)

    Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia

    2018-02-01

    This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experimental system is built around it. In practical applications, a four-quadrant laser positioning system suffers not only from interference by background light and detector dark-current noise, but also from random noise, system-stability limitations, and spot-equivalent errors that cannot be ignored, so system calibration and correction are very important. This paper analyzes the various sources of system positioning error and then proposes an algorithm for correcting the system error; simulation and experimental results show that the correction algorithm reduces the effect of system error on positioning and improves the positioning accuracy.
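
    For reference, the standard (uncorrected) spot-position estimate from the four photocurrents, which a correction algorithm like the paper's then refines; the quadrant labelling convention and the scale factor k are assumptions.

```python
def quad_spot_position(a, b, c, d, k=1.0):
    """Estimate laser-spot displacement from four quadrant photocurrents
    (a = upper-left, b = upper-right, c = lower-left, d = lower-right;
    labelling and scale factor k are illustrative assumptions).

    Normalizing by the total makes the estimate insensitive to overall
    laser-power fluctuations, but not to background light, dark current,
    or spot-size (equivalent) errors, which require further correction."""
    total = a + b + c + d
    x = k * ((b + d) - (a + c)) / total   # horizontal displacement
    y = k * ((a + b) - (c + d)) / total   # vertical displacement
    return x, y

print(quad_spot_position(1.2, 1.0, 1.1, 0.9))  # toy photocurrents
```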

  18. Evaluation of three algorithms to calculate the relative renal function with {sup 99}Tc-DTPA; Evaluation de trois algorithmes pour calculer la fonction renale relative au {sup 99}Tc-DTPA

    Energy Technology Data Exchange (ETDEWEB)

    Charfeddine, S.; Maaloul, M.; Kallel, F.; Chtourou, K.; Guermazi, F. [EPS Habib Bourguiba, Service de Medecine Nucleaire, Sfax (Tunisia)

    2006-06-15

    The aim of our study is to assess the reproducibility and accuracy of three algorithms for determining the relative function of each kidney with {sup 99m}Tc-DTPA. Methods: a prospective study was carried out in volunteer patients. Reproducibility was studied in 11 patients who underwent two examinations with {sup 99m}Tc-DTPA. Accuracy was evaluated in 35 patients who had an additional scintigraphy with {sup 99m}Tc-DMSA taken as the reference. To determine the relative renal function with {sup 99m}Tc-DTPA, three algorithms using various background subtraction methods and time intervals were applied. Results and conclusion: the integral method was the most reproducible and accurate, and was little influenced by the choice of time interval. The reproducibility and accuracy of the Patlak method were worse, especially in cases of renal insufficiency or hydronephrosis. A high background and poor counting statistics explain why the Patlak method was less powerful with {sup 99m}Tc-DTPA than with {sup 99m}Tc-MAG3. The slopes method can no longer be recommended. (author)

  19. Assessing semantic similarity of texts - Methods and algorithms

    Science.gov (United States)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining a structured representative model of the documents in a corpus by extracting and selecting the features that characterize their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA, which derives the meaning of words in a given text by exploring their co-occurrence, is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, are presented.
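
    A compact LSA sketch with scikit-learn: a TF-IDF term-document matrix is reduced by truncated SVD to a low-dimensional concept space, and similarity is then measured there; the corpus and the dimensionality are toy choices.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "a cat rested on a rug",
        "stock markets fell sharply today"]

# Term-document matrix, then rank reduction: the SVD keeps the strongest
# latent concepts, so co-occurring words end up close in the reduced space
tfidf = TfidfVectorizer().fit_transform(docs)
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Docs 0 and 1 score high despite sharing few surface words
print(cosine_similarity(reduced))
```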

  20. A General Approach to Enhance Short Wave Satellite Imagery by Removing Background Atmospheric Effects

    Directory of Open Access Journals (Sweden)

    Ronald Scheirer

    2018-04-01

    Full Text Available Atmospheric interaction distorts the surface signal received by a space-borne instrument: images derived from visible channels often appear too bright and with reduced contrast. This hampers the use of RGB imagery otherwise useful in ocean color applications and in forecasting or operational disaster monitoring, for example of forest fires. In order to correct for the dominant source of atmospheric noise, a simple, fast and flexible algorithm has been developed. The algorithm is implemented in Python and freely available in PySpectral, which is part of the PyTroll family of open-source packages, allowing easy access to powerful real-time image-processing tools. Pre-calculated look-up tables of top-of-atmosphere reflectance are derived by off-line calculations with the RTM DISORT as part of the LibRadtran package. The approach is independent of platform and sensor band, and can be applied to any band in the visible spectral range. Because standard atmospheric profiles and standard aerosol loads are used, only the background disturbance is reduced; signals from excess aerosols thus become more discernible. Examples of uncorrected and corrected satellite images demonstrate that this flexible real-time algorithm is a useful tool for atmospheric correction.

  1. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  2. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently in construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  3. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
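
    A sketch of what a TV-regularized reconstruction objective looks like, minimized here by plain gradient descent with a smoothed isotropic TV penalty; the generic dense forward operator A and the step sizes are illustrative, and real CT work uses statistical weighting and far better optimizers.

```python
import numpy as np

def tv_grad(img, eps=1e-6):
    """Gradient of a smoothed isotropic total-variation penalty."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / norm, axis=1, prepend=(gx / norm)[:, :1])
    div_y = np.diff(gy / norm, axis=0, prepend=(gy / norm)[:1, :])
    return -(div_x + div_y)          # d/dx of sum |grad x| = -div(grad/|grad|)

def cs_reconstruct(A, b, shape, lam=0.05, step=1e-3, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*TV(x) by gradient descent.
    A: (n_meas, n_pixels) forward operator, b: measured data."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad_data = (A.T @ (A @ x.ravel() - b)).reshape(shape)
        x -= step * (grad_data + lam * tv_grad(x))
    return x
```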

  4. Background-Oriented Schlieren (BOS) for Scramjet Inlet-isolator Investigation

    Science.gov (United States)

    Che Idris, Azam; Rashdan Saad, Mohd; Hing Lo, Kin; Kontis, Konstantinos

    2018-05-01

    The background-oriented schlieren (BOS) technique is a recently developed non-intrusive flow diagnostic method whose capabilities have yet to be fully explored. In this paper, the BOS technique is applied to investigate the general flow field characteristics inside a generic scramjet inlet-isolator at Mach 5. The difficulty of finding the delicate balance between measurement sensitivity and image focus over the measurement area is demonstrated, as are the differences between direct cross-correlation (DCC) and Fast Fourier Transform (FFT) raw-data processing algorithms. As an exploratory study of BOS capability, this paper finds that BOS is simple yet robust enough to visualize the complex flow in a scramjet inlet in hypersonic flow. However, in this case its quantitative data can be strongly affected by three-dimensionality, obscuring the density values with significant errors.

  5. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (the Buchberger algorithm [B1], [B2]) and tangent cone orderings (the Mora algorithm [M1], [MPT]) as special cases. It is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].

  6. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2005-08-01

    Full Text Available Abstract Background Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences, as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To mitigate the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as to refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only

  7. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in Mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. In a multimodal optimization problem, one with many local optima, the challenge is to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to those of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a promising point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions; however, if the function is flat, the proposed method does not work well.
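
    A sketch of the two-stage idea with SciPy; for brevity a simple random population sampler stands in for the full ABC global stage, and BFGS then polishes the best point found.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_minimize(f, bounds, n_bees=50, n_cycles=200, rng=None):
    """Two-stage hybrid in the spirit of the paper: a population-based
    global search supplies a starting point that BFGS refines locally.
    (A random-restart sampler stands in for the full ABC algorithm.)"""
    rng = rng or np.random.default_rng()
    lo, hi = np.array(bounds).T
    best_x, best_f = None, np.inf
    for _ in range(n_cycles):
        # Stage 1: scatter a "swarm" over the box and keep the best point
        swarm = rng.uniform(lo, hi, size=(n_bees, len(lo)))
        vals = np.apply_along_axis(f, 1, swarm)
        i = vals.argmin()
        if vals[i] < best_f:
            best_x, best_f = swarm[i], vals[i]
    # Stage 2: local BFGS refinement from the best point found globally
    res = minimize(f, best_x, method="BFGS")
    return res.x, res.fun

# Multimodal test function (Rastrigin) in 2 dimensions
rastrigin = lambda x: 10 * len(x) + sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(hybrid_minimize(rastrigin, [(-5.12, 5.12)] * 2))
```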

  8. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems, inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly is attracted to any brighter firefly and, if there is no brighter firefly, moves randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases; if no such direction is found, the firefly remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one, finding the best solution in less CPU time.

  9. Transmission-less attenuation correction in time-of-flight PET: analysis of a discrete iterative algorithm

    International Nuclear Information System (INIS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2014-01-01

    The maximum likelihood attenuation correction factors (MLACF) algorithm has been developed to calculate the maximum-likelihood estimate of the activity image and the attenuation sinogram in time-of-flight (TOF) positron emission tomography, using only emission data without prior information on the attenuation. We consider the case of a Poisson model of the data, in the absence of scatter or random background. In this case the maximization with respect to the attenuation factors can be achieved in a closed form and the MLACF algorithm works by updating the activity. Despite promising numerical results, the convergence of this algorithm has not been analysed. In this paper we derive the algorithm and demonstrate that the MLACF algorithm monotonically increases the likelihood, is asymptotically regular, and that the limit points of the iteration are stationary points of the likelihood. Because the problem is not convex, however, the limit points might be saddle points or local maxima. To obtain some empirical insight into the latter question, we present data obtained by applying MLACF to 2D simulated TOF data, using a large number of iterations and different initializations. (paper)

  10. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem's input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality ... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

  11. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called "Kepler", inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.

  12. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing, and specialized architectures for numerical computations, are also elaborated. Other topics include the model for analyzing generalized inter-processor communication, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  13. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    Science.gov (United States)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The threat posed by infrared (IR) detection is greater than that posed by other sensors such as radar or sonar, because an object detected by an IR sensor cannot easily recognize that it has been detected. Recently, research on actively reducing the IR signal has been conducted, controlling the IR signature by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background by setting the inverse-distance-weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse-distance-weighted active IR stealth technique proposed in this study reduces the contrast radiant intensity between the object and the background by up to 32% compared to the previous method, in which the CRI is determined as the simple signal difference between the object and the background.

  14. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    Science.gov (United States)

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images: one voxel based and the other run based. For the voxel-based algorithm, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based algorithm, instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we use the run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
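
    As background for the label-equivalence idea, here is a generic two-pass, union-find labeling sketch for 3-D binary arrays with 6-connectivity. It illustrates provisional labels and equivalence resolution but none of the paper's voxel-ordering or run-based optimizations, and the final labels are union-find roots rather than consecutive integers.

        import numpy as np

        def label_3d(volume):
            labels = np.zeros(volume.shape, dtype=np.int32)
            parent = [0]                               # union-find forest
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]      # path halving
                    x = parent[x]
                return x
            def union(a, b):
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[max(ra, rb)] = min(ra, rb)  # record equivalence
            nz, ny, nx = volume.shape
            for z in range(nz):                        # first scan
                for y in range(ny):
                    for x in range(nx):
                        if not volume[z, y, x]:
                            continue
                        # already-visited 6-neighbors (the "mask")
                        nbrs = [labels[z - 1, y, x] if z else 0,
                                labels[z, y - 1, x] if y else 0,
                                labels[z, y, x - 1] if x else 0]
                        nbrs = [l for l in nbrs if l]
                        if not nbrs:
                            parent.append(len(parent)) # new provisional label
                            labels[z, y, x] = len(parent) - 1
                        else:
                            labels[z, y, x] = min(nbrs)
                            for l in nbrs:
                                union(labels[z, y, x], l)
            # second scan: resolve equivalences through a lookup table
            lut = np.array([find(i) for i in range(len(parent))], dtype=np.int32)
            return lut[labels]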

  15. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper, a new key and S-box generation process is developed based on the Self-Synchronizing Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and reasonably low memory, which enhances the algorithm and gives it the possibility of different usages.

  16. Optimal Background Attenuation for Fielded Spectroscopic Detection Systems

    International Nuclear Information System (INIS)

    Robinson, Sean M.; Ashbaker, Eric D.; Schweppe, John E.; Siciliano, Edward R.

    2007-01-01

    Radiation detectors are often placed in positions that are difficult to shield from the effects of terrestrial background gamma radiation. This is particularly true of Radiation Portal Monitor (RPM) systems, as their wide viewing angle and outdoor installations make them susceptible to radiation from the surrounding area. Reducing this source of background can improve gross-count detection capabilities in the current generation of non-spectroscopic RPMs as well as source identification capabilities in the next generation of spectroscopic RPMs. To provide guidance for designing such systems, the problem of shielding a general spectroscopic-capable RPM system from terrestrial gamma radiation is considered. The analysis is carried out with template-matching algorithms, to determine and isolate a set of non-threat isotopes typically present in the commerce stream. Various model detector and shielding scenarios are calculated using the Monte Carlo N-Particle (MCNP) computer code. Amounts of nominal-density shielding needed to increase the probability of detection for an ensemble of illicit sources are given. Common shielding solutions such as steel plating are evaluated based on the probability of detection for three particular illicit sources of interest, and the benefits are weighed against the incremental cost of shielding. Previous work has provided optimal shielding scenarios for RPMs based on gross-counting measurements, and those same solutions (shielding the internal detector cavity, direct shielding of the ground between the detectors, and the addition of collimators) are examined with respect to their utility for improving spectroscopic detection.

  17. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms
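
    As a point of reference for the n-fold way limit mentioned above, the following is a simplified rejection-free Monte Carlo loop for a 2-D Ising lattice. It is a sketch under assumed Metropolis rates and zero applied field; for clarity it recomputes all flip rates every step (O(N) per move), whereas the actual n-fold way maintains spin classes so each update costs O(1).

        import numpy as np

        def rejection_free_steps(spins, T, J=1.0, steps=1000, seed=0):
            rng = np.random.default_rng(seed)
            t = 0.0
            for _ in range(steps):
                # nearest-neighbor sums with periodic boundaries
                nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                      np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
                dE = 2.0 * J * spins * nn               # cost of flipping each spin
                rates = np.minimum(1.0, np.exp(-dE / T)).ravel()
                R = rates.sum()
                i = rng.choice(rates.size, p=rates / R) # pick a flip by its rate
                spins.flat[i] *= -1                     # every move is accepted
                t += -np.log(rng.random()) / R          # stochastic time advance
            return spins, t

        lattice = np.where(np.random.default_rng(1).random((16, 16)) < 0.5, 1, -1)
        final, elapsed = rejection_free_steps(lattice, T=1.5)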

  18. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability inherited from artificial immune systems, the K shortest paths can be found without any repeats, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it achieve better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.
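
    For comparison, K shortest loopless paths are conventionally computed with Yen's algorithm; the sketch below uses the NetworkX implementation of that standard baseline, not the immune-based algorithm of the paper, and the toy weighted graph is an assumption.

        import networkx as nx
        from itertools import islice

        def k_shortest_paths(G, source, target, k, weight="weight"):
            # shortest_simple_paths yields loopless paths in order of cost
            return list(islice(nx.shortest_simple_paths(G, source, target,
                                                        weight=weight), k))

        G = nx.DiGraph()
        G.add_weighted_edges_from([("A", "B", 2), ("B", "D", 2), ("A", "C", 1),
                                   ("C", "D", 4), ("B", "C", 1)])
        print(k_shortest_paths(G, "A", "D", k=3))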

  19. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    The Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA; because of this transition effect, totally new waveforms are produced.

  20. A method for the interpretation of flow cytometry data using genetic algorithms

    Directory of Open Access Journals (Sweden)

    Cesar Angeletti

    2018-01-01

    Full Text Available Background: Flow cytometry analysis is the method of choice for the differential diagnosis of hematologic disorders. It is typically performed by a trained hematopathologist through visual examination of bidimensional plots, making the analysis time-consuming and sometimes too subjective. Here, a pilot study applying genetic algorithms to flow cytometry data from normal and acute myeloid leukemia subjects is described. Subjects and Methods: Initially, Flow Cytometry Standard files from 316 normal and 43 acute myeloid leukemia subjects were transformed into multidimensional FITS image metafiles. Training was performed through introduction of FITS metafiles from 4 normal and 4 acute myeloid leukemia subjects into the artificial intelligence system. Results: Two mathematical algorithms, termed 018330 and 025886, were generated. When tested against a cohort of 312 normal and 39 acute myeloid leukemia subjects, the two algorithms combined showed high discriminatory power, with an area under the receiver operating characteristic (ROC) curve of 0.912. Conclusions: The present results suggest that machine learning systems hold great promise for the interpretation of hematological flow cytometry data.

  1. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight-update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm, and reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm beyond that of the LMS algorithm, at the cost of slower convergence. Computer simulations confirm the mathematical analysis presented.
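
    A minimal sketch of a clipped-input LMS update of this kind is given below; the threshold and step size are illustrative assumptions, and the three-level quantizer maps each input sample to -1, 0, or +1 as described above.

        import numpy as np

        def clip3(x, threshold):
            # Three-level quantizer: 0 inside the threshold band, else sign
            return np.where(np.abs(x) <= threshold, 0.0, np.sign(x))

        def mclms_step(w, x, d, mu=0.01, threshold=0.5):
            e = d - w @ x                             # a-priori error
            w = w + mu * e * clip3(x, threshold)      # clipped-input update
            return w, e

        # Toy system identification run (w_true is an assumed target filter)
        rng = np.random.default_rng(0)
        w_true = np.array([0.5, -0.3, 0.2])
        w = np.zeros(3)
        for _ in range(2000):
            x = rng.normal(size=3)
            d = w_true @ x + 0.01 * rng.normal()
            w, e = mclms_step(w, x, d)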

  2. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  3. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight-training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP over the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in adaptive and adaptable interactive systems, data mining, and other applications.

  4. JEM-X background models

    DEFF Research Database (Denmark)

    Huovelin, J.; Maisala, S.; Schultz, J.

    2003-01-01

    Background and determination of its components for the JEM-X X-ray telescope on INTEGRAL are discussed. A part of the first background observations by JEM-X are analysed and results are compared to predictions. The observations are based on extensive imaging of background near the Crab Nebula...... on revolution 41 of INTEGRAL. Total observing time used for the analysis was 216 502 s, with the average of 25 cps of background for each of the two JEM-X telescopes. JEM-X1 showed slightly higher average background intensity than JEM-X2. The detectors were stable during the long exposures, and weak orbital...... background was enhanced in the central area of a detector, and it decreased radially towards the edge, with a clear vignetting effect for both JEM-X units. The instrument background was weakest in the central area of a detector and showed a steep increase at the very edges of both JEM-X detectors...

  5. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired metaheuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such recent swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other metaheuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  6. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
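
    To make the recursion-equation remark concrete, here is a hedged sketch of the standard amplitude recursion for Grover's algorithm with N items, M of them marked, where k_j and l_j denote the (uniform) amplitudes of marked and unmarked items after j iterations; the notation is assumed for illustration and is not quoted from the talk:

        k_{j+1} = \frac{N-2M}{N}\,k_j + \frac{2(N-M)}{N}\,l_j,
        \qquad
        l_{j+1} = \frac{N-2M}{N}\,l_j - \frac{2M}{N}\,k_j,

        k_0 = l_0 = \frac{1}{\sqrt{N}}
        \;\Longrightarrow\;
        k_j = \frac{\sin\big((2j+1)\theta\big)}{\sqrt{M}},
        \quad \sin^2\theta = \frac{M}{N}.

    For the uniform initial distribution this reproduces the familiar result that roughly (pi/4) * sqrt(N/M) iterations maximize the marked-item amplitude.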

  7. Simulation of modified hybrid noise reduction algorithm to enhance the speech quality

    International Nuclear Information System (INIS)

    Waqas, A.; Muhammad, T.; Jamal, H.

    2013-01-01

    Speech is the most essential method of human communication; mobile telephony, hearing aids, and hands-free devices are specific applications in this respect. The performance of these communication devices can be degraded by the distortions that contaminate the signal. Two essential kinds of distortion may be recognized: convolutive and additive noise. These distortions contaminate the clean speech and make it unsatisfactory to human listeners, i.e., the perceptual quality and intelligibility of the speech signal diminish. The objective of speech enhancement systems is to improve the quality and intelligibility of speech and make it more acceptable to listeners. This paper recommends a modified hybrid approach for single-channel devices to process noisy signals, considering only the effect of background noise. It is a combination of a pre-processing relative spectral amplitude (RASTA) filter, approximated by a straightforward 4th-order band-pass filter, and the conventional minimum mean square error short-time spectral amplitude (MMSE STSA85) estimator. To analyze the performance of the algorithm, an objective parameter called Perceptual Evaluation of Speech Quality (PESQ) is measured. The results show that the modified algorithm performs well in removing background noise. A SIMULINK implementation is also performed, and its profile report has been generated to observe the execution time. (author)
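
    The band-pass approximation of the RASTA pre-processing stage can be sketched in a few lines of Python; the frame rate and modulation-band edges below are illustrative assumptions, and the filter is applied to a log-spectral trajectory, which is where RASTA filtering conventionally operates.

        import numpy as np
        from scipy.signal import butter, lfilter

        frame_rate = 100.0               # feature frames per second (assumed)
        low, high = 1.0, 12.0            # modulation pass-band in Hz (assumed)
        # butter() with N=2 and btype="band" yields a 4th-order band-pass filter
        b, a = butter(2, [low / (frame_rate / 2), high / (frame_rate / 2)],
                      btype="band")

        log_spectrum = np.random.randn(500)       # stand-in log-spectral track
        rasta_like = lfilter(b, a, log_spectrum)  # suppress slow/fast modulations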

  8. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  9. Background fluorescence estimation and vesicle segmentation in live cell imaging with conditional random fields.

    Science.gov (United States)

    Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles

    2015-02-01

    Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology, since it enables the characterization of biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Segmentation of vesicles and background estimation are then alternately performed by energy minimization using a min-cut/max-flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images. This approach permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.

  10. Modeling Adsorption Based Filters (Bio-remediation of Heavy Metal Contaminated Water)

    Science.gov (United States)

    McCarthy, Chris

    I will discuss kinetic models of adsorption, as well as models of filters based on those mechanisms. These mathematical models have been developed in support of our interdisciplinary lab group, which is centered at BMCC/CUNY (City University of New York). Our group conducts research into bio-remediation of heavy metal contaminated water via filtration. The filters are constructed out of biomass, such as spent tea leaves. The spent tea leaves are available in large quantities as a result of the industrial production of tea beverages. The heavy metals bond with the surfaces of the tea leaves (adsorption). The models involve differential equations, stochastic methods, and recursive functions. I will compare the models' predictions to data obtained from computer simulations and experimentally by our lab group. Funding: CUNY Collaborative Incentive Research Grant (Round 12); CUNY Research Scholars Program.
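
    As one concrete instance of the kind of kinetic model mentioned, here is a hedged sketch of Langmuir-type adsorption-desorption kinetics coupled to a solution mass balance, solved with SciPy; all rate constants, capacities, volumes, and masses are illustrative assumptions, not values from the lab group's experiments.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_a, k_d = 0.05, 0.01    # adsorption / desorption rate constants (assumed)
        q_max = 1.0              # sorption capacity of the biomass (assumed)
        V, m = 1.0, 0.5          # solution volume and biomass mass (assumed)

        def rhs(t, y):
            C, q = y             # solute concentration, adsorbed amount per mass
            dq = k_a * C * (q_max - q) - k_d * q   # Langmuir kinetics
            dC = -(m / V) * dq                     # solution-biomass mass balance
            return [dC, dq]

        sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], dense_output=True)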

  11. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on the substitution of binding terms is introduced into the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if directed cyclic graphs are used, the algorithm need not check the binding order, so the OLU algorithm can also be applied to infinite tree data structures, and higher efficiency can be expected. The paper focuses on the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  12. DNA-based watermarks using the DNA-Crypt algorithm

    Directory of Open Access Journals (Sweden)

    Barnekow Angelika

    2007-05-01

    Full Text Available Abstract Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes, such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  13. DNA-based watermarks using the DNA-Crypt algorithm

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes, such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
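
    To visualize the least-significant-base principle shared with image steganography, here is a deliberately simplified toy in Python. It is not the DNA-Crypt program: real DNA-Crypt restricts itself to synonymous substitutions so the encoded protein is unchanged and adds mutation-correction codes, both of which this sketch omits, and the bit-to-base map is an assumption for illustration.

        # Two bits are written into the third ("wobble") base of each codon.
        BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}

        def embed(dna, bits):
            codons = [list(dna[i:i + 3]) for i in range(0, len(dna) - 2, 3)]
            pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
            for codon, pair in zip(codons, pairs):
                codon[2] = BASE_FOR_BITS[pair]   # overwrite the wobble base
            return "".join("".join(c) for c in codons)

        print(embed("ATGGCTAAAGGT", "1001"))     # -> ATGGCCAAAGGT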

  14. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60-0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement, and the previously published HALT-C model was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high risk for developing HCC. PMID:24169273

  15. Experimental verification of preset time count rate meters based on adaptive digital signal processing algorithms

    Directory of Open Access Journals (Sweden)

    Žigić Aleksandar D.

    2005-01-01

    Full Text Available Experimental verification of two optimized adaptive digital signal processing algorithms implemented in two preset time count rate meters was performed according to the appropriate standards. A random pulse generator, realized using a personal computer, was used as an artificial radiation source for preliminary system tests and performance evaluations of the proposed algorithms. Measurement results for background radiation levels were then obtained. Finally, measurements with a natural radiation source, the radioisotope 90Sr-90Y, were carried out. Measurement results, obtained without and with radioisotopes for the specified errors of 10% and 5%, were shown to agree well with theoretical predictions.

  16. An algorithm and program for finding sequence specific oligo-nucleotide probes for species identification

    Directory of Open Access Journals (Sweden)

    Tautz Diethard

    2002-03-01

    Full Text Available Abstract Background The identification of species or species groups with specific oligo-nucleotides as molecular signatures is becoming increasingly popular for bacterial samples. However, it also shows great promise for other small organisms that are taxonomically difficult to track. Results We have devised an algorithm that aims to find the optimal probes for any given set of sequences. The program requires only a crude alignment of these sequences as input and is optimized for performance to deal with very large datasets. The algorithm is designed such that the position of mismatches in the probes influences the selection, and provision is made for single-nucleotide outloops. Program implementations are available for Linux and Windows.

  17. Fast neutron background characterization with the Radiological Multi-sensor Analysis Platform (RadMAP)

    Energy Technology Data Exchange (ETDEWEB)

    Davis, John R., E-mail: john.davis@usma.edu [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); The United States Military Academy, West Point, NY (United States); Brubaker, Erik [Sandia National Laboratories, Livermore, CA (United States); Vetter, Kai [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2017-06-21

    In an effort to characterize the fast neutron radiation background, 16 EJ-309 liquid scintillator cells were installed in the Radiological Multi-sensor Analysis Platform (RadMAP) to collect data in the San Francisco Bay Area. Each fast neutron event was associated with specific weather metrics (pressure, temperature, absolute humidity) and GPS coordinates. The expected exponential dependence of the fast neutron count rate on atmospheric pressure was demonstrated, and event rates were subsequently adjusted given the measured pressure at the time of detection. Pressure-adjusted data were also used to investigate the influence of other environmental conditions on the neutron background rate. Using National Oceanic and Atmospheric Administration (NOAA) coastal area lidar data, an algorithm was implemented to approximate sky-view factors (the total fraction of visible sky) for points along RadMAP's route. Three areas analyzed in San Francisco, Downtown Oakland, and Berkeley all demonstrated a suppression of over 50% in the background rate over the range of sky-view factors measured. This effect, which is due to the shielding of cosmic-ray-produced neutrons by surrounding buildings, was comparable to the pressure influence, which yielded a 32% suppression in the count rate over the range of pressures measured.

  18. Catalytic synthesis of alcoholic fuels for transportation from syngas

    OpenAIRE

    Wu, Qiongxiao; Jensen, Anker Degn; Grunwaldt, Jan-Dierk; Temel, Burcin; Christensen, Jakob Munkholt

    2013-01-01

    This work has investigated the catalytic conversion of syngas into methanol and higher alcohols. Based on input from computational catalyst screening, an experimental investigation of promising catalyst candidates for methanol synthesis from syngas has been carried out. Cu-Ni alloys of different composition have been identified as potential candidates for methanol synthesis. These Cu-Ni alloy catalysts have been synthesized and tested in a fixed-bed continuous-flow reactor for CO hydrogenatio...

  19. Electrical control of superparamagnetism

    Science.gov (United States)

    Yamada, Kihiro T.; Koyama, Tomohiro; Kakizakai, Haruka; Miwa, Kazumoto; Ando, Fuyuki; Ishibashi, Mio; Kim, Kab-Jin; Moriyama, Takahiro; Ono, Shimpei; Chiba, Daichi; Ono, Teruo

    2017-01-01

    The electric field control of superparamagnetism is realized using a Cu/Ni system, in which the deposited Ni shows superparamagnetic behavior above the blocking temperature. An electric double-layer capacitor (EDLC) with the Cu/Ni electrode and a nonmagnetic counter electrode is fabricated to examine the electric field effect on magnetism in the magnetic electrode. By changing the voltage applied to the EDLC, the blocking temperature of the system is clearly modulated.

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, was long inactive. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
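
    Since the Viterbi algorithm is the reference point above, here is a compact trellis-search sketch in Python; the trellis interface and the toy two-state example are assumptions for illustration, and in a real decoder the branch metrics would be derived from the received symbols.

        import numpy as np

        def viterbi(n_states, edges, n_steps, start=0):
            # Minimum-cost (maximum-likelihood) path through a trellis.
            # edges: list of (from_state, to_state, metric_fn), where
            # metric_fn(t) is the branch metric at trellis section t.
            INF = float("inf")
            cost = np.full(n_states, INF)
            cost[start] = 0.0
            back = []
            for t in range(n_steps):
                new = np.full(n_states, INF)
                choice = np.zeros(n_states, dtype=int)
                for s, s2, metric in edges:
                    c = cost[s] + metric(t)
                    if c < new[s2]:
                        new[s2], choice[s2] = c, s   # keep the survivor
                back.append(choice)
                cost = new
            state = int(np.argmin(cost))             # best terminal state
            path = [state]
            for choice in reversed(back):            # trace back survivors
                state = int(choice[state])
                path.append(state)
            return path[::-1], float(cost.min())

        # Toy two-state trellis (constant metrics are assumed)
        edges = [(0, 0, lambda t: 0.1), (0, 1, lambda t: 1.0),
                 (1, 0, lambda t: 1.0), (1, 1, lambda t: 0.5)]
        print(viterbi(2, edges, n_steps=4))          # -> ([0, 0, 0, 0, 0], 0.4)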

  1. Optimal background matching camouflage.

    Science.gov (United States)

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.

  2. Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing

    Energy Technology Data Exchange (ETDEWEB)

    Boardman, Beth Leigh [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-12

    The common theme of this dissertation is sampling-based motion planning, with the two key contributions being in the areas of replanning and spatial load balancing for robotic systems. We begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*) and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and for spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision-free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents' positions and velocities. Given that a solution exists, we prove that livelocks and deadlocks will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step; this maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited-range Graph-based Spatial Load Balancing (GSLB) algorithm, which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis

  3. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them; the user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
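
    A minimal power-iteration PageRank of the kind described, repeating the update until the values stop changing, can be sketched as follows; the damping factor 0.85, the tolerance, and the even redistribution of dangling-page mass are conventional choices assumed here, not taken from the thesis.

        import numpy as np

        def pagerank(links, d=0.85, tol=1e-9):
            # links[i] = list of pages that page i links to
            n = len(links)
            pr = np.full(n, 1.0 / n)
            while True:
                new = np.full(n, (1.0 - d) / n)
                for i, outs in enumerate(links):
                    if outs:
                        new[np.asarray(outs)] += d * pr[i] / len(outs)
                    else:                        # dangling page: spread evenly
                        new += d * pr[i] / n
                if np.abs(new - pr).sum() < tol: # values stopped changing
                    return new
                pr = new

        print(pagerank([[1, 2], [2], [0]]))      # three pages in a small cycle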

  4. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and the improvement of people's living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning systems offer stability, small error, and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms at several levels. First, several common basic RFID methods are introduced; secondly, location methods with higher accuracy are discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, the algorithms of RFID location technology are summarized, deficiencies in the algorithms are pointed out, requirements for follow-up study are put forward, and a vision of better future RFID positioning technology is offered.
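
    Since the LANDMARC algorithm is named above, here is a hedged sketch of its core k-nearest-reference-tag estimate in Python; the value of k, the inverse-square weighting, and all example arrays are illustrative assumptions consistent with the published LANDMARC description, not code from this article.

        import numpy as np

        def landmarc(ss_tag, ss_refs, ref_positions, k=4):
            # E: Euclidean distances between signal-strength vectors
            E = np.linalg.norm(ss_refs - ss_tag, axis=1)
            idx = np.argsort(E)[:k]             # k nearest reference tags
            w = 1.0 / (E[idx] ** 2 + 1e-12)     # inverse-square weighting
            w /= w.sum()
            return w @ ref_positions[idx]       # estimated (x, y) position

        # Example: 4 reference tags at the corners of a unit square (assumed)
        refs_pos = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        refs_ss = np.array([[-60, -55], [-62, -57], [-58, -54], [-61, -56]], float)
        tag_ss = np.array([-59.0, -55.0])
        print(landmarc(tag_ss, refs_ss, refs_pos, k=3))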

  5. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  6. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading-coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described; basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  7. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as "encoded procedures for transforming input data into a desired output, based on specified calculations" (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites' recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions' regulation of algorithms, and algorithms' regulation of our society.

  8. Background Material

    DEFF Research Database (Denmark)

    Zandersen, Marianne; Hyytiäinen, Kari; Saraiva, Sofia

    This document serves as background material to the BONUS Pilot Scenario Workshop, which aims to develop harmonised regional storylines of socio-ecological futures in the Baltic Sea region in a collaborative effort together with other BONUS projects and stakeholders.

  9. New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers.

    Science.gov (United States)

    Rodriguez-Ruiz, Alejandro; Teuwen, Jonas; Vreemann, Suzan; Bouwman, Ramona W; van Engen, Ruben E; Karssemeijer, Nico; Mann, Ritse M; Gubern-Merida, Albert; Sechopoulos, Ioannis

    2017-01-01

    Background The image quality of digital breast tomosynthesis (DBT) volumes depends greatly on the reconstruction algorithm. Purpose To compare two DBT reconstruction algorithms used by the Siemens Mammomat Inspiration system, filtered back projection (FBP) and FBP with iterative optimizations (EMPIRE), using qualitative analysis by human readers and the detection performance of machine learning algorithms. Material and Methods Visual grading analysis was performed by four readers specialized in breast imaging who scored 100 cases reconstructed with both algorithms (70 lesions). Scoring (5-point scale: 1 = poor to 5 = excellent quality) was performed on presence of noise and artifacts, visualization of skin-line and Cooper's ligaments, contrast, and image quality, and, when present, lesion visibility. In parallel, a three-dimensional deep-learning convolutional neural network (3D-CNN) was trained (n = 259 patients, 51 positives with BI-RADS 3, 4, or 5 calcifications) and tested (n = 46 patients, nine positives), separately with FBP and EMPIRE volumes, to discriminate between samples with and without calcifications. The partial area under the receiver operating characteristic curve (pAUC) of each 3D-CNN was used for comparison. Results EMPIRE reconstructions showed better contrast (3.23 vs. 3.10, P = 0.010) and image quality (3.22 vs. 3.03). Conclusion The EMPIRE algorithm provides DBT volumes with better contrast and image quality, fewer artifacts, and improved visibility of calcifications for human observers, as well as improved detection performance with deep-learning algorithms.

  10. Cosmic Microwave Background Timeline

    Science.gov (United States)

    Cosmic Microwave Background Timeline. 1934: Richard Tolman shows that blackbody radiation in an expanding universe cools while remaining blackbody. 1948: Ralph Alpher and Robert Herman predict that the universe will have a blackbody cosmic microwave background with temperature about 5 K. 1955: Tigran Shmaonov reports an early measurement of excess microwave emission. The later measurement of anisotropy in the cosmic microwave background strongly supports the big bang model with gravitational

  11. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    Science.gov (United States)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of the effort at the NASA Ames Research Center to design a computer-vision-based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for the detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence, with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.

  12. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conducted a user experiment in which 12 participants were invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
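
    One simple greedy reading of the multi-prefix rule can be sketched as follows; it illustrates the matching criterion only (each typed word must be a prefix of a distinct word of the term) and is not the optimized implementation analyzed in the paper.

        def multi_prefix_match(query, term):
            words = term.lower().split()
            for q in query.lower().split():
                hit = next((i for i, w in enumerate(words)
                            if w.startswith(q)), None)
                if hit is None:
                    return False
                words.pop(hit)    # each term word satisfies one query word
            return True

        assert multi_prefix_match("opt ner me", "optic nerve meningioma")
        assert not multi_prefix_match("opt ner me", "optic nerve")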

  13. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives, with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not importantly increase with the increase of the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  14. Hanford Site background: Part 1, Soil background for nonradioactive analytes. Revision 1, Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1993-04-01

    Volume two contains the following appendices: description of soil sampling sites; sampling narrative; raw soil background data; background data analysis; sitewide background soil sampling plan; and use of soil background data for the detection of contamination at waste management units on the Hanford Site.

  15. Blind signal processing algorithms under DC biased Gaussian noise

    Science.gov (United States)

    Kim, Namyong; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Distortions caused by a DC-biased laser input can be modeled as DC-biased Gaussian noise, and removing the DC bias is important in the demodulation of the electrical signal in most optical communications. In this paper, a new performance criterion and a related algorithm for unsupervised equalization are proposed for communication systems subject to channel distortions and DC-biased Gaussian noise. The proposed criterion utilizes the Euclidean distance between the Dirac delta function located at zero on the error axis and the probability density function of biased constant modulus errors, where the constant modulus error is defined as the difference between the system output and a constant modulus calculated from the transmitted symbol points. Simulation results under channel models with fading, and with DC bias noise abruptly added to the background Gaussian noise, show that the proposed algorithm converges rapidly even after the interruption of DC bias, proving that the proposed criterion can be effectively applied to optical communication systems corrupted by channel distortions and DC bias noise.

  16. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  17. Background sources at PEP

    International Nuclear Information System (INIS)

    Lynch, H.; Schwitters, R.F.; Toner, W.T.

    1988-01-01

    Important sources of background for PEP experiments are studied. Background particles originate from high-energy electrons and positrons which have been lost from stable orbits, γ-rays emitted by the primary beams through bremsstrahlung in the residual gas, and synchrotron radiation x-rays. The effects of these processes on the beam lifetime are calculated, and estimates of background rates at the interaction region are given. Recommendations for the PEP design, aimed at minimizing background, are presented. 7 figs., 4 tabs

  18. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
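
    As generic background to the convex hull topic listed above (a textbook method, not the thesis's own refined algorithm), Andrew's monotone-chain algorithm computes the hull of a planar point set in O(n log n):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in
    counter-clockwise order, collinear points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```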

  19. Knowledge fusion: Time series modeling followed by pattern recognition applied to unusual sections of background data

    International Nuclear Information System (INIS)

    Burr, T.; Doak, J.; Howell, J.A.; Martinez, D.; Strittmatter, R.

    1996-03-01

    This report describes work performed during FY 95 for the Knowledge Fusion Project, which was sponsored by the Department of Energy, Office of Nonproliferation and National Security. The project team selected satellite sensor data as the one main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features that make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series in a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report describes the implementation and application of this two-step process for separating events from unusual background. As a fortunate by-product of this activity, it is possible to gain a better understanding of the natural background.

  20. Knowledge fusion: Time series modeling followed by pattern recognition applied to unusual sections of background data

    Energy Technology Data Exchange (ETDEWEB)

    Burr, T.; Doak, J.; Howell, J.A.; Martinez, D.; Strittmatter, R.

    1996-03-01

    This report describes work performed during FY 95 for the Knowledge Fusion Project, which was sponsored by the Department of Energy, Office of Nonproliferation and National Security. The project team selected satellite sensor data as the one main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features that make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series in a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report describes the implementation and application of this two-step process for separating events from unusual background. As a fortunate by-product of this activity, it is possible to gain a better understanding of the natural background.

  1. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    Science.gov (United States)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign radioactive materials in the form of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), one-dimensional deterministic radiation transport software capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (or NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.

  2. New application of intelligent agents in sporadic amyotrophic lateral sclerosis identifies unexpected specific genetic background

    Directory of Open Access Journals (Sweden)

    Marocchi Alessandro

    2008-05-01

    Full Text Available Abstract Background Few genetic factors predisposing to the sporadic form of amyotrophic lateral sclerosis (ALS) have been identified, but the pathology itself seems to be a true multifactorial disease in which complex interactions between environmental and genetic susceptibility factors take place. The purpose of this study was to approach genetic data with an innovative statistical method, such as artificial neural networks, to identify a possible genetic background predisposing to the disease. A DNA multiarray panel was applied to genotype more than 60 polymorphisms within 35 genes selected from pathways of lipid and homocysteine metabolism, regulation of blood pressure, coagulation, inflammation, cellular adhesion and matrix integrity, in 54 sporadic ALS patients and 208 controls. Results Advanced intelligent systems based on a novel coupling of artificial neural networks and evolutionary algorithms were applied, and the results obtained were compared with those derived from the use of standard neural networks and classical statistical analysis. An unexpected discovery of a strong genetic background in sporadic ALS, using a DNA multiarray panel and analytical processing of the data with advanced artificial neural networks, was made. The predictive accuracy obtained with Linear Discriminant Analysis and standard artificial neural networks ranged from 70% to 79% (average 75.31%) and from 69.1% to 86.2% (average 76.6%), respectively. The corresponding value obtained with the advanced intelligent systems reached an average of 96.0% (range 94.4% to 97.6%). This latter approach allowed the identification of seven genetic variants essential to differentiate cases from controls: apolipoprotein E arg

  3. EGNAS: an exhaustive DNA sequence design algorithm

    Directory of Open Access Journals (Sweden)

    Kick Alfred

    2012-06-01

    Full Text Available Abstract Background The molecular recognition based on the complementary base pairing of deoxyribonucleic acid (DNA) is the fundamental principle in the fields of genetics, DNA nanotechnology and DNA computing. We present an exhaustive DNA sequence design algorithm that allows the generation of sets containing a maximum number of sequences with defined properties. EGNAS (Exhaustive Generation of Nucleic Acid Sequences) offers the possibility of controlling both interstrand and intrastrand properties. The guanine-cytosine content can be adjusted. Sequences can be forced to start and end with guanine or cytosine. This option reduces the risk of "fraying" of DNA strands. It is possible to limit cross hybridizations of a defined length, and to adjust the uniqueness of sequences. Self-complementarity and hairpin structures of a certain length can be avoided. Sequences and subsequences can optionally be forbidden. Furthermore, sequences can be designed to have minimum interactions with predefined strands and neighboring sequences. Results The algorithm is realized in a C++ program. TAG sequences can be generated and combined with primers for single-base extension reactions, which were described for multiplexed genotyping of single nucleotide polymorphisms. Thereby, possible foldback through intrastrand interaction of TAG-primer pairs can be limited. The design of sequences for specific attachment of molecular constructs to DNA origami is presented. Conclusions We developed a new software tool called EGNAS for the design of unique nucleic acid sequences. The presented exhaustive algorithm allows the generation of larger sets of sequences than previous software under equal constraints. EGNAS is freely available for noncommercial use at http://www.chm.tu-dresden.de/pc6/EGNAS.
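
    A heavily simplified, randomized sketch of constraint-based sequence design in the spirit of the above; EGNAS itself is exhaustive and written in C++, and the constraint set, cut-off values and attempt limit below are illustrative assumptions (real designs would also check reverse complements).

```python
import random

def design_sequences(n, length, gc_min=0.4, gc_max=0.6,
                     max_shared=6, max_attempts=100_000):
    """Accept random sequences whose GC content lies in [gc_min, gc_max]
    and that share no subsequence longer than max_shared with any
    previously accepted sequence (a crude cross-hybridization limit)."""
    accepted, used = [], set()
    for _ in range(max_attempts):
        if len(accepted) == n:
            break
        seq = "".join(random.choice("ACGT") for _ in range(length))
        gc = (seq.count("G") + seq.count("C")) / length
        if not gc_min <= gc <= gc_max:
            continue
        subs = {seq[i:i + max_shared + 1]
                for i in range(length - max_shared)}
        if subs & used:        # too much overlap with an accepted sequence
            continue
        used |= subs
        accepted.append(seq)
    return accepted

print(design_sequences(5, 20))
```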

  4. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from the finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model, for both the previous implementation of EE-FXLMS and the genetic algorithm implementation are compared.

  5. Evaluation of clustering algorithms for protein-protein interaction networks

    Directory of Open Access Journals (Sweden)

    van Helden Jacques

    2006-11-01

    Full Text Available Abstract Background Protein interactions are crucial components of all cellular processes. Recently, high-throughput methods have been developed to obtain a global description of the interactome (the whole network of protein interactions for a given organism). In 2002, the yeast interactome was estimated to contain up to 80,000 potential interactions. This estimate is based on the integration of data sets obtained by various methods (mass spectrometry, two-hybrid methods, genetic studies). High-throughput methods are known, however, to yield a non-negligible rate of false positives, and to miss a fraction of existing interactions. The interactome can be represented as a graph where nodes correspond to proteins and edges to pairwise interactions. In recent years, clustering methods have been developed and applied to extract relevant modules from such graphs. These algorithms require the specification of parameters that may drastically affect the results. In this paper we present a comparative assessment of four algorithms: Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Super Paramagnetic Clustering (SPC), and Molecular Complex Detection (MCODE). Results A test graph was built on the basis of 220 complexes annotated in the MIPS database. To evaluate the robustness to false positives and false negatives, we derived 41 altered graphs by randomly removing edges from or adding edges to the test graph in various proportions. Each clustering algorithm was applied to these graphs with various parameter settings, and the clusters were compared with the annotated complexes. We analyzed the sensitivity of the algorithms to the parameters and determined their optimal parameter values. We also evaluated their robustness to alterations of the test graph. We then applied the four algorithms to six graphs obtained from high-throughput experiments and compared the resulting clusters with the annotated complexes. Conclusion This

  6. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, many deterministic algorithms, such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms, have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, the factorization speed is much slower than that of Pollard's rho algorithm.
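
    The deterministic reference method, Pollard's rho, is standard and compact; a minimal version with Floyd cycle detection (for composite n) looks like this:

```python
from math import gcd

def pollard_rho(n, c=1):
    """Return a nontrivial factor of the composite integer n using
    Pollard's rho with the iteration x -> x^2 + c (mod n)."""
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else pollard_rho(n, c + 1)  # retry with a new c

print(pollard_rho(10403))  # 10403 = 101 * 103
```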

  7. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
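
    The OBL idea mentioned above is commonly defined through the "opposite point" of a candidate inside the search box; below is a minimal sketch of opposition-based initialization (generic, not the specific OAFWA update rules).

```python
import random

def opposite(x, lo, hi):
    """Dimension-wise opposite point of x in the box [lo, hi]."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def obl_init(pop_size, lo, hi, f):
    """Draw each candidate uniformly, evaluate it together with its
    opposite, and keep whichever minimizes the objective f."""
    pop = []
    for _ in range(pop_size):
        x = [random.uniform(l, h) for l, h in zip(lo, hi)]
        pop.append(min(x, opposite(x, lo, hi), key=f))
    return pop

# Example: initialize for minimizing the sphere function on [-5, 5]^2.
pop = obl_init(10, [-5, -5], [5, 5], lambda v: sum(t * t for t in v))
```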

  8. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from the design of neural networks, genetic algorithms and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely the opposite degree-numerical computation (OD-NC) algorithm and the opposite degree-classification computation (OD-CC) algorithm.

  9. 2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography

    Directory of Open Access Journals (Sweden)

    Jianjun Xi

    2016-01-01

    Full Text Available We present a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to adapt to the topography. The electric and magnetic fields are split into primary (background) and secondary (scattered) fields to eliminate the source singularity. For the multiple sources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse matrix parallel shared-memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has an efficient convergence rate. The Jacobian matrix is calculated efficiently by "adjoint forward modelling". The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.
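
    A generic single Gauss-Newton update of the kind such inversions are built on can be sketched as follows; this assumes a dense Jacobian and optional damping, whereas the paper computes the Jacobian by adjoint forward modelling and solves its systems with PARDISO.

```python
import numpy as np

def gauss_newton_step(J, r, lam=0.0):
    """One Gauss-Newton update dm for model parameters m, given the
    Jacobian J (n_data x n_params) and the residual r = d_obs - d_pred.
    lam > 0 adds simple Levenberg-style damping for stability."""
    JtJ = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(JtJ, J.T @ r)

# Iteration skeleton: m = m + gauss_newton_step(jacobian(m), residual(m))
```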

  10. Single image super resolution algorithm based on edge interpolation in NSCT domain

    Science.gov (United States)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value; coefficients below the threshold are judged by the correlation among sub-bands of the same scale to determine whether they are noise, and de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in regions of low contrast between target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and then combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement; the NSCT inverse transform is used to obtain the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it and several common image reconstruction methods were tested on synthetic, motion-blurred and hyperspectral images. The experimental results show that, compared with traditional super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved, and noise is suppressed to some extent.
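
    For reference, the bilinear upscaling step mentioned at the end can be sketched as below; this is a generic implementation operating on a 2D array, which in the paper would be a sub-band coefficient matrix rather than the image itself.

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2D array by the given factor using bilinear interpolation."""
    h, w = img.shape
    H, W = int(h * factor), int(w * factor)
    ys = np.linspace(0, h - 1, H)          # fractional source rows
    xs = np.linspace(0, w - 1, W)          # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```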

  11. Reproducible cancer biomarker discovery in SELDI-TOF MS using different pre-processing algorithms.

    Directory of Open Access Journals (Sweden)

    Jinfeng Zou

    Full Text Available BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS) studies. However, biomarker identification for specific diseases has been hindered by irreproducibility. Specifically, a peak profile extracted from a dataset for biomarker identification depends on the data pre-processing algorithm. Until now, no widely accepted agreement has been reached. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE) peaks from peak profiles produced by three widely used average-spectrum-dependent pre-processing algorithms, based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two important factors that affect the consistency of DE peak identification using different algorithms. One factor is that some DE peaks selected from one peak profile were not detected as peaks in other profiles, and the second is that the statistical power of identifying DE peaks in large peak profiles with many peaks may be low due to the large scale of the tests and the small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles could be improved by a stratified false discovery rate (FDR) control approach, and that the reproducibility of DE peak detection could thereby be increased. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationship among different algorithms and also help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks for a dataset tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should be able to produce peaks sufficient for identifying useful and reproducible biomarkers.

  12. On the evolution of Cu-Ni-rich bridges of Alnico alloys with tempering

    Energy Technology Data Exchange (ETDEWEB)

    Fan, M. [Department of Materials Science and Engineering, North Carolina State University, Campus Box 7907, Raleigh, NC 27695-7907 (United States); Liu, Y. [Department of Materials Science and Engineering, North Carolina State University, Campus Box 7907, Raleigh, NC 27695-7907 (United States); Analytical Instrumentation Facility, North Carolina State University, Raleigh, NC 27695 (United States); Jha, Rajesh; Dulikravich, George S. [Departments of Mechanical and Materials Engineering, MAIDROC, Florida International University, EC3462, 10555 West Flagler Street, Miami, FL 33174 (United States); Schwartz, J.; Koch, C.C. [Department of Materials Science and Engineering, North Carolina State University, Campus Box 7907, Raleigh, NC 27695-7907 (United States)

    2016-12-15

    Tempering is a critical step in Alnico alloy processing, yet the effects of tempering on microstructure have not been well studied. Here we report these effects, in particular the effects on the Cu-Ni bridges. Energy-dispersive X-ray spectroscopy (EDS) maps and line scans show that tempering changes the elemental distribution in the Cu-Ni bridges, but not the morphology and distribution of the Cu bridges. The Cu concentration in the Cu-Ni bridges increases after tempering while the concentrations of other elements decrease, especially Ni and Al. Furthermore, tempering sharpens the Cu bridge boundaries. These effects are primarily related to the large 2C₄₄/(C₁₁ − C₁₂) ratio for Cu, the largest of all elements in Alnico. In addition, the Ni-Cu loops around the α₁ phases become inconspicuous with tempering. The diffusion of Fe and Co to the α₁ phase during tempering, which increases the difference in saturation magnetization between the α₁ and α₂ phases, is observed by EDS. In summary, the α₁ and α₂ phases and the Cu bridges become concentrated in their major elements during tempering, which improves the magnetic properties. The formation of these features through elemental diffusion is discussed via energy theories. - Highlights: • Tempering changes the elemental distribution in the Cu-Ni bridges, but not the morphology. • The Cu concentration in the Cu-Ni bridges increases after tempering while others decrease. • These effects are related to the large 2C₄₄/(C₁₁ − C₁₂) ratio for Cu. • The Ni-Cu loops around the α₁ phases become inconspicuous with tempering. • The diffusion of Fe and Co to the α₁ phase during tempering is observed by EDS.

  13. Can experimental data in humans verify the finite element-based bone remodeling algorithm?

    DEFF Research Database (Denmark)

    Wong, C.; Gehrchen, P.M.; Kiaer, T.

    2008-01-01

    STUDY DESIGN: A finite element analysis-based bone remodeling study in humans was conducted in the lumbar spine operated on with pedicle screws. Bone remodeling results were compared to prospective experimental bone mineral content data of patients operated on with pedicle screws. OBJECTIVE: The validity of 2 bone remodeling algorithms was evaluated by comparing against prospective bone mineral content measurements. Also, the potential stress shielding effect was examined using the 2 bone remodeling algorithms and the experimental bone mineral data. SUMMARY OF BACKGROUND DATA: In previous studies... operated on with pedicle screws between L4 and L5. The stress shielding effect was also examined. The bone remodeling results were compared with prospective bone mineral content measurements of 4 patients, measured after surgery and 3, 6 and 12 months postoperatively. RESULTS: After 1 year

  14. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    Science.gov (United States)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to flag pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are of limited use for deep comparisons, because they discard relevant factors required in real-time applications, such as run times, costs of misclassification and the competence to mark anomalies with high scores. This last factor is fundamental in anomaly detection, in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been carried out using different anomaly detection algorithms, comparing their performance and efficiency using several extra metrics to complement the ROC curve analysis. Results support our proposal and demonstrate that ROC curves by themselves do not provide a good visualization of detection performance. Moreover, a figure of merit is proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performance according to ROC curves do not have the highest DE values. Consequently, the recommendation to use extra measures to properly evaluate performance has been supported and justified by
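
    A minimal sketch of how the ROC points discussed above are obtained from detector scores (generic; tied-threshold deduplication and the paper's additional metrics are omitted):

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep the decision threshold over the sorted anomaly scores and
    return the resulting (FPR, TPR) arrays; labels are 0/1."""
    order = np.argsort(-np.asarray(scores))   # highest score first
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / max(y.sum(), 1)
    fpr = np.cumsum(1 - y) / max((1 - y).sum(), 1)
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve by trapezoidal integration."""
    return float(np.trapz(tpr, fpr))
```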

  15. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  16. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    Science.gov (United States)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.

  17. Analysis of Correlation between an Accelerometer-Based Algorithm for Detecting Parkinsonian Gait and UPDRS Subscales

    Directory of Open Access Journals (Sweden)

    Alejandro Rodríguez-Molinero

    2017-09-01

    Full Text Available Background: Our group earlier developed a small monitoring device, which uses accelerometer measurements to accurately detect motor fluctuations in patients with Parkinson's (On and Off state) based on an algorithm that characterizes gait through the frequency content of strides. To further validate the algorithm, we studied the correlation of its outputs with the motor section of the Unified Parkinson's Disease Rating Scale part III (UPDRS-III). Method: Seventy-five patients suffering from Parkinson's disease were asked to walk both in the Off and the On state while wearing the inertial sensor on the waist. Additionally, all patients were administered the motor section of the UPDRS in both motor phases. Tests were conducted at the patient's home. Convergence between the algorithm and the scale was evaluated by using Spearman's correlation coefficient. Results: Correlation with the UPDRS-III was moderate (rho −0.56; p < 0.001). Correlation between the algorithm outputs and the gait item in the UPDRS-III was good (rho −0.73; p < 0.001). The factorial analysis of the UPDRS-III has repeatedly shown that several of its items can be clustered under the so-called Factor 1: "axial function, balance, and gait." The correlation between the algorithm outputs and this factor of the UPDRS-III was −0.67 (p < 0.01). Conclusion: The correlation achieved by the algorithm with the UPDRS-III scale suggests that this algorithm might be a useful tool for monitoring patients with Parkinson's disease and motor fluctuations.

  18. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    Science.gov (United States)

    Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.

    2015-06-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is restored in accordance with its hydrodynamical basis. Its use does not depend on the type of flow or on the types of gaps or noise in the measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, will contaminate the measurements.

  19. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract Background Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (infected for up to 12 months) or older infection. Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and

  20. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    International Nuclear Information System (INIS)

    Vlasenko, Andrey; Steele, Edward C C; Nimmo-Smith, W Alex M

    2015-01-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is restored in accordance with its hydrodynamical basis. Its use does not depend on the type of flow or on the types of gaps or noise in the measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, will contaminate the measurements. (paper)

  1. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting, where the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
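
    A common concrete instance of such a scheme is recursive least squares with a scalar exponential forgetting factor; the sketch below shows one update step in the generic textbook form, not the selective (non-uniform) forgetting algorithm of the paper.

```python
import numpy as np

def rls_forgetting(phi, y, theta, P, lam=0.98):
    """One recursive least-squares update with forgetting factor lam
    (0 < lam <= 1; smaller lam discounts old data faster).
    phi: regressor vector (n,), y: scalar observation,
    theta: parameter estimate (n,), P: covariance matrix (n, n)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                # covariance update
    return theta, P
```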

  2. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference Genetics Selection Evolution 2010, 42:29

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg

    2010-01-01

    Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative"... (records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to the sire-dam model). Conclusions The new algorithm to estimate genetic parameters via Gibbs sampling solves the bias problems typically occurring in animal threshold model analyses.

  3. PROGNOSTIC ALGORITHM FOR DISEASE FLOW IN PULMONARY AND THORACIC LYMPH NODES SARCOIDOSIS

    Directory of Open Access Journals (Sweden)

    S. A. Terpigorev

    2014-01-01

    Full Text Available Background: Sarcoidosis is a systemic granulomatosis commonly affecting the respiratory system. The variable and often unpredictable course of the disease provides the rationale for developing a prognostic algorithm. Aim: To detect predictive parameters in pulmonary and thoracic lymph node sarcoidosis; to develop a prognostic algorithm. Materials and methods: The results of examination of 113 patients (85 women and 28 men, aged 19–77 years) with morphologically verified sarcoidosis were assessed. Clinical manifestations and functional, radiographic (including CT numerical scores) and morphological features of the disease were analyzed against 3-year outcomes in prednisolone/hydroxychloroquine-treated or treatment-naive patients. Results: Radiographic stage, CT-pattern scores, several parameters of pulmonary function tests (DLCO, RV, FEV1, FVC) and dyspnea had the greatest prognostic significance for the disease course. Prognostic accuracy was 87.8% and increased to 94.5% after the one-year dynamics of symptoms were taken into account. Therapy with systemic glucocorticosteroids did not influence outcomes in sarcoidosis with asymptomatic enlargement of thoracic lymph nodes. Conclusion: We have developed an algorithm for prognosis assessment in pulmonary sarcoidosis. Taking into account the results of patient follow-up significantly improves the accuracy of the prognosis.

  4. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann's form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled "extimate" writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  5. A new LMS algorithm for analysis of atrial fibrillation signals

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2012-03-01

    Full Text Available Abstract Background A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Method Equations for normalization of x-axis and y-axis shift and scale are first derived. The algorithm is implemented for real-time analysis of CFAE acquired during atrial fibrillation (AF). Data were acquired at a 977 Hz sampling rate from 10 paroxysmal and 10 persistent AF patients undergoing clinical electrophysiologic study and catheter ablation therapy. Over 24 trials, normalization characteristics using the new algorithm with four weights were compared to the Widrow-Hoff LMS algorithm with four tapped delays. The time for convergence and the mean squared error (MSE) after convergence were compared. The new LMS algorithm was also applied to lead aVF of the electrocardiogram in one patient with longstanding persistent AF, to enhance the F wave and to monitor extrinsic changes in signal shape. The average waveform over a 25 s interval was used as a prototypical reference signal for matching with the aVF lead. Results Based on the derivation equations, the y-shift and y-scale adjustments of the new LMS algorithm were shown to be equivalent to the scalar form of the Widrow-Hoff LMS algorithm. For x-shift and x-scale adjustments, rather than implementing a long tapped delay as in Widrow-Hoff LMS, the new method uses only two weights. After convergence, the MSE for matching paroxysmal CFAE averaged 0.46 ± 0.49 μV²/sample for the new LMS algorithm versus 0.72 ± 0.35 μV²/sample for Widrow-Hoff LMS. The MSE for matching persistent CFAE averaged 0.55 ± 0.95 μV²/sample for the new LMS algorithm versus 0.62 ± 0.55 μV²/sample for Widrow-Hoff LMS.
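
    For orientation, the classic Widrow-Hoff LMS baseline with a tapped delay line can be sketched as follows; the paper's differential-steepest-descent variant with shift and scale weights is not reproduced here.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.01):
    """Adapt weights w so that the tapped-delay output w . u[k] tracks
    the desired signal d; returns the filter output and final weights."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]   # most recent sample first
        y[k] = w @ u
        e = d[k] - y[k]             # instantaneous error
        w += 2 * mu * e * u         # steepest-descent weight update
    return y, w
```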

  6. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs

    Directory of Open Access Journals (Sweden)

    Vaughn Matthew

    2010-11-01

    Full Text Available Abstract Background Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures they employ: the first class uses an overlap/string graph and the second uses a de Bruijn graph. With the recent advances in short-read sequencing technology, however, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm was given for this problem, where n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it easy to extend even to the out-of-core model, in which case it has an optimal I/O complexity of Θ((n/B) log(n/B)/log(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. Conclusions The bi
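
    A toy sketch of the underlying data structure: building a plain (not bi-directed) de Bruijn graph from a set of reads, ignoring reverse complements and all of the parallel and out-of-core machinery discussed above.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Nodes are (k-1)-mers; each k-mer in a read contributes an edge
    from its (k-1)-mer prefix to its (k-1)-mer suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

print(dict(de_bruijn(["ACGTACG"], 3)))
# {'AC': ['CG', 'CG'], 'CG': ['GT'], 'GT': ['TA'], 'TA': ['AC']}
```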

  7. Matching of Remote Sensing Images with Complex Background Variations via Siamese Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Haiqing He

    2018-02-01

    Full Text Available Feature-based matching methods have been widely used in remote sensing image matching, given their capability to achieve excellent performance despite image geometric and radiometric distortions. However, most feature-based methods are unreliable under complex background variations, because the gradient or other image grayscale information used to construct the feature descriptor is sensitive to such variations. Recently, deep learning-based methods have been proven suitable for high-level feature representation and comparison in image matching. Inspired by the progress made in deep learning, a new technical framework for remote sensing image matching based on a Siamese convolutional neural network is presented in this paper. First, a Siamese-type network architecture is designed to simultaneously learn the features and the corresponding similarity metric from labeled training examples of matching and non-matching true-color patch pairs. In the proposed network, two streams of convolutional and pooling layers sharing identical weights are arranged without manually designed features. The number of convolutional layers is determined based on the factors that affect image matching. The sigmoid function is employed to compute the matching and non-matching probabilities in the output layer. Second, a gridding sub-pixel Harris algorithm is used to obtain accurate localization of candidate matches. Third, a Gaussian pyramid coupled with a quadtree is adopted to gradually narrow down the search space of the candidate matches, and multiscale patches are compared synchronously. Subsequently, a similarity measure based on the output of the sigmoid is adopted to find the initial matches. Finally, the random sample consensus algorithm and whole-to-local quadratic polynomial constraints are used to remove false matches. In the experiments, different types of satellite datasets, such as ZY3, GF1, IKONOS, and Google Earth images

  8. A New Algorithm for Radioisotope Identification of Shielded and Masked SNM/RDD Materials

    International Nuclear Information System (INIS)

    Jeffcoat, R.

    2012-01-01

    Detection and identification of shielded and masked nuclear materials is crucial to national security, but vast borders and high volumes of traffic impose stringent requirements for practical detection systems. Such tools must be mobile, and hence low power, provide a low false alarm rate, and be sufficiently robust to be operable by non-technical personnel. Currently fielded systems have not achieved all of these requirements simultaneously. Transport modeling such as that done in GADRAS is able to predict observed spectra to a high degree of fidelity; our research focuses on a radionuclide identification algorithm that inverts this modeling within the constraints imposed by a handheld device. Key components of this work include the incorporation of uncertainty as a function of both the background radiation estimate and the hypothesized sources, dimensionality reduction, and nonnegative matrix factorization. We have partially evaluated the performance of our algorithm on a third-party data collection made with two different sodium iodide detection devices. Initial results indicate, with caveats, that our algorithm performs as well as or better than the on-board identification algorithms. The system developed was based on a probabilistic approach with an improved approach to variance modeling relative to past work. This system was chosen, based on technical innovation and system performance, over algorithms developed at two competing research institutions. One key outcome of this probabilistic approach was the development of an intuitive measure of confidence, which was indeed useful enough that a classification algorithm was developed based around alarming on high-confidence targets. This paper will present and discuss results of this novel approach to accurately identifying shielded or masked radioisotopes with radiation detection systems.

  9. An algorithm to discover gene signatures with predictive potential

    Directory of Open Access Journals (Sweden)

    Hallett Robin M

    2010-09-01

    Full Text Available Abstract Background The advent of global gene expression profiling has generated unprecedented insight into our molecular understanding of cancer, including breast cancer. For example, human breast cancer patients display significant diversity in terms of their survival, recurrence, and metastasis, as well as response to treatment. These patient outcomes can be predicted by the transcriptional programs of their individual breast tumors. Predictive gene signatures allow us to correctly classify human breast tumors into various risk groups as well as to more accurately target therapy to ensure more durable cancer treatment. Results Here we present a novel algorithm to generate gene signatures with predictive potential. The method first classifies the expression intensity for each gene, as determined by global gene expression profiling, as low, average or high. The matrix containing the classified data for each gene is then used to score the expression of each gene based on its individual ability to predict the patient characteristic of interest. Finally, all examined genes are ranked based on their predictive ability, and the most highly ranked genes are included in the master gene signature, which is then ready for use as a predictor. This method was used to accurately predict the survival outcomes in a cohort of human breast cancer patients. Conclusions We confirmed the capacity of our algorithm to generate gene signatures with bona fide predictive ability. The simplicity of our algorithm will enable biological researchers to quickly generate valuable gene signatures without specialized software or extensive bioinformatics training.
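
    A minimal sketch of the two steps described above; the tertile cut-offs and the mean-difference score are illustrative assumptions, since the abstract does not fix those choices.

```python
import numpy as np

def discretize(expr):
    """Classify each gene's expression (rows = genes, columns = samples)
    as low (-1), average (0) or high (+1) using per-gene tertiles."""
    lo, hi = np.percentile(expr, [33.3, 66.7], axis=1, keepdims=True)
    return np.where(expr < lo, -1, np.where(expr > hi, 1, 0))

def rank_genes(disc, outcome):
    """Score each gene by how well its discretized level alone separates
    the two outcome groups (0/1), then rank genes best-first."""
    outcome = np.asarray(outcome)
    scores = [abs(row[outcome == 1].mean() - row[outcome == 0].mean())
              for row in disc]
    return np.argsort(scores)[::-1]
```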

  10. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method in which the real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm fall easily into a local optimum during the learning process. The quantum genetic algorithm (QGA) has good directional global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. So, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly and effectively converges to a solution that conforms to the constraint conditions.

  11. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)

  12. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  13. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  14. Contribution of 210Pb bremsstrahlung to the background of lead shielded gamma spectrometers

    International Nuclear Information System (INIS)

    Mrda, D.; Bikit, I.; Veskovic, M.; Forkapic, S.

    2007-01-01

    Lead, which is often used as a shielding material, contains 210Pb (T1/2 = 22.3 y). The 46.54 keV γ-intensity of 210Pb can be easily reduced by an inner lining, but the bremsstrahlung caused by the β-decay of its daughter, 210Bi, with a maximal electron energy of 1.16 MeV, will contribute to the gamma detector background. The spectrum of this bremsstrahlung is calculated by numerically fitting the β-spectrum and integrating the Koch-Motz formula. The absorption of the bremsstrahlung in the lead and the detection efficiencies of the HPGe detector are calculated by the effective solid angle algorithm, using corrections for the photopeak/Compton ratio of cross-sections in Ge. By comparison with the measured background spectrum, it is shown that, for lead containing 25 Bq/kg of 210Pb, the bremsstrahlung contribution to the background below 500 keV is about 20% for our surface-based detector system. We also compared our calculations with a Monte Carlo simulation of another detector system, with a shield containing 1 Bq/kg of 210Pb, and found that our analytical method gives a total bremsstrahlung contribution roughly two times higher than the Monte Carlo one. The quality of the analytical semi-empirical method is proved by the reasonable agreement with the published experimental results

  15. A numerical study of super-resolution through fast 3D wideband algorithm for scattering in highly-heterogeneous media

    KAUST Repository

    Létourneau, Pierre-David

    2016-09-19

    We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies such as spherical scatterers in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly-scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical arguments towards the fact that such a phenomenon can be explained through a homogenization theory.

  16. Improved Gravitation Field Algorithm and Its Application in Hierarchical Clustering

    Science.gov (United States)

    Zheng, Ming; Sun, Ying; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Background Gravitation field algorithm (GFA) is a new optimization algorithm based on an imitation of natural phenomena. GFA performs well both at searching for the global minimum and at searching for multiple minima in computational biology, but it needs to be improved to increase its efficiency and modified to apply to some discrete data problems in systems biology. Method An improved GFA called IGFA was proposed in this paper. Two parts of GFA were improved. The first is the rule of random division, a reasonable strategy that shortens the running time. The second is the rotation factor, which improves the accuracy of IGFA. To apply IGFA to hierarchical clustering, the initialization and the movement operator were also modified. Results Two kinds of experiments were used to test IGFA, and IGFA was then applied to hierarchical clustering. The global-minimum experiment compared IGFA with GFA, GA (genetic algorithm) and SA (simulated annealing); the multi-minima experiment compared IGFA with GFA. The results of the two experiments demonstrate the efficiency of IGFA: it is better than GFA in both accuracy and running time. For hierarchical clustering, IGFA was used to optimize the smallest distance between gene pairs, and the results were compared with GA, SA, single-linkage clustering and UPGMA, confirming the efficiency of IGFA. PMID:23173043

  17. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods, but with competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  18. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  19. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
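
    The distance-to-average-point measure is simple to compute: the mean, over the population, of each individual's Euclidean distance to the population centroid, normalized by the length of the search-space diagonal. A minimal sketch of the measure and the phase switch it drives; the two thresholds are assumed values for illustration:

```python
import numpy as np

# assumed thresholds for switching between exploitation and exploration
D_LOW, D_HIGH = 5e-6, 0.25

def diversity(population, lower, upper):
    """Distance-to-average-point measure: mean Euclidean distance from each
    individual (one per row) to the centroid, scaled by the search-space diagonal."""
    diagonal = np.linalg.norm(np.asarray(upper) - np.asarray(lower))
    centroid = population.mean(axis=0)
    return np.linalg.norm(population - centroid, axis=1).mean() / diagonal

def next_mode(current_mode, div):
    """Alternate phases: explore (mutation) when diversity collapses,
    exploit (recombination and selection) once diversity has recovered."""
    if current_mode == "exploit" and div < D_LOW:
        return "explore"
    if current_mode == "explore" and div > D_HIGH:
        return "exploit"
    return current_mode

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(50, 10))
d = diversity(pop, np.full(10, -5.0), np.full(10, 5.0))
print(d, next_mode("exploit", d))
```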

  20. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the Connection Machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. (orig.)
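
    The bucket-sort step can be sketched as a spatial hash: each node is assigned to a cell of an axis-aligned grid, and candidate contact pairs are drawn only from a node's own and neighboring cells, avoiding the all-pairs check. The cell size and the dictionary-based grid are assumed details, not the paper's exact implementation:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def bucket_sort_nodes(coords, cell_size):
    """Map each node to an integer 3D cell index (its 'bucket')."""
    buckets = defaultdict(list)
    for node, xyz in enumerate(coords):
        buckets[tuple((xyz // cell_size).astype(int))].append(node)
    return buckets

def candidate_pairs(coords, cell_size):
    """Collect node pairs that share a bucket or sit in adjacent buckets;
    only these pairs need an exact contact check."""
    buckets = bucket_sort_nodes(coords, cell_size)
    pairs = set()
    for cell, nodes in buckets.items():
        for offset in product((-1, 0, 1), repeat=3):
            neighbor = tuple(c + o for c, o in zip(cell, offset))
            for a in nodes:
                for b in buckets.get(neighbor, ()):
                    if a < b:
                        pairs.add((a, b))
    return pairs

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(1000, 3))
near = candidate_pairs(coords, cell_size=0.5)
print(len(near), "candidate pairs instead of", 1000 * 999 // 2)
```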

  1. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
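
    Grover's algorithm is easy to simulate classically on the state vector: each iteration applies the oracle (a sign flip on the marked item) followed by the diffusion operator (inversion about the mean amplitude), and roughly (π/4)·√N iterations drive the marked item's probability close to 1. A small single-target sketch:

```python
import numpy as np

def grover(n_qubits, target):
    """Simulate Grover search for one marked item among N = 2**n_qubits."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[target] *= -1                     # oracle: flip the target's sign
        state = 2 * state.mean() - state        # diffusion: invert about the mean
    return state

state = grover(n_qubits=8, target=42)           # N = 256, 12 iterations
probs = state ** 2
print(np.argmax(probs), probs.max())            # 42, probability close to 1
```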

  2. Algorithms and data structures for automated change detection and classification of sidescan sonar imagery

    Science.gov (United States)

    Gendron, Marlin Lee

    During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI, in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office are currently using ACDC to reduce the amount of time needed to perform change detection. The dissertation's introductory chapter gives background information on change detection and ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3 and 48.4 times faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and the results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features. A comparison between the CAS and a great circle distance algorithm shows that the
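
    The record does not give the GB internals, so the following is only a toy grid-of-bits illustration of the general idea of storing information geographically (marking cells from one survey and intersecting them with another), not Gendron's actual data structure; the class name, grid resolution, and coordinates are all hypothetical.

```python
import numpy as np

class GeoBitmap:
    """Toy geospatial bitmap: one bit per lat/lon cell of a fixed grid."""
    def __init__(self, lat0, lon0, lat1, lon1, cell_deg):
        self.lat0, self.lon0, self.cell = lat0, lon0, cell_deg
        self.bits = np.zeros((int((lat1 - lat0) / cell_deg),
                              int((lon1 - lon0) / cell_deg)), dtype=bool)

    def _cell(self, lat, lon):
        return (int((lat - self.lat0) / self.cell),
                int((lon - self.lon0) / self.cell))

    def set(self, lat, lon):
        self.bits[self._cell(lat, lon)] = True

    def test(self, lat, lon):
        return bool(self.bits[self._cell(lat, lon)])

    def intersect(self, other):
        """Cells flagged in both bitmaps, e.g. detections present in both
        a historical and a recent survey."""
        out = GeoBitmap.__new__(GeoBitmap)
        out.lat0, out.lon0, out.cell = self.lat0, self.lon0, self.cell
        out.bits = self.bits & other.bits
        return out

old, new = (GeoBitmap(30.0, -89.0, 31.0, -88.0, 0.001) for _ in range(2))
old.set(30.5, -88.5); new.set(30.5, -88.5); new.set(30.7, -88.2)
print(old.intersect(new).test(30.5, -88.5))   # True: object seen in both surveys
```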

  3. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison, where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation, where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The
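
    For the validation scenario (algorithm boundaries versus human annotations), a common spatial comparison is the Jaccard index: the area of overlap of two boundary polygons divided by the area of their union. A minimal sketch using the shapely library; the polygons and the 0.5 agreement threshold are illustrative, and the platform's actual queries run inside the spatial database rather than in Python.

```python
from shapely.geometry import Polygon

def jaccard(a: Polygon, b: Polygon) -> float:
    """Area of overlap divided by area of union of two boundaries."""
    union = a.union(b).area
    return a.intersection(b).area / union if union else 0.0

# hypothetical nucleus boundaries: algorithm result vs. human annotation
algo  = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
human = Polygon([(2, 2), (12, 2), (12, 12), (2, 12)])

score = jaccard(algo, human)
print(f"Jaccard = {score:.3f}, match = {score >= 0.5}")   # 0.471, False
```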

  4. Evaluation of amplitude-based sorting algorithm to reduce lung tumor blurring in PET images using 4D NCAT phantom.

    Science.gov (United States)

    Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony

    2007-08-01

    Develop and validate a PET sorting algorithm based on the respiratory amplitude to correct for abnormal respiratory cycles. Using the 4D NCAT phantom model, 3D PET images were simulated in lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude binning algorithm, the NCAT phantom was used to simulate one case of five different respiratory periods and another case of five respiratory periods along with five respiratory amplitudes. Comparisons were performed between gated and un-gated images, and between the new amplitude binning algorithm and the time binning algorithm, by calculating the mean number of counts in the ROI (region of interest). An average improvement of 8.87 ± 5.10% was reported for a total of 16 tumors with different tumor sizes and different T/B (tumor to background) ratios using the new sorting algorithm. As both the T/B ratio and the tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller diameter tumors and lower T/B ratios indicates a potential improvement in detecting more problematic tumors.
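
    The amplitude-based sorting idea can be sketched as follows: instead of dividing each breathing cycle into equal time slices, samples are assigned to bins according to the value of the respiratory amplitude signal, so irregular cycles no longer mix different tumor positions within one bin. The synthetic signal, the bin count, and the equal-width binning are assumed details, not the paper's exact scheme:

```python
import numpy as np

def amplitude_bins(amplitude, n_bins=8):
    """Assign each time sample to a bin by respiratory amplitude
    (equal-width bins between the signal's min and max)."""
    edges = np.linspace(amplitude.min(), amplitude.max(), n_bins + 1)
    return np.clip(np.digitize(amplitude, edges) - 1, 0, n_bins - 1)

def phase_bins(t, period, n_bins=8):
    """Conventional time (phase) binning for comparison: equal time
    slices of an assumed fixed respiratory period."""
    return ((t % period) / period * n_bins).astype(int)

# irregular breathing: drifting period and amplitude
t = np.linspace(0, 60, 6000)
amp = (1 + 0.3 * np.sin(0.05 * t)) * np.sin(2 * np.pi * t / (4 + 0.5 * np.sin(0.1 * t)))

a_bins = amplitude_bins(amp)
p_bins = phase_bins(t, period=4.0)
# worst within-bin amplitude spread: smaller spread means less tumor blurring
print("amplitude binning:", max(np.ptp(amp[a_bins == b]) for b in range(8)))
print("time binning:     ", max(np.ptp(amp[p_bins == b]) for b in range(8)))
```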

  5. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
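
    The combinatorial random-search idea can be sketched generically: repeatedly propose a random swap of two magnets in the installation order and keep the swap when the goal function decreases. The quadratic "smear" stand-in below is only a placeholder for the real phase-space distortion computed from the lattice optics:

```python
import random

def smear(order, errors):
    """Placeholder goal function: penalizes neighboring magnets whose field
    errors add up instead of cancelling (a stand-in for the real phase-space
    distortion; the ring is closed, hence the wrap-around pair)."""
    return sum((errors[a] + errors[b]) ** 2
               for a, b in zip(order, order[1:] + order[:1]))

def sort_magnets(errors, n_trials=20000, seed=1):
    """Random pairwise-swap search that accepts only improving swaps."""
    rng = random.Random(seed)
    order = list(range(len(errors)))
    best = smear(order, errors)
    for _ in range(n_trials):
        i, j = rng.sample(range(len(order)), 2)      # random pairwise swap
        order[i], order[j] = order[j], order[i]
        s = smear(order, errors)
        if s < best:
            best = s                                  # keep improving swaps
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return order, best

gen = random.Random(7)
errors = [gen.gauss(0, 1) for _ in range(24)]         # measured field errors
initial = smear(list(range(24)), errors)
order, best = sort_magnets(errors)
print(f"smear {initial:.2f} -> {best:.2f}")
```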

  6. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    Science.gov (United States)

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  7. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search schemes, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed schemes perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines the three proposed search schemes “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.

  8. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach, and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to be effective in achieving good localization accuracy and fewer false positives. The main contribution, however, is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.
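
    For context on what artifact removal must accomplish, a widely used baseline subtracts the average of all channels from each channel: the early skin/skull reflection is nearly identical across antennas and cancels, while the target response arrives with different delays on different channels and survives. This is only the simple baseline such papers improve on, not the proposed statistical algorithm; the synthetic signals below are illustrative.

```python
import numpy as np

def average_subtraction(signals):
    """Remove the early-time artifact by subtracting, from every channel,
    the mean over all channels (the artifact is assumed nearly identical
    across antennas; the delayed target response is not, so it survives)."""
    return signals - signals.mean(axis=0, keepdims=True)

# synthetic example: strong common artifact + weak, differently delayed echoes
rng = np.random.default_rng(0)
t = np.arange(512)
artifact = 5.0 * np.exp(-((t - 40) / 8.0) ** 2)             # same on all channels
signals = np.tile(artifact, (16, 1))
for ch, delay in enumerate(rng.integers(200, 400, size=16)):
    signals[ch] += 0.2 * np.exp(-((t - delay) / 6.0) ** 2)  # target echo
signals += rng.normal(scale=0.01, size=signals.shape)

clean = average_subtraction(signals)
# artifact energy in the early window drops by orders of magnitude
print(np.abs(signals[:, :100]).max(), "->", np.abs(clean[:, :100]).max())
```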

  9. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  10. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    International Nuclear Information System (INIS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-01-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  11. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  12. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  13. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  14. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms, analogous to those of finance, can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
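
    The portfolio effect is easy to reproduce classically: for a stochastic algorithm whose time-to-solution is a heavy-tailed random variable, running several independent copies in parallel and stopping at the first success reduces both the expected completion time and its variance. A Monte Carlo sketch with an assumed lognormal run-time distribution (the distribution and parameters are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def portfolio_times(k, trials=100_000):
    """Completion time of a k-algorithm portfolio: k independent copies run
    in parallel, and the first to finish wins (assumed lognormal
    single-run time-to-solution)."""
    runs = rng.lognormal(mean=0.0, sigma=1.5, size=(trials, k))
    return runs.min(axis=1)

for k in (1, 2, 4, 8):
    t = portfolio_times(k)
    print(f"k={k}: mean={t.mean():6.3f}  std={t.std():6.3f}")
# both the expected time and its spread shrink as the portfolio grows
```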

  15. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept, introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits per base, where the existing best methods could not achieve a ratio below 1.72 bits per base.
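
    For context on the reported ratios: the naive baseline stores each of the four bases in exactly 2 bits, so ratios below 2 bits per base (such as the 1.58 reported here) require exploiting repeats. A minimal sketch of the 2-bit baseline only, not the DNABIT Compress repeat coding:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base (baseline, no repeat coding);
    a 4-byte length prefix records the number of bases."""
    bits = 0
    for b in seq:
        bits = (bits << 2) | CODE[b]
    return len(seq).to_bytes(4, "big") + bits.to_bytes((2 * len(seq) + 7) // 8, "big")

def unpack(blob: bytes) -> str:
    n = int.from_bytes(blob[:4], "big")
    bits = int.from_bytes(blob[4:], "big")
    return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

seq = "ACGTACGTTTGACA"
packed = pack(seq)
assert unpack(packed) == seq
print(f"{8 * (len(packed) - 4) / len(seq):.2f} bits/base")  # 2.0 up to padding
```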

  16. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
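
    The record's contribution is the machine-checked proof, but the algorithm itself fits in a few dozen lines. Below is a textbook sketch over exact rationals, recomputing Gram-Schmidt after every basis change for clarity rather than efficiency; this is a plain Python illustration, not the verified formalization.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(b):
    """Return the GS coefficients mu and squared norms B of the
    orthogonalized vectors b*."""
    n = len(b)
    mu = [[Fraction(0)] * n for _ in range(n)]
    B = [Fraction(0)] * n
    bstar = []
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / B[j]
            v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
        bstar.append(v)
        B[i] = dot(v, v)
    return mu, B

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer lattice basis."""
    b = [[Fraction(x) for x in v] for v in basis]
    n, k = len(b), 1
    mu, B = gram_schmidt(b)
    while k < n:
        for j in range(k - 1, -1, -1):               # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                mu, B = gram_schmidt(b)
        if B[k] >= (delta - mu[k][k - 1] ** 2) * B[k - 1]:   # Lovász condition
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]          # swap and step back
            mu, B = gram_schmidt(b)
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
# short, nearly orthogonal basis: [[0, 1, 0], [1, 0, 1], [-1, 0, 1]]
```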

  17. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm that describes the object's operation. An algorithmic model is understood as a formalized description, by a subject-matter specialist, of the scenario of the simulated process, whose structure corresponds to the structure of the causal and temporal relationships between the events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. Normally they are defined as weighted finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive: the class of algorithms it can represent covers virtually all algorithms. Existing modeling-automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their generality, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of plans and network schedules is relevant and useful. Many systems exist for computing network schedules; however, their monitoring is based on the analysis of gaps and deadlines in the schedules, with no predictive analysis of schedule execution. The library described here is designed to build such predictive models: from the specified source data one obtains a set of projections, from which one is chosen and taken as the new plan.

  18. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of leaders in social groups is used as the inspiration for an evolutionary technique designed around a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and geometric structures of Lennard-Jones clusters, is used, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.

  19. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  20. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and when the initial positions are bad, the k-means algorithm can easily converge to a poor local optimum. In this paper, we modified the global k-means algorithm to first eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, proposing the global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
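
    The incremental core of the global k-means family can be sketched with scikit-learn: to grow from k-1 to k clusters, every data point (or a sample of them) is tried as the new center alongside the already-found centers, and the k-means run with the lowest resulting error wins. The MinMax clustering error and the singleton-elimination step of the paper are omitted; this is only the base global k-means procedure the paper builds on.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def global_kmeans(X, k_max):
    """Incremental (global) k-means: deterministic given X. Trying every
    point as the new center is O(n) k-means runs per k; in practice a
    sample of candidate points is often used instead."""
    centers = X.mean(axis=0, keepdims=True)          # the k = 1 solution
    best = None
    for k in range(2, k_max + 1):
        best = None
        for x in X:                                  # try each point as new center
            init = np.vstack([centers, x])
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if best is None or km.inertia_ < best.inertia_:
                best = km
        centers = best.cluster_centers_
    return best

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)
model = global_kmeans(X, k_max=4)
print(model.inertia_, model.cluster_centers_.round(2))
```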