WorldWideScience

Sample records for cuny algorithms background

  1. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map captured by an APS detector contains several types of noise that strongly affect the accuracy of the centroid calculation. Based on an analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid from a background forecast is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation but also requires no calibration data memory. The algorithm has been applied successfully in a star tracker.
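The background-forecast idea can be illustrated with a toy sketch (not the authors' algorithm): estimate a background level, subtract it, and take the intensity-weighted centroid of a window around the brightest pixel. The border-median background estimate and the window size are illustrative assumptions.

```python
import numpy as np

def centroid_with_background_forecast(img, win=2):
    """Estimate the background as the median of the image border (a crude
    stand-in for a background forecast), subtract it, and compute the
    intensity-weighted centroid of a window around the brightest pixel."""
    border = np.concatenate([img[0, :], img[-1, :], img[:, 0], img[:, -1]])
    background = np.median(border)
    clean = np.clip(img - background, 0, None)
    # locate the brightest pixel and take a (2*win+1)^2 window around it
    y0, x0 = np.unravel_index(np.argmax(clean), clean.shape)
    ys = slice(max(y0 - win, 0), min(y0 + win + 1, img.shape[0]))
    xs = slice(max(x0 - win, 0), min(x0 + win + 1, img.shape[1]))
    patch = clean[ys, xs]
    yy, xx = np.mgrid[ys, xs]
    total = patch.sum()
    return (yy * patch).sum() / total, (xx * patch).sum() / total
```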

  2. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

Computers and computerized machines have penetrated nearly all aspects of our lives, which raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which cannot keep pace with the latest technology. Hand gestures have become one of the most attractive alternatives to traditional HCI techniques. This paper proposes a new hand gesture detection system for human-computer interaction using real-time video streaming. This is achieved by removing the background with an average background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm achieves a high detection rate and a short recognition time under different lighting changes, scales, rotations, and backgrounds.
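A minimal sketch of the average-background idea the abstract describes, assuming a simple exponential running average and a fixed difference threshold (both illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # exponential running average of the scene
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    # pixels that deviate strongly from the background model
    return np.abs(frame.astype(float) - bg) > thresh
```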

  3. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro...

  4. Analysis of coincidence γ-ray spectra using advanced background elimination, unfolding and fitting algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Morhac, M. E-mail: fyzimiro@savba.sk; fyzimiro@flnr.jinr.ru; Matousek, V. E-mail: matousek@savba.sk; Kliman, J.; Krupa, L.; Jandel, M.

    2003-04-21

    Efficient algorithms for analyzing multiparameter γ-ray spectra are presented. They allow one to search for peaks, separate peaks from the background, improve the resolution, and fit one-, two- and three-parameter γ-ray spectra.
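The peak/background separation step can be sketched with a simplified iterative peak-clipping estimator in the spirit of the SNIP family of background estimators used in gamma spectroscopy; this is an illustrative stand-in, not the authors' algorithm:

```python
import numpy as np

def clip_background(spectrum, iterations=20):
    """Iterative peak clipping: replace each channel by the minimum of its
    value and the average of its neighbours at growing distance p, so that
    narrow peaks are clipped away while the smooth background survives."""
    y = np.asarray(spectrum, dtype=float).copy()
    n = len(y)
    for p in range(1, iterations + 1):
        y2 = y.copy()
        for i in range(p, n - p):
            y2[i] = min(y[i], 0.5 * (y[i - p] + y[i + p]))
        y = y2
    return y
```

Subtracting the clipped estimate from the raw spectrum leaves the peaks on an approximately flat baseline.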

  5. The New Community College at CUNY and the Common Good

    Science.gov (United States)

    Rosenthal, Bill; Schnee, Emily

    2013-01-01

    On a prime site in Manhattan, a block from the lions guarding the New York Public Library, the City University of New York (CUNY) opened its newest community college in the fall of 2012. Designed to achieve greater student success, as measured through increased graduation rates, the New Community College at CUNY (NCC) is seen as a beacon of hope…

  6. A semi-supervised classification algorithm using the TAD-derived background as training data

    Science.gov (United States)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters so that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest neighbor graph model of the data, along with a spectral connected-components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
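The MDM step is simple enough to sketch directly: assign each pixel to the class whose mean spectrum is nearest. The helper name and Euclidean metric are illustrative assumptions.

```python
import numpy as np

def mdm_classify(pixels, class_means):
    """Minimum Distance to the Mean: assign each pixel (row vector) to the
    class whose mean spectrum is nearest in Euclidean distance."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```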

  7. Background Traffic-Based Retransmission Algorithm for Multimedia Streaming Transfer over Concurrent Multipaths

    Directory of Open Access Journals (Sweden)

    Yuanlong Cao

    2012-01-01

    Content-rich multimedia streaming will be among the most attractive services in next-generation networks. With its ability to distribute data across multiple end-to-end paths based on SCTP's multihoming feature, concurrent multipath transfer SCTP (CMT-SCTP) has been regarded as the most promising technology for efficient multimedia streaming transmission. However, current research on CMT-SCTP mainly focuses on algorithms related to data delivery performance and seldom considers background traffic factors. In fact, the background traffic of realistic network environments has an important impact on the performance of CMT-SCTP. In this paper, we first investigate the effect of background traffic on the performance of CMT-SCTP using a realistic simulation topology with reasonable background traffic in NS2. Then, based on the locality of background flows, an improved retransmission algorithm, named RTX_CSI, is proposed to achieve higher average throughput and a better quality of experience for users of multimedia streaming services.

  8. An improved algorithm of laser spot center detection in strong noise background

    Science.gov (United States)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering is first used to remove noise while preserving the edge details of the image. Secondly, the laser facula image is binarized to extract the target from the background. Morphological filtering is then performed to eliminate noise points inside and outside the spot. Finally, the edge of the preprocessed facula image is extracted and the laser spot center is obtained using the circle fitting method. On the foundation of the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
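The final circle-fitting step can be sketched with an algebraic (Kasa) least-squares fit to the extracted edge points; the abstract does not say which circle-fit variant the authors use, so this choice is an assumption.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2a*x + 2b*y + c for center (a, b) and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r
```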

  9. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. 
The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with other state-of-the-art anomaly detection methods, and is easy to implement.
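The paired-dataset k-nn scoring idea can be sketched as follows, as a simplified stand-in for RBRSE's score estimation step (function name and Euclidean metric are illustrative):

```python
import numpy as np

def knn_anomaly_score(pixels, background, k=3):
    """Score each pixel by its distance to the k-th nearest background
    sample: pixels far from the background set get high anomaly scores."""
    d = np.linalg.norm(pixels[:, None, :] - background[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k - 1]
```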

  10. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were performed for each background type, and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms, with edge pixels showing higher noise magnitude compared to pixels in more homogeneous regions.
For pixels in uniform regions, noise magnitude was
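The repeated-acquisition noise assessment can be sketched as follows, assuming the ensemble mean approximates the noise-free signal; the FFT-based spectrum here omits the pixel-size normalization factors of a calibrated noise power spectrum.

```python
import numpy as np

def noise_stats(repeats):
    """Given a stack of repeated acquisitions (N, H, W), estimate the
    per-pixel noise magnitude and a simple (unnormalized) 2-D noise
    power spectrum by averaging over the realizations."""
    # remove the ensemble mean (the "signal") to isolate the noise
    noise = repeats - repeats.mean(axis=0, keepdims=True)
    sigma_map = noise.std(axis=0)          # spatial map of noise magnitude
    nps = np.mean(np.abs(np.fft.fft2(noise, axes=(-2, -1)))**2, axis=0)
    return sigma_map, nps
```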

  11. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    International Nuclear Information System (INIS)

    Solomon, Justin; Samei, Ehsan

    2014-01-01

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were performed for each background type, and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms, with edge pixels showing higher noise magnitude compared to pixels in more homogeneous regions.
For pixels in uniform regions, noise magnitude was

  12. Mechanical properties of highly textured Cu/Ni multilayers

    International Nuclear Information System (INIS)

    Liu, Y.; Bufford, D.; Wang, H.; Sun, C.; Zhang, X.

    2011-01-01

    We report on the synthesis of highly (1 1 1) and (1 0 0) textured Cu/Ni multilayers with individual layer thicknesses, h, varying from 1 to 200 nm. When h decreases to 5 nm or less, X-ray diffraction spectra show epitaxial growth of the Cu/Ni multilayers. High-resolution transmission electron microscopy studies show the coexistence of nanotwins and coherent layer interfaces in highly (1 1 1) textured Cu/Ni multilayers with smaller h. The hardness of the multilayer films increases with decreasing h, approaches a maximum at h of a few nanometers, and shows softening thereafter at smaller h. The influence of layer interfaces and twin interfaces on the strengthening mechanisms of the multilayers, and the formation of twins in the Ni layers, are discussed.

  13. A Low-Complexity Algorithm for Static Background Estimation from Cluttered Image Sequences in Surveillance Contexts

    Directory of Open Access Journals (Sweden)

    Reddy Vikas

    2011-01-01

    For the purposes of foreground estimation, the true background model is unavailable in many practical circumstances and needs to be estimated from cluttered image sequences. We propose a sequential technique for static background estimation in such conditions, with low computational and memory requirements. Image sequences are analysed on a block-by-block basis. For each block location, a representative set is maintained which contains distinct blocks obtained along its temporal line. The background estimation is carried out in a Markov random field framework, where the optimal labelling solution is computed using iterated conditional modes. The clique potentials are computed from the combined frequency response of the candidate block and its neighbourhood. It is assumed that the most appropriate block results in the smoothest response, indirectly enforcing the spatial continuity of structures within a scene. Experiments on real-life surveillance videos demonstrate that the proposed method obtains considerably better background estimates (both qualitatively and quantitatively) than median filtering and the recently proposed "intervals of stable intensity" method. Further experiments on the Wallflower dataset suggest that combining the proposed method with a foreground segmentation algorithm results in improved foreground segmentation.
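A much-simplified sketch of the block-wise idea, replacing the MRF labelling and clique potentials with a plain "most frequently observed block" rule (an illustrative simplification, not the paper's method):

```python
import numpy as np
from collections import Counter

def estimate_background(frames, block=4):
    """For each block location, collect the blocks observed along the
    temporal line and keep the most frequently seen one as the static
    background estimate. Assumes H and W are multiples of `block`."""
    T, H, W = frames.shape
    bg = np.zeros((H, W), dtype=frames.dtype)
    for i in range(0, H, block):
        for j in range(0, W, block):
            blocks = [frames[t, i:i+block, j:j+block].tobytes() for t in range(T)]
            most_common = Counter(blocks).most_common(1)[0][0]
            bg[i:i+block, j:j+block] = np.frombuffer(
                most_common, dtype=frames.dtype).reshape(block, block)
    return bg
```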

  14. A novel robust and efficient algorithm for charge particle tracking in high background flux

    International Nuclear Information System (INIS)

    Fanelli, C; Cisbani, E; Dotto, A Del

    2015-01-01

    The high luminosity that will be reached in the new generation of high-energy particle and nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information (time and charge) is exploited to minimize the number of hits to associate; (ii) a dedicated neural network (NN) has been designed for fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)
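The Kalman-plus-RTS refinement of step (iii) can be sketched in one dimension, assuming a scalar random-walk state model (the real tracker state is of course multidimensional and detector-specific):

```python
import numpy as np

def kalman_rts(z, q=1e-3, r=0.25):
    """Scalar random-walk Kalman filter followed by a Rauch-Tung-Striebel
    smoother: forward filtering pass, then a backward smoothing pass."""
    n = len(z)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered state and variance
    xp = np.zeros(n); pp = np.zeros(n)   # predicted state and variance
    x, p = z[0], 1.0
    for k in range(n):
        xp[k], pp[k] = x, p + q          # predict (random-walk dynamics)
        kgain = pp[k] / (pp[k] + r)      # Kalman gain
        x = xp[k] + kgain * (z[k] - xp[k])
        p = (1 - kgain) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):       # RTS backward pass
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs
```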

  15. The CUNY Fatherhood Academy: A Qualitative Evaluation. Research Report

    Science.gov (United States)

    McDaniel, Marla; Simms, Margaret C.; Monson, William; de Leon, Erwin

    2015-01-01

    Knowing the economic challenges young fathers without postsecondary education face in providing for their families, New York City's Young Men's Initiative launched a fatherhood program housed in LaGuardia Community College in spring 2012. The CUNY Fatherhood Academy (CFA) aims to connect young fathers to academic and employment opportunities while…

  16. The CUNY Fatherhood Academy: A Qualitative Evaluation. Executive Summary

    Science.gov (United States)

    McDaniel, Marla; Simms, Margaret C.; Monson, William; de Leon, Erwin

    2015-01-01

    Knowing the economic challenges young fathers without postsecondary education face in providing for their families, New York City's Young Men's Initiative launched a fatherhood program housed in LaGuardia Community College in spring 2012. The CUNY Fatherhood Academy (CFA) aims to connect young fathers to academic and employment opportunities while…

  17. A multilevel system of algorithms for detecting and isolating signals in a background of noise

    Science.gov (United States)

    Gurin, L. S.; Tsoy, K. A.

    1978-01-01

    Signal information is first processed with the help of lower-level algorithms; on the basis of that processing, part of the information is then subjected to further processing with more precise algorithms. Such a system of algorithms is studied, a comparative evaluation of a series of lower-level algorithms is given, and the corresponding higher-level algorithms are characterized.
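The two-level idea can be sketched as a cheap first-level gate followed by a more precise second-level test; both tests and thresholds here are illustrative assumptions, not the report's algorithms:

```python
import numpy as np

def two_stage_detect(signal, template, coarse_thresh, fine_thresh):
    """Level 1: a cheap local-energy gate selects candidate positions.
    Level 2: only candidates are passed to the more precise (and more
    expensive) normalized-correlation test against the template."""
    m = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    hits = []
    for i in range(len(signal) - m + 1):
        seg = signal[i:i + m]
        if np.sum(seg**2) < coarse_thresh:      # level 1: energy gate
            continue
        s = seg - seg.mean()
        corr = float(s @ t) / (np.linalg.norm(s) + 1e-12)
        if corr > fine_thresh:                  # level 2: precise test
            hits.append(i)
    return hits
```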

  18. Low temperature interdiffusion in Cu/Ni thin films

    International Nuclear Information System (INIS)

    Lefakis, H.; Cain, J.F.; Ho, P.S.

    1983-01-01

    Interdiffusion in Cu/Ni thin films was studied by means of Auger electron spectroscopy in conjunction with Ar+ ion sputter profiling. The experimental conditions used aimed at simulating those of typical chip-packaging fabrication processes. The Cu/Ni couple (from 10 μm to 60 nm thick) was produced by sequential vapor deposition on fused-silica substrates at 360, 280 and 25 °C in 10^-6 Torr vacuum. Diffusion anneals were performed between 280 and 405 °C for times up to 20 min. Such conditions define grain boundary diffusion in the regimes of B- and C-type kinetics. The data were analyzed according to the Whipple-Suzuoka model. Some deviations from the assumptions of this model, as occurred in the present study, are discussed but cannot fully account for the typical data scatter. The grain boundary diffusion coefficients were determined, allowing calculation of the respective permeation distances. (Auth.)

  19. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    Science.gov (United States)

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H2 estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.
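The whitening process can be sketched as follows, assuming the noise covariance is known or estimated; the symmetric inverse-square-root transform is one common choice and is not necessarily the one the paper uses:

```python
import numpy as np

def whiten(x, noise_cov):
    """Transform samples so the background noise becomes unit-variance and
    uncorrelated: multiply by the inverse matrix square root of the noise
    covariance (computed via its eigendecomposition)."""
    evals, evecs = np.linalg.eigh(noise_cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    return x @ W.T
```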

  20. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm

    Directory of Open Access Journals (Sweden)

    Yung-Yue Chen

    2018-05-01

    Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H2 estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.

  1. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    Science.gov (United States)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has important scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of the moving target and the irrelevance of noise in sequential images, implemented with OpenCV. Firstly, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, we can detect the infrared moving target more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking, when transplanting the OpenCV algorithms to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray images into black-and-white images to provide a better condition for detection in the image sequences. Finally, according to the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, decrease the area, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
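The combined frame-difference/background-subtraction scheme with adaptive background updating can be sketched as follows; the thresholds and learning rate are illustrative, not the paper's values:

```python
import numpy as np

def detect_moving(prev, curr, bg, alpha=0.05, t_diff=20, t_bg=20):
    """Flag pixels that change both frame-to-frame AND relative to the
    background model, then adaptively update the background only where
    no motion was detected (so moving targets do not pollute it)."""
    diff_mask = np.abs(curr.astype(float) - prev) > t_diff
    bg_mask = np.abs(curr.astype(float) - bg) > t_bg
    moving = diff_mask & bg_mask
    new_bg = np.where(moving, bg, (1 - alpha) * bg + alpha * curr)
    return moving, new_bg
```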

  2. Enhanced Oxidation-Resistant Cu@Ni Core-Shell Nanoparticles for Printed Flexible Electrodes.

    Science.gov (United States)

    Kim, Tae Gon; Park, Hye Jin; Woo, Kyoohee; Jeong, Sunho; Choi, Youngmin; Lee, Su Yeon

    2018-01-10

    In this work, the fabrication and application of highly conductive, robust, flexible, and oxidation-resistant Cu-Ni core-shell nanoparticle (NP)-based electrodes are reported. Cu@Ni core-shell NPs with a tunable Ni shell thickness were synthesized by varying the Cu/Ni molar ratios in the precursor solution. Through continuous spray coating and flash photonic sintering without an inert atmosphere, large-area Cu@Ni NP-based conductors were fabricated on various polymer substrates. These NP-based electrodes demonstrate a low sheet resistance of 1.3 Ω sq^-1 under an optical energy dose of 1.5 J cm^-2. In addition, they exhibit highly stable sheet resistances (ΔR/R_0 …). A flexible heater fabricated from the Cu@Ni film is demonstrated, which shows uniform heat distribution and stable temperature compared to those of a pure Cu film.

  3. Phase unwrapping algorithm based on multi-frequency fringe projection and fringe background for fringe projection profilometry

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Zhao, Hong; Gu, Feifei; Ma, Yueyang

    2015-01-01

    A phase unwrapping algorithm specially designed for phase-shifting fringe projection profilometry (FPP) is proposed. It combines a revised dual-frequency fringe projection algorithm with a proposed fringe-background-based quality-guided phase unwrapping algorithm (FB-QGPUA). The phase demodulated from the high-frequency fringe patterns is partially unwrapped using that demodulated from the low-frequency ones. FB-QGPUA is then adopted to further unwrap the partially unwrapped phase. The influence of phase error on the measurement is investigated. A strategy for selecting the fringe pitch is given. Experiments demonstrate that the proposed method is very robust and efficient. (paper)
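The dual-frequency unwrapping step can be sketched as follows, assuming the low-frequency phase is already free of wraps and `ratio` is the frequency ratio between the two fringe patterns (a textbook temporal unwrapping rule, not the paper's full FB-QGPUA procedure):

```python
import numpy as np

def unwrap_with_low_frequency(phi_high, phi_low, ratio):
    """Unwrap the wrapped high-frequency phase using the unwrapped
    low-frequency phase: the fringe order k is the rounded difference
    between the scaled low-frequency phase and the wrapped phase."""
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```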

  4. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.

  5. Thermoelasticity and interdiffusion in CuNi multilayers

    International Nuclear Information System (INIS)

    Benoudia, M.C.; Gao, F.; Roussel, J.M.; Labat, S.; Gailhanou, M.; Thomas, O.; Beke, D.L.; Erdelyi, Z.; Langer, G.A.; Csik, A.; Kis-Varga, M.

    2012-01-01

    The idea of observing artificial metallic multilayers with X-ray diffraction techniques to study interdiffusion phenomena dates back to the work of DuMond and Youtz. Interestingly, these pioneering contributions even suggested that the approach could be used to measure the concentration dependence of the diffusion coefficient. This remark is precisely the subject of the present work: we aim to revisit this issue in light of recent atomistic simulation results obtained for coherent CuNi multilayers. More generally, CuNi multilayers have been extensively studied for their magnetic, mechanical, and optical properties. These physical properties depend critically on interfaces and require good control over the evolution of composition and strain fields under heat treatment. Understanding how interdiffusion proceeds in these nanosystems should therefore improve these practical aspects. From a theoretical viewpoint, these synthetic modulated structures have also been used as valuable model systems to test various diffusion theories accounting in particular for the influence of the alloying energy, the coherency strain, and the local concentration. Nowadays, this field remains active and has been extended with the development of atomistic simulations and many microscopy techniques, such as atom probe tomography, which give details on the intermixing mechanisms. We have performed X-ray diffraction experiments on coherent CuNi multilayers to probe thermoelasticity and interdiffusion in these samples. Kinetic mean-field simulations combined with modeling of the X-ray spectra were also carried out to rationalize the experimental results. We have shown that classical thermoelastic arguments combined with bulk data can be used to model the X-ray scattered intensity of annealed coherent CuNi multilayers. This result provides a valuable framework for analyzing the evolution of the concentration profiles at higher temperature.
The typical coherent

  6. Application of Monte Carlo algorithms to the Bayesian analysis of the Cosmic Microwave Background

    Science.gov (United States)

    Jewell, J.; Levin, S.; Anderson, C. H.

    2004-01-01

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; nonhomogeneous, correlated instrumental noise; and foreground emission are problems of central importance for the extraction of cosmological information from the cosmic microwave background (CMB).

  7. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and achieves speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC-90nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
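
    The per-pixel idea behind such Gaussian background models can be illustrated with a deliberately simplified NumPy sketch: a single running Gaussian per pixel rather than the full OpenCV mixture, with illustrative threshold and learning-rate values. This is a conceptual toy, not the paper's hardware implementation.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a simplified per-pixel Gaussian background model.

    frame, mean, var are float arrays of identical shape; returns the
    foreground mask and the updated model parameters.
    """
    diff = frame - mean
    # A pixel is foreground when it deviates by more than k standard deviations.
    foreground = diff ** 2 > (k ** 2) * var
    # Only background pixels update the running Gaussian.
    bg = ~foreground
    mean = np.where(bg, mean + alpha * diff, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    return foreground, mean, np.maximum(var, 1e-6)

# Simulated 4x4 grayscale stream: static background at level 100 plus noise.
rng = np.random.default_rng(0)
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 4.0)
for _ in range(50):  # let the model settle on the background statistics
    frame = 100 + rng.normal(0, 1, (4, 4))
    fg, mean, var = update_background(frame, mean, var)
frame = 100 + rng.normal(0, 1, (4, 4))
frame[1, 2] = 200.0  # a bright object enters one pixel
fg, mean, var = update_background(frame, mean, var)
print(fg[1, 2])
```

    The OpenCV GMM extends this by keeping several weighted Gaussians per pixel, which lets the model absorb repetitive background motion (swaying leaves, flicker) that a single Gaussian would flag as foreground.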

  8. Background for the research and subsequent developments in the research program

    International Nuclear Information System (INIS)

    This report summarizes the historical background of the research and its subsequent development. Various aspects of the research were supported by the USAEC, ERDA, CCNY, CUNY, MHMC, and personally by the principal investigator.

  9. Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background

    Science.gov (United States)

    Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.

    2006-01-01

    A viewgraph presentation is given reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.
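
    Gibbs sampling, the engine behind this approach, draws each unknown in turn from its conditional distribution given the others. A generic toy example on a bivariate Gaussian illustrates the mechanics (this is a standard textbook sketch, not the authors' CMB code):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng):
    """Gibbs sampler for a zero-mean, unit-variance bivariate Gaussian
    with correlation rho. Each coordinate is drawn from its exact
    conditional given the other: x | y ~ N(rho * y, 1 - rho^2)."""
    x, y = 0.0, 0.0
    samples = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)  # draw x | y
        y = rng.normal(rho * x, sd)  # draw y | x
        samples[i] = (x, y)
    return samples

rng = np.random.default_rng(1)
samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000, rng=rng)
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])
```

    In the CMB setting the two alternating draws are (schematically) the sky signal given the power spectrum and the power spectrum given the sky signal, but the alternation itself has exactly this structure.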

  10. Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images

    NARCIS (Netherlands)

    Hertem, van T.; Alchanatis, V.; Antler, A.; Maltz, E.; Halachmi, I.; Schlageter Tello, A.A.; Lokhorst, C.; Viazzi, S.; Romanini, C.E.B.; Pluk, A.; Bahr, C.; Berckmans, D.

    2013-01-01

    Computer vision techniques are a means to extract individual animal information such as weight, activity and calving time in intensive farming. Automatic detection requires adequate image pre-processing such as segmentation to precisely distinguish the animal from its background. For some analyses

  11. A Dynamic Enhancement With Background Reduction Algorithm: Overview and Application to Satellite-Based Dust Storm Detection

    Science.gov (United States)

    Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.

    2017-12-01

    This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility for automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.
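
    The use of a [0, 1] confidence factor as a scaling weight in false-color imagery can be sketched as a simple per-pixel alpha blend. The snippet below is an illustrative guess at the idea, with a hypothetical dust_color; it is not the operational DEBRA code.

```python
import numpy as np

def confidence_enhance(background_rgb, conf, dust_color=(1.0, 0.8, 0.3)):
    """Blend a false-color background image toward a highlight color,
    weighted by the per-pixel detection confidence in [0, 1]:
    conf = 0 keeps the background pixel, conf = 1 paints the dust color."""
    conf = np.clip(conf, 0.0, 1.0)[..., None]  # broadcast over RGB channels
    return (1.0 - conf) * background_rgb + conf * np.asarray(dust_color)

# A 1x2 background image: one clear pixel, one pixel flagged as dust.
background = np.array([[[0.2, 0.3, 0.4], [0.2, 0.3, 0.4]]])
conf = np.array([[0.0, 1.0]])
enhanced = confidence_enhance(background, conf)
print(enhanced[0, 1])
```

    Because the blend is continuous, marginal detections appear as faint tinting over the scene rather than as a hard binary mask, which is what lets analysts read the detection in its meteorological and topographic context.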

  12. CO2 activation on bimetallic CuNi nanoparticles

    Directory of Open Access Journals (Sweden)

    Natalie Austin

    2016-10-01

    Full Text Available Density functional theory calculations have been performed to investigate the structural, electronic, and CO2 adsorption properties of 55-atom bimetallic CuNi nanoparticles (NPs) in core-shell and decorated architectures, as well as of their monometallic counterparts. Our results revealed that, with respect to the monometallic Cu55 and Ni55 parents, the formation of decorated Cu12Ni43 and core-shell Cu42Ni13 is energetically favorable. We found that CO2 chemisorbs on monometallic Ni55, core-shell Cu13Ni42, and decorated Cu12Ni43 and Cu43Ni12, whereas it physisorbs on monometallic Cu55 and core-shell Cu42Ni13. The presence of surface Ni on the NPs is key to strongly adsorbing and activating the CO2 molecule (linear-to-bent transition and elongation of the C=O bonds). This activation occurs through a charge transfer from the NPs to the CO2 molecule, where the local metal d-orbital density localization on surface Ni plays a pivotal role. This work identifies insightful structure-property relationships for CO2 activation and highlights the importance of keeping a balance between NP stability and CO2 adsorption behavior in designing catalytic bimetallic NPs that activate CO2.

  13. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
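
    Euclid's algorithm, referred to in the snippet, computes the greatest common divisor by repeatedly replacing the pair (a, b) with (b, a mod b) until the remainder vanishes:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), ending when b = 0."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 252 -> (105, 42) -> (42, 21) -> (21, 0), so 21
```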

  14. Fabrication of a Cu/Ni stack in supercritical carbon dioxide at low-temperature

    Energy Technology Data Exchange (ETDEWEB)

    Rasadujjaman, Md, E-mail: rasadphy@duet.ac.bd [Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu, Yamanashi 400-8511 (Japan); Department of Physics, Dhaka University of Engineering & Technology, Gazipur 1700 (Bangladesh); Watanabe, Mitsuhiro [Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu, Yamanashi 400-8511 (Japan); Sudoh, Hiroshi; Machida, Hideaki [Gas-Phase Growth Ltd., 2-24-16 Naka, Koganei, Tokyo 184-0012 (Japan); Kondoh, Eiichi [Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu, Yamanashi 400-8511 (Japan)

    2015-09-30

    We report the low-temperature deposition of Cu on a Ni-lined substrate in supercritical carbon dioxide. A novel Cu(I) amidinate precursor was used to reduce the deposition temperature. From the temperature dependence of the growth rate, the activation energy for Cu growth on the Ni film was determined to be 0.19 eV. The films and interfaces were characterized by Auger electron spectroscopy. At low temperature (140 °C), we successfully deposited a Cu/Ni stack with a sharp Cu/Ni interface. The stack had a high adhesion strength (> 1000 mN) according to microscratch testing. The high adhesion strength originated from strong interfacial bonding between the Cu and the Ni. However, at a higher temperature (240 °C), significant interdiffusion was observed and the adhesion became weak. - Highlights: • Cu/Ni stack fabricated in supercritical CO₂ at low temperature. • A novel Cu(I) amidinate precursor was used to reduce the deposition temperature. • Adhesion strength of Cu/Ni stack improved dramatically. • Fabricated Cu/Ni stack is suitable for Cu interconnections in microelectronics.

  15. Copper and CuNi alloys substrates for HTS coated conductor applications protected from oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Segarra, M; Diaz, J; Xuriguera, H; Chimenos, J M; Espiell, F [Dept. of Chemical Engineering and Metallurgy, Univ. of Barcelona, Barcelona (Spain); Miralles, L [Lab. d' Investigacio en Formacions Geologiques. Dept. of Petrology, Geochemistry and Geological Prospecting, Univ. of Barcelona, Barcelona (Spain); Pinol, S [Inst. de Ciencia de Materials de Barcelona, Bellaterra (Spain)

    2003-07-01

    Copper is an interesting substrate for HTS coated conductors for its low cost compared to other metallic substrates, and for its low resistivity. Nevertheless, mechanical properties and resistance to oxidation should be improved in order to use it as substrate for YBCO deposition by non-vacuum techniques. Therefore, different cube textured CuNi tapes were prepared by RABIT as possible substrates for deposition of high critical current density YBCO films. Under the optimised conditions of deformation and annealing, all the studied CuNi alloys (2%, 5%, and 10% Ni) presented (100)⟨001⟩ cube texture which is compatible for YBCO deposition. Textured CuNi alloys present higher tensile strength than pure copper. Oxidation resistance of CuNi tapes under different oxygen atmospheres was also studied by thermogravimetric analysis and compared to pure copper tapes. Although the presence of nickel improves mechanical properties of annealed copper, it does not improve its oxidation resistance. However, when a chromium buffer layer is electrodeposited on the tape, oxygen diffusion is slowed down. Chromium is, therefore, useful for protecting copper and CuNi alloys from oxidation although its recrystallisation texture, (110), is not suitable for coated conductors. (orig.)

  16. In situ observation of Cu-Ni alloy nanoparticle formation by X-ray diffraction, X-ray absorption spectroscopy, and transmission electron microscopy: Influence of Cu/Ni ratio

    DEFF Research Database (Denmark)

    Wu, Qiongxiao; Duchstein, Linus Daniel Leonhard; Chiarello, Gian Luca

    2014-01-01

    Silica-supported, bimetallic Cu-Ni nanomaterials were prepared with different ratios of Cu to Ni by incipient wetness impregnation without a specific calcination step before reduction. Different in situ characterization techniques, in particular transmission electron microscopy (TEM), X-ray diffraction (XRD), and X-ray absorption spectroscopy (XAS), were applied to follow the reduction and alloying process of Cu-Ni nanoparticles on silica. In situ reduction of Cu-Ni samples with structural characterization by combined synchrotron XRD and XAS reveals a strong interaction between Cu and Ni species, which results in improved reducibility of the Ni species compared with monometallic Ni. At high Ni concentrations silica-supported Cu-Ni alloys form a homogeneous solid solution of Cu and Ni, whereas at lower Ni contents Cu and Ni are partly segregated and form metallic Cu and Cu-Ni alloy phases. Under ...

  17. Study on the characteristics of the impingement erosion-corrosion for Cu-Ni Alloy sprayed coating(I)

    International Nuclear Information System (INIS)

    Lee, Sang Yeol; Lim, Uh Joh; Yun, Byoung Du

    1998-01-01

    Impingement erosion-corrosion tests and electrochemical corrosion tests were carried out in tap water (5000 Ω·cm) and seawater (25 Ω·cm) on carbon steel thermally spray-coated with a Cu-Ni alloy. The impingement erosion-corrosion behavior and electrochemical corrosion characteristics of the substrate (SS41) and the Cu-Ni thermal spray coating were investigated, and the erosion-corrosion control efficiency of the Cu-Ni coating relative to the substrate was estimated quantitatively. Main results obtained are as follows: 1) At a flow velocity of 13 m/s, impingement erosion-corrosion of the Cu-Ni coating is controlled by the electrochemical corrosion factor rather than by mechanical erosion. 2) The corrosion potential of the Cu-Ni coating is more noble than that of the substrate, and the current density of the Cu-Ni coating at the corrosion potential is lower than that of the substrate. 3) The erosion-corrosion control efficiency of the Cu-Ni coating relative to the substrate is excellent in tap water, a high-specific-resistance solution, but diminishes in seawater, which has a low specific resistance. 4) The corrosion control efficiency of the Cu-Ni coating relative to the substrate appears to be higher in seawater than in tap water.

  18. 75 FR 62838 - Award of a Single-Source Expansion Supplement to the Research Foundation of CUNY on Behalf of...

    Science.gov (United States)

    2010-10-13

    ...-Source Expansion Supplement to the Research Foundation of CUNY on Behalf of Hunter College School of... single-source program expansion supplement to the Research Foundation of CUNY on behalf of Hunter College... removal, of the relative's options to become a placement resource for the child. The supplemental funding...

  19. A diffuse neutron scattering study of clustering kinetics in Cu-Ni alloys

    International Nuclear Information System (INIS)

    Vrijen, J.; Radelaar, S.; Schwahn, D.

    1977-01-01

    Diffuse scattering of thermal neutrons was used to investigate the kinetics of clustering in Cu-Ni alloys. In order to optimize the experimental conditions, the isotopes 65Cu and 62Ni were alloyed. The time evolution of the diffuse scattered intensity at 400 °C has been measured for eight Cu-Ni alloys, varying in composition between 30 and 80 at.% Ni. The relaxation of the so-called null matrix, containing 56.5 at.% Ni, has also been investigated at 320, 340, 425 and 450 °C. Using Cook's model, information has been deduced from all these measurements about diffusion at low temperatures and about thermodynamic properties of the Cu-Ni system. It turns out that Cook's model is not sufficiently detailed for an accurate description of the initial stages of these relaxations.

  20. CuNi Nanoparticles Assembled on Graphene for Catalytic Methanolysis of Ammonia Borane and Hydrogenation of Nitro/Nitrile Compounds

    International Nuclear Information System (INIS)

    Yu, Chao

    2017-01-01

    Here we report a solution phase synthesis of 16 nm CuNi nanoparticles (NPs) with Cu/Ni composition control. These NPs are assembled on graphene (G) and show Cu/Ni composition-dependent catalysis for methanolysis of ammonia borane (AB) and hydrogenation of aromatic nitro (nitrile) compounds to primary amines in methanol at room temperature. Among five different CuNi NPs studied, the G-Cu36Ni64 NPs are the best catalyst for both AB methanolysis (TOF = 49.1 mol H₂ (mol CuNi)⁻¹ min⁻¹ and Ea = 24.4 kJ/mol) and hydrogenation reactions (conversion yield >97%). In conclusion, the G-CuNi represents a unique noble-metal-free catalyst for hydrogenation reactions in a green environment without using pure hydrogen.

  1. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  2. Dissociated Structure of Dislocation Loops with Burgers Vector a⟨100⟩ in Electron-Irradiated Cu-Ni

    DEFF Research Database (Denmark)

    Bilde-Sørensen, Jørgen; Leffers, Torben; Barlow, P.

    1977-01-01

    The rectangular dislocation loops with total Burgers vector a⟨100⟩ which are formed in Cu-Ni alloys during 1 MeV electron irradiation at elevated temperatures have been examined by weak-beam electron microscopy. The loop edges were found to take up a Hirth-lock configuration, dissociating into two ...

  3. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  4. DO22-(Cu,Ni)3Sn intermetallic compound nanolayer formed in Cu/Sn-nanolayer/Ni structures

    International Nuclear Information System (INIS)

    Liu Lilin; Huang, Haiyou; Fu Ran; Liu Deming; Zhang Tongyi

    2009-01-01

    The present work conducts crystal characterization by High Resolution Transmission Electron Microscopy (HRTEM) on Cu/Sn-nanolayer/Ni sandwich structures, in association with Energy Dispersive X-ray (EDX) analysis. The results show that a DO22-(Cu,Ni)3Sn intermetallic compound (IMC) ordered structure is formed in the sandwich structures in the as-electrodeposited state. The formed DO22-(Cu,Ni)3Sn IMC is a homogeneous layer with a thickness of about 10 nm. The DO22-(Cu,Ni)3Sn IMC nanolayer is stable during annealing at 250 °C for 810 min. The formation and stabilization of the metastable DO22-(Cu,Ni)3Sn IMC nanolayer are attributed to the lower strain energy induced by lattice mismatch between the DO22 IMC and fcc Cu crystals in comparison with that between the equilibrium DO3 IMC and fcc Cu crystals.

  5. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum are ... It can be shown that the function T(n) = 3n/2 − 2 is the solution to the above ...
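
    The bound T(n) = 3n/2 − 2 comparisons for finding the maximum and minimum simultaneously (n even) comes from processing elements in pairs: one comparison orders the pair, then its larger element is compared against the running maximum and its smaller element against the running minimum. A sketch that also counts the comparisons:

```python
def max_min(a):
    """Return (max, min, comparison_count) of an even-length sequence,
    using the pairwise scheme that achieves 3n/2 - 2 comparisons."""
    assert len(a) % 2 == 0 and len(a) >= 2
    comparisons = 1  # first pair initializes both extremes in one comparison
    hi, lo = (a[0], a[1]) if a[0] > a[1] else (a[1], a[0])
    for i in range(2, len(a), 2):
        x, y = a[i], a[i + 1]
        if x < y:            # 1 comparison: order the pair so x >= y
            x, y = y, x
        comparisons += 1
        if x > hi:           # 1 comparison: larger element vs running max
            hi = x
        comparisons += 1
        if y < lo:           # 1 comparison: smaller element vs running min
            lo = y
        comparisons += 1
    return hi, lo, comparisons

print(max_min([7, 2, 9, 4, 1, 8]))  # n = 6, so 3*6/2 - 2 = 7 comparisons
```

    Compared with the naive scan, which spends 2(n − 1) comparisons, the pairing saves roughly n/2 comparisons.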

  7. Molecular dynamics simulation of effects of twin interfaces on Cu/Ni multilayers

    International Nuclear Information System (INIS)

    Fu, Tao; Peng, Xianghe; Weng, Shayuan; Zhao, Yinbo; Gao, Fengshan; Deng, Lijun; Wang, Zhongchang

    2016-01-01

    We perform molecular dynamics simulations of indentation on pure Cu and Ni films and on Cu/Ni multilayered films with a cylindrical indenter, aiming to investigate the effects of the cubic-on-cubic interface and the hetero-twin interface on their mechanical properties. We also systematically investigate the formation of twin boundaries in the pure metals and the effects of the cubic-on-cubic and hetero-twin interfaces on the mechanical properties of the multilayers. We find that the slip of the horizontal stacking fault can release the internal stress, resulting in insignificant strengthening. The change in crystal orientation by horizontal movement of the atoms in a layer-by-layer manner is found to initiate the movement of the twin boundary, and the hetero-twin interface is beneficial to the hardening of the multilayers. Moreover, we also find that an increasing number of hetero-twin interfaces can harden the Cu/Ni multilayers.

  8. Influence of Ni Solute segregation on the intrinsic growth stresses in Cu(Ni) thin films

    International Nuclear Information System (INIS)

    Kaub, T.M.; Felfer, P.; Cairney, J.M.; Thompson, G.B.

    2016-01-01

    Using intrinsic solute segregation in alloys, the compressive stress in a series of Cu(Ni) thin films has been studied. The highest compressive stress was noted in the 5 at.% Ni alloy, with increasing Ni concentration resulting in a subsequent reduction of stress. Atom probe tomography quantified Ni's Gibbsian interfacial excess in the grain boundaries and confirmed that once grain boundary saturation is achieved, the compressive stress was reduced. This letter provides experimental support in elucidating how interfacial segregation of excess adatoms contributes to the post-coalescence compressive stress generation mechanism in thin films. - Graphical abstract: Cu(Ni) film stress relationship with Ni additions. Atom probe characterization confirms solute enrichment in the boundaries, which was linked to stress response.

  9. A study of the annealing and mechanical behaviour of electrodeposited Cu-Ni multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Pickup, C.J.

    1997-08-01

    The mechanical strength of electrodeposited Cu-Ni multilayers is known to vary with deposition wavelength. Since layered coatings are harder and more resistant to wear and abrasion than non-layered coatings, this technique is of industrial interest. Optimisation of the process requires a better understanding of the strengthening mechanisms and the microstructural changes which affect such mechanisms. The work presented in this thesis is the characterisation of a series of Cu-Ni multilayers, covering a wide range of thicknesses of the individual layers in the multilayer, using X-ray diffraction, cross-section TEM, hardness testing and tensile testing. Further, the effects of high temperature annealing on interdiffusion and on changes in internal stresses are documented. (au). 176 refs.

  10. A study of the annealing and mechanical behaviour of electrodeposited Cu-Ni multilayers

    International Nuclear Information System (INIS)

    Pickup, C.J.

    1997-08-01

    The mechanical strength of electrodeposited Cu-Ni multilayers is known to vary with deposition wavelength. Since layered coatings are harder and more resistant to wear and abrasion than non-layered coatings, this technique is of industrial interest. Optimisation of the process requires a better understanding of the strengthening mechanisms and the microstructural changes which affect such mechanisms. The work presented in this thesis is the characterisation of a series of Cu-Ni multilayers, covering a wide range of thicknesses of the individual layers in the multilayer, using X-ray diffraction, cross-section TEM, hardness testing and tensile testing. Further, the effects of high temperature annealing on interdiffusion and on changes in internal stresses are documented. (au)

  11. Autoradiographical Detection of Tritium in Cu-Ni Alloy by Scanning Electron Microscopy

    OpenAIRE

    高安, 紀; 中野, 美樹; 竹内, 豊三郎

    1981-01-01

    The autoradiograph of tritium dispersed in a Cu-Ni alloy sheet by the 6Li(n,α)3H reaction was obtained with a scanning electron microscope. Prior to neutron irradiation, 6Li was deposited on the sheet by evaporation. The liquid emulsion, Fuji-ER, was used in this study. The distribution of tritium was detected from the dispersion of silver grains remaining in the emulsion after development.

  12. AstroCom NYC: A Partnership Between Astronomers at CUNY, AMNH, and Columbia University

    Science.gov (United States)

    Paglione, Timothy; Ford, K. S.; Robbins, D.; Mac Low, M.; Agueros, M. A.

    2014-01-01

    AstroCom NYC is a new program designed to improve urban minority student access to opportunities in astrophysical research by greatly enhancing partnerships between research astronomers in New York City. The partners are minority serving institutions of the City University of New York, and the astrophysics research departments of the American Museum of Natural History and Columbia. AstroCom NYC provides centralized, personalized mentoring as well as financial and academic support, to CUNY undergraduates throughout their studies, plus the resources and opportunities to further CUNY faculty research with students. The goal is that students’ residency at AMNH helps them build a sense of belonging in the field, and inspires and prepares them for graduate study. AstroCom NYC prepares students for research with a rigorous Methods of Scientific Research course developed specifically to this purpose, a laptop, a research mentor, career mentor, involvement in Columbia outreach activities, scholarships and stipends, Metrocards, and regular assessment for maximum effectiveness. Stipends in part alleviate the burdens at home typical for CUNY students so they may concentrate on their academic success. AMNH serves as the central hub for our faculty and students, who are otherwise dispersed among all five boroughs of the City. With our first cohort we experienced the expected challenges from their diverse preparedness, but also far greater than anticipated challenges in scheduling, academic advisement, and molding their expectations. We review Year 1 operations and outcomes, as well as plans for Year 2, when our current students progress to be peer mentors.

  13. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
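
    The fragment sketches the classic Tower of Hanoi recursion: move the top N disks to the auxiliary rod, move the largest disk directly, then restack the N disks on top of it. A complete version of the move_disk idea:

```python
def hanoi(n, source, target, auxiliary, moves=None):
    """Move n disks from source to target, using auxiliary as the spare rod.

    Returns the list of (from_rod, to_rod) moves, of length 2**n - 1.
    """
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, auxiliary, target, moves)  # clear the top n-1 disks
    moves.append((source, target))                  # move the nth disk directly
    hanoi(n - 1, auxiliary, target, source, moves)  # restack on the nth disk
    return moves

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 2**3 - 1 = 7 moves
print(moves[0], moves[-1])
```

    The recurrence for the number of moves, M(n) = 2·M(n − 1) + 1 with M(0) = 0, solves to M(n) = 2ⁿ − 1, which is why the puzzle's cost grows exponentially with the number of disks.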

  14. [Algorithm for taking into account the average annual background of air pollution in the assessment of health risks].

    Science.gov (United States)

    Fokin, M V

    2013-01-01

    State Budgetary Educational Institution of Higher Professional Education "I.M. Sechenov First Moscow State Medical University" of the Ministry of Health care and Social Development, Moscow, Russian Federation. Assessing health risks from air pollution by emissions from industrial facilities without taking the average annual background level of air pollution into account does not comply with sanitary legislation. However, the Russian Federal Service for Hydrometeorology and Environmental Monitoring issues official background certificates only for the limited number of areas covered by full-program observations at stationary monitoring points. Approaches to accounting for the average background air pollution in such health risk assessments are considered.

  15. Correlation of plastic deformation induced intermittent electromagnetic radiation characteristics with mechanical properties of Cu-Ni alloys

    International Nuclear Information System (INIS)

    Singh, Ranjana; Lal, Shree P.; Misra, Ashok

    2015-01-01

    This paper presents experimental results on intermittent electromagnetic radiation during plastic deformation of Cu-Ni alloys under tension and compression modes of deformation. On the basis of the nature of electromagnetic radiation signals, oscillatory or exponential, results show that the compression increases the viscous coefficient of Cu-Ni alloys during plastic deformation. Increasing the percentage of solute atoms in Cu-Ni alloys makes electromagnetic radiation strength higher under tension. The electromagnetic radiation emission occurs at smaller strains under compression showing early onset of plastic deformation. This is attributed to the role of high core region tensile residual stresses in the rolled Cu-Ni alloy specimens in accordance with the Bauschinger effect. The distance between the apexes of the dead metal cones during compression plays a significant role in electromagnetic radiation parameters. The dissociation of edge dislocations into partials and increase in internal stresses with increase in solute percentage in Cu-Ni alloys under compression considerably influences the electromagnetic radiation frequency.

  16. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
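
    The core of such a point-to-point matching step — picking, for each sample spectrum, the closest reference spectrum over a selected spectral window, and subtracting it — can be sketched with NumPy. This is a hypothetical simplification of the published algorithm; the window indices and toy spectra are invented for illustration.

```python
import numpy as np

def select_reference(sample, references, window):
    """Return the index of the reference spectrum closest to `sample`.

    sample:      1-D absorbance spectrum of the current chromatographic scan
    references:  2-D array, one previously recorded reference spectrum per row
    window:      indices of the spectral points used for the comparison
    """
    # Point-to-point squared differences, restricted to the chosen window.
    dist = ((references[:, window] - sample[window]) ** 2).sum(axis=1)
    return int(np.argmin(dist))

# Toy eluent spectra: references recorded at 35%, 60% and 85% acetonitrile.
wavenumbers = np.linspace(0, 1, 100)
references = np.vstack([f * np.sin(6 * wavenumbers) for f in (0.35, 0.60, 0.85)])
sample = 0.58 * np.sin(6 * wavenumbers) + 0.01   # scan near 60% acetonitrile
window = np.arange(20, 80)                        # analyte-free matching region
best = select_reference(sample, references, window)
corrected = sample - references[best]             # background-corrected spectrum
print(best)
```

    Restricting the comparison to an analyte-free window is the crucial design choice: it lets the eluent background be matched even when analyte bands dominate other parts of the spectrum.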

  17. A CuNi bimetallic cathode with nanostructured copper array for enhanced hydrodechlorination of trichloroethylene (TCE).

    Science.gov (United States)

    Liu, Bo; Zhang, Hao; Lu, Qi; Li, Guanghe; Zhang, Fang

    2018-09-01

    To address the challenge of low hydrodechlorination efficiency with non-noble metals, a CuNi bimetallic cathode with a nanostructured copper array film was fabricated for effective electrochemical dechlorination of trichloroethylene (TCE) in aqueous solution. The CuNi bimetallic cathodes were prepared by a simple one-step electrodeposition of copper onto a Ni foam substrate, with electrodeposition times of 5/10/15/20 min. The optimum electrodeposition time was 10 min, at which copper coated the nickel foam surface as a uniform nanosheet array. This cathode exhibited the highest TCE removal, twice that of the bare nickel foam cathode. At the same passed charge of 1080 C, TCE removal increased from 33.9 ± 3.3% to 99.7 ± 0.1% as the operating current density increased from 5 to 20 mA cm⁻², while the normalized energy consumption decreased from 15.1 ± 1.0 to 2.6 ± 0.01 kWh log⁻¹ m⁻³. The decrease in normalized energy consumption at higher current density was due to the much higher removal efficiency at the higher current. These results suggest that CuNi cathodes prepared by a simple electrodeposition method represent a promising and cost-effective approach for enhanced electrochemical dechlorination. Copyright © 2018 Elsevier B.V. All rights reserved.
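
    The normalized energy consumption quoted above, in kWh per m³ per log unit of removal, is the familiar electrical-energy-per-order figure of merit: E = P·t / (V·log₁₀(C₀/C)). A worked example with hypothetical operating values (not the paper's data) shows how the unit arises:

```python
import math

def energy_per_order(power_kw, time_h, volume_m3, c0, c):
    """Electrical energy per order of magnitude of contaminant removal,
    in kWh m^-3 log^-1: P * t / (V * log10(c0 / c))."""
    return power_kw * time_h / (volume_m3 * math.log10(c0 / c))

# Hypothetical cell: 5 W drawn for 1.5 h treating 0.5 L with 99.7% removal.
eeo = energy_per_order(power_kw=0.005, time_h=1.5, volume_m3=0.0005,
                       c0=100.0, c=0.3)
print(round(eeo, 2))
```

    Because the denominator grows with log₁₀(C₀/C), a current increase that pushes removal from ~34% to ~99.7% raises the number of "orders" removed far faster than it raises the energy input, which is why the normalized consumption falls at the higher current density.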

  18. Local radiofrequency-induced hyperthermia using CuNi nanoparticles with therapeutically suitable Curie temperature

    International Nuclear Information System (INIS)

    Kuznetsov, Anatoly A.; Leontiev, Vladimir G.; Brukvin, Vladimir A.; Vorozhtsov, Georgy N.; Kogan, Boris Ya.; Shlyakhtin, Oleg A.; Yunin, Alexander M.; Tsybin, Oleg I.; Kuznetsov, Oleg A.

    2007-01-01

    Copper-nickel (CuNi) alloy nanoparticles with Curie temperatures (Tc) from 40 to 60 °C were synthesized by several techniques. Varying the synthesis parameters and post-treatment, as well as separation by size and Tc, allows producing mediator nanoparticles with the desired parameters for magnetic fluid hyperthermia with parametric feedback temperature control. In vitro and in vivo animal experiments have demonstrated the feasibility of temperature-controlled heating of particle-laden tissue by an external alternating magnetic field.

  19. Local radiofrequency-induced hyperthermia using CuNi nanoparticles with therapeutically suitable Curie temperature

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Anatoly A. [Institute of Biochemical Physics, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Leontiev, Vladimir G. [Institute of Metallurgy, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Brukvin, Vladimir A. [Institute of Metallurgy, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Vorozhtsov, Georgy N. [NIOPIK Organic Intermediates and Dyes Institute, Moscow 103787 (Russian Federation); Kogan, Boris Ya. [NIOPIK Organic Intermediates and Dyes Institute, Moscow 103787 (Russian Federation); Shlyakhtin, Oleg A. [Institute of Chemical Physics, Russian Academy of Sciences (RAS), Kosygin St. 4, Moscow 119991 (Russian Federation); Yunin, Alexander M. [Institute of Biochemical Physics, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Tsybin, Oleg I. [Institute of Metallurgy, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation); Kuznetsov, Oleg A. [Institute of Biochemical Physics, Russian Academy of Sciences (RAS), Moscow 119991 (Russian Federation)]. E-mail: kuznetsov_oa@yahoo.com

    2007-04-15

    Copper-nickel (CuNi) alloy nanoparticles with Curie temperatures (Tc) from 40 to 60 °C were synthesized by several techniques. Varying the synthesis parameters and post-treatment, together with separation by size and Tc, allows the production of mediator nanoparticles with the desired parameters for magnetic fluid hyperthermia with parametric feedback temperature control. In vitro and in vivo animal experiments have demonstrated the feasibility of temperature-controlled heating of particle-laden tissue by an external alternating magnetic field.

  20. Influence of ni thickness on oscillation coupling in Cu/Ni multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Gagorowska, B; Dus-Sitek, M [Institute of Physics, Czestochowa University of Technology, Al. Armii Krajowej 19, 42-200 Czestochowa (Poland)

    2007-08-15

    The magnetic properties of [Cu/Ni]x100 multilayers were investigated. The samples were deposited by the face-to-face sputtering method onto silicon substrates; the thickness of the Cu layer was constant (d_Cu = 2 nm) and the thickness of the Ni layer was varied (1 nm ≤ d_Ni ≤ 6 nm). In the Cu/Ni multilayers, antiferromagnetic (A-F) coupling was observed for Ni layer thicknesses greater than 2 nm; for Ni thicknesses smaller than 2 nm, A-F coupling is absent.

  1. Influence of ni thickness on oscillation coupling in Cu/Ni multilayers

    International Nuclear Information System (INIS)

    Gagorowska, B; Dus-Sitek, M

    2007-01-01

    The magnetic properties of [Cu/Ni]x100 multilayers were investigated. The samples were deposited by the face-to-face sputtering method onto silicon substrates; the thickness of the Cu layer was constant (d_Cu = 2 nm) and the thickness of the Ni layer was varied (1 nm ≤ d_Ni ≤ 6 nm). In the Cu/Ni multilayers, antiferromagnetic (A-F) coupling was observed for Ni layer thicknesses greater than 2 nm; for Ni thicknesses smaller than 2 nm, A-F coupling is absent.

  2. Fatigue of thin walled tubes in copper alloy CuNi10

    DEFF Research Database (Denmark)

    Lambertsen, Søren Heide; Damkilde, Lars; Jepsen, Michael S.

    2016-01-01

    The current work concerns the investigation of the fatigue resistance of CuNi10 tubes, which are frequently used in heat exchangers of large ship engines. The lifetime performance of the exchanger tubes is greatly affected by the environmental conditions, where especially the temperature...... by means of the ASTM E739 guideline and the one-sided tolerance limit factor method. The tests show good fatigue resistance, and the risk of failure is low in the case of a ship heat exchanger.

  3. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    Science.gov (United States)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are built from pixelated scintillation crystal arrays read out by Anger logic. The interaction position of each gamma-ray must be assigned to a crystal using a crystal position map or look-up table, so crystal identification is a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO based animal PET system via Lu-176 background radiation and the mean shift algorithm. Single photon event data of the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then deducted from the SPFM by subtracting the CFM, and the peaks of the outer layer are identified with the same mean shift procedure. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method is verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for dual-layer-offset lutetium-based PET systems, performing crystal identification without external radiation sources.
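The mean-shift peak search used in the abstract above can be illustrated with a minimal flat-kernel sketch on synthetic 2D flood-map points. This is an illustrative reimplementation under stated assumptions, not the authors' code; `mean_shift_peaks` and its parameters are hypothetical:

```python
import numpy as np

def mean_shift_peaks(points, bandwidth=1.0, n_iter=30, merge_tol=0.5):
    """Flat-kernel mean shift: drift each point toward the centroid of its
    neighbours until it sits on a local density mode, then merge modes
    closer than merge_tol into distinct peaks."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            # neighbours of the current mode within the kernel bandwidth
            nbrs = points[np.linalg.norm(points - m, axis=1) < bandwidth]
            if len(nbrs):
                modes[i] = nbrs.mean(axis=0)
    peaks = []
    for m in modes:
        # keep only one representative per converged mode
        if not any(np.linalg.norm(m - p) < merge_tol for p in peaks):
            peaks.append(m)
    return np.array(peaks)

# two synthetic crystal spots on a 2D "flood map"
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal((2.0, 2.0), 0.1, (100, 2)),
                 rng.normal((6.0, 6.0), 0.1, (100, 2))])
peaks = mean_shift_peaks(pts, bandwidth=1.0)
print(len(peaks))  # 2
```

On a real flood map the same idea runs on event coordinates per detector block, with the bandwidth chosen to be smaller than the crystal pitch so neighbouring crystal spots are not merged.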

  4. Solution-Based Epitaxial Growth of Magnetically Responsive Cu@Ni Nanowires

    KAUST Repository

    Zhang, Shengmao

    2010-02-23

    An experiment was conducted to show the solution-based epitaxial growth of magnetically responsive Cu@Ni nanowires. The Ni-sheathed Cu nanowires were synthesized with a one-pot approach: 30 mL of high-concentration NaOH, Cu(NO3)2·3H2O, Cu(NO3)2·3H2O and 0.07-0.30 mL of Ni(NO3)2·6H2O aqueous solutions were added into a plastic reactor with a capacity of 50.0 mL. A varying amount of ethylenediamine (EDA) and hydrazine were also added sequentially, followed by thorough mixing of all reagents. The dimension, morphology, and chemical composition of the products were examined by scanning electron microscopy with energy-dispersive X-ray spectroscopy. XPS analysis of the as-formed Cu nanowires confirms that there is indeed no nickel inclusion in the nanowires prior to the formation of the nickel overcoat, which rules out the possibility of Cu-Ni alloy formation.

  5. Magnetic susceptibility, specific heat and magnetic structure of CuNi2(PO4)2

    International Nuclear Information System (INIS)

    Escobal, Jaione; Pizarro, Jose L.; Mesa, Jose L.; Larranaga, Aitor; Fernandez, Jesus Rodriguez; Arriortua, Maria I.; Rojo, Teofilo

    2006-01-01

    The CuNi2(PO4)2 phosphate has been synthesized by the ceramic method at 800 °C in air. The crystal structure consists of a three-dimensional skeleton constructed from MO4 (M(II) = Cu and Ni) planar squares and M2O8 dimers with square-pyramidal geometry, which are interconnected by (PO4)3- oxoanions with tetrahedral geometry. The magnetic behavior has been studied on a powdered sample using susceptibility, specific heat and neutron diffraction data. The bimetallic copper(II)-nickel(II) orthophosphate exhibits three-dimensional magnetic ordering at approximately 29.8 K; however, its complex crystal structure hampers any parametrization of the J-exchange parameter. The specific heat measurements exhibit a three-dimensional magnetic ordering (λ-type) peak at 29.5 K. The magnetic structure of this phosphate shows ferromagnetic interactions inside the Ni2O8 dimers, whereas the sublattice of Cu(II) ions presents antiferromagnetic couplings along the y-axis. The change of sign in the magnetic unit cell, due to the [1/2, 0, 1/2] propagation vector, determines a purely antiferromagnetic structure. - Graphical abstract: Magnetic structure of CuNi2(PO4)2.

  6. Solution-Based Epitaxial Growth of Magnetically Responsive Cu@Ni Nanowires

    KAUST Repository

    Zhang, Shengmao; Zeng, Hua Chun

    2010-01-01

    An experiment was conducted to show the solution-based epitaxial growth of magnetically responsive Cu@Ni nanowires. The Ni-sheathed Cu nanowires were synthesized with a one-pot approach: 30 mL of high-concentration NaOH, Cu(NO3)2·3H2O, Cu(NO3)2·3H2O and 0.07-0.30 mL of Ni(NO3)2·6H2O aqueous solutions were added into a plastic reactor with a capacity of 50.0 mL. A varying amount of ethylenediamine (EDA) and hydrazine were also added sequentially, followed by thorough mixing of all reagents. The dimension, morphology, and chemical composition of the products were examined by scanning electron microscopy with energy-dispersive X-ray spectroscopy. XPS analysis of the as-formed Cu nanowires confirms that there is indeed no nickel inclusion in the nanowires prior to the formation of the nickel overcoat, which rules out the possibility of Cu-Ni alloy formation.

  7. Study on the occurrence of platinum in Xinjie Cu-Ni sulfide deposits by a combination of SPM and NAA

    International Nuclear Information System (INIS)

    Li Xiaolin; Zhu Jieqing; Lu Rongrong; Gu Yingmei; Wu Xiankang; Chen Youhong

    1997-01-01

    A combination of neutron activation analysis (NAA) and scanning proton microprobe (SPM) was used to study the distribution of platinum-group elements (PGEs) in rocks and ores from the Xinjie Cu-Ni deposit. The minimum detection limits of PGEs by NAA were much improved by means of a nickel-sulfide fire-assay technique for pre-concentration of PGEs in the ore samples. A simple and effective method was developed for true element mapping in SPM experiments. A pair of movable absorption filters was set up in the target chamber for high sensitivity to both major and trace elements. The bulk analysis results by NAA indicated that the PGE mineralization occurred at the base of the Xinjie layered intrusion in clinopyroxenite rocks, and that the Cu-Ni sulfide minerals disseminated within the rocks had a high abundance of PGEs. However, micro-PIXE analysis of the Cu-Ni sulfide mineral grains did not find PGEs above the MDL of (6-9) x 10^-6 for Rh, Ru and Pd, and 6 x 10^-6 for Pt. SPM scanning analysis then located some smaller platinum-enriched grains in the sulfide minerals. The microscopic analysis results suggested that platinum occurred in the Cu-Ni sulfide matrix as independent arsenide mineral grains; the arsenide was sperrylite, PtAs2. This information on the platinum occurrence is helpful for future mineralogical research and for mineral processing and beneficiation of the Cu-Ni deposit

  8. Investigations on Cu-Ni and Cu-Al systems with secondary ion mass spectrometry (SIMS)

    International Nuclear Information System (INIS)

    Rodriguez-Murcia, H.; Beske, H.E.

    1976-04-01

    The ratio of the ionization coefficients of secondary atomic ions emitted from the two-component systems Cu-Ni and Cu-Al was investigated as a function of the concentration of the two components. In the low concentration range the ratio of the ionization coefficients is constant. An influence of the phase composition on the ratio of the ionization coefficients was found in the Cu-Al system. In addition, the cluster ion emission was investigated as a function of the concentration and the phase composition of the samples. The secondary atomic ion intensity was influenced by the presence of cluster ions. The importance of cluster ions in quantitative analysis and phase determination by means of secondary ion mass spectrometry is discussed. (orig.) [de]

  9. Effect of preparation conditions on the diffusion parameters of Cu/Ni thin films

    Energy Technology Data Exchange (ETDEWEB)

    Rammo, N.N.; Makadsi, M.N. [College of Science, Baghdad University, Baghdad (Iraq); Abdul-Lettif, A.M. [College of Science, Babylon University, Hilla (Iraq)

    2004-11-01

    Diffusion coefficients of vacuum-deposited Cu/Ni bilayer thin films were determined in the temperature range 200-500 °C using X-ray photoelectron spectroscopy, sheet resistance measurements, and X-ray diffraction analysis. The difference between the results of the present work and those of previous relevant investigations may be attributed to differences in the film microstructure, which is controlled by the preparation conditions. Therefore, the effects of deposition rate, substrate temperature, film thickness, and substrate structure on the diffusion parameters were investigated separately. It is shown that the diffusion activation energy (Q) decreases as the deposition rate increases, whereas Q increases as substrate temperature and film thickness increase. The value of Q for films deposited on amorphous substrates is less than that for films deposited on single-crystal substrates. (copyright 2004 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
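Activation energies such as Q are conventionally extracted from an Arrhenius fit of the measured diffusion coefficients, ln D = ln D0 - Q/(kB·T). The sketch below uses synthetic data, not the paper's measurements; the function name and values are illustrative:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(T_kelvin, D):
    """Least-squares Arrhenius fit: ln D = ln D0 - Q/(KB*T).
    Returns (Q in eV, pre-exponential D0 in the units of D)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(D), 1)
    return -slope * KB, np.exp(intercept)

# synthetic diffusivities generated with Q = 1.2 eV, D0 = 1e-4 (arbitrary units)
Q_true, D0_true = 1.2, 1e-4
T = np.array([473.0, 573.0, 673.0, 773.0])  # 200-500 C in kelvin
D = D0_true * np.exp(-Q_true / (KB * T))
Q_fit, D0_fit = activation_energy(T, D)
print(round(Q_fit, 3))  # 1.2
```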

  10. Investigation of optical properties of Cu/Ni multilayer nanowires embedded in etched ion-track template

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Lu [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Graduate School of the Chinese Academy of Sciences, Beijing 100049 (China); Yao, Huijun, E-mail: Yaohuijun@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Duan, Jinglai; Chen, Yonghui [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Lyu, Shuangbao [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Graduate School of the Chinese Academy of Sciences, Beijing 100049 (China); Maaz, Khan [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Nanomaterials Research Group, Physics Division, PINSTECH, Nilore 45650, Islamabad (Pakistan); Mo, Dan [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Liu, Jie, E-mail: J.Liu@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Sun, Youmei; Hou, Mingdong [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China)

    2016-12-01

    Graphical abstract: Schematic diagram of the measurement of extinction spectra of Cu/Ni multilayer nanowire arrays embedded in the template after removing the gold/copper substrate. - Highlights: • The optical properties of Cu/Ni multilayer nanowire arrays were investigated for the first time by UV/Vis/NIR spectrometry, confirming that the extinction peaks are strongly related to the periodicity of the multilayer nanowire. • The Ni segment acts as a kind of impurity that can change the surface electron distribution and thereby the extinction peaks of the nanowire. • The present work supplies clear layer-thickness information for Cu and Ni in Cu/Ni multilayer nanowires from TEM and EDS line-scan profile analysis. - Abstract: To understand the interaction between light and noble-metal/magnetic multilayer nanowires, Cu/Ni multilayer nanowires were fabricated by a multi-potential-step deposition technique in an etched ion-track polycarbonate template. The composition and the corresponding layer thickness of the multilayer nanowires were confirmed by TEM and EDS line-scan analysis. By tailoring the nanowire diameter, the Cu layer thickness and the periodicity of the nanowire, the extinction spectra of the nanowire arrays exhibit an extra sensitivity to changes in the structural parameters. The resonance wavelength caused by surface plasmon resonance increases markedly with increasing nanowire diameter, Cu layer thickness and periodicity. These observations can be explained by the “impurity effect” and a coupling effect, and can be exploited for developing optical devices based on multilayer nanowires.

  11. Typical failures of CuNi 90/10 seawater tubing systems and how to avoid them

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, Wilhelm [Technical Advisory Service, KM Europa Metal AG, Klosterstr. 29, 49074 Osnabrueck (Germany)

    2004-07-01

    For many decades, the copper-nickel alloy CuNi 90/10 (UNS C70600) has been used extensively as a piping material for seawater systems in the shipbuilding, offshore, and desalination industries. The attractive characteristics of this alloy combine excellent resistance to uniform corrosion, remarkable resistance to localised corrosion in chlorinated seawater, and higher erosion resistance than other copper alloys and steel. Furthermore, CuNi 90/10 is resistant to biofouling, providing various economic benefits. In spite of the appropriate properties of the alloy, instances of failure have been experienced in practice. The reasons are mostly attributed to the composition and production of CuNi 90/10 products, the occurrence of erosion-corrosion, and corrosion damage in polluted waters. This paper covers the important areas which have to be considered to ensure successful application of the alloy for seawater tubing. For this purpose, the optimum and critical operating conditions are evaluated, including metallurgical, design and fabrication considerations. For the prevention of erosion-corrosion, the importance of hydrodynamics is demonstrated. In addition, commissioning, shut-down and start-up measures are compiled that are necessary for the establishment and re-establishment of the protective layer. (author)

  12. The Effect of Surfactant Content over Cu-Ni Coatings Electroplated by the sc-CO₂ Technique.

    Science.gov (United States)

    Chuang, Ho-Chiao; Sánchez, Jorge; Cheng, Hsiang-Yun

    2017-04-19

    Co-plating of Cu-Ni coatings by supercritical CO₂ (sc-CO₂) and conventional electroplating processes was studied in this work. 1,4-Butynediol was chosen as the surfactant and the effects of adjusting the surfactant content are described. Although the sc-CO₂ process displayed lower current efficiency, it effectively removed excess hydrogen that causes defects on the coating surface, refined the grain size, reduced surface roughness, and increased electrochemical resistance. The surface roughness of coatings fabricated by the sc-CO₂ process was reduced by an average of 10%, and a maximum of 55%, compared to the conventional process at different fabrication parameters. Cu-Ni coatings produced by the sc-CO₂ process displayed a corrosion potential increased by ~0.05 V over Cu-Ni coatings produced by the conventional process, and by 0.175 V over pure Cu coatings produced by the conventional process. For coatings ~10 µm thick, the internal stress developed in the sc-CO₂ process was ~20 MPa lower than in the conventional process. Finally, the preferred crystal orientation of the fabricated coatings remained in the (111) direction regardless of the process used or the surfactant content.

  13. The activation energy for loop growth in Cu and Cu-Ni alloys

    International Nuclear Information System (INIS)

    Barlow, P.; Leffers, T.; Singh, B.N.

    1978-08-01

    The apparent activation energy for the growth of interstitial dislocation loops in copper, Cu-1%Ni, Cu-2%Ni, and Cu-5%Ni during high-voltage electron microscope irradiation was determined. The apparent activation energy for loop growth in all these materials can be taken to be 0.34 ± 0.02 eV. This value, together with the corresponding value of 0.44 ± 0.02 eV determined earlier for Cu-10%Ni, is discussed with reference to the void growth rates observed in these materials. The apparent activation energy for loop growth in copper (and in Cu-1%Ni, which has a void growth rate similar to that in pure copper) is interpreted as twice the vacancy migration energy (indicating that divacancies do not play any significant role). For the materials with higher Ni content (in which the void growth rate is much lower than in Cu and Cu-1%Ni), the measured apparent activation energy is interpreted to be characteristic of loops positioned fairly close to the foil surface and not of loops in ''bulk material''. From the present results in combination with the earlier results for Cu-10%Ni it is concluded that interstitial trapping is the most likely explanation of the reduced void growth rate in Cu-Ni alloys. (author)

  14. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  15. Cu-Ni nanowire-based TiO{sub 2} hybrid for the dynamic photodegradation of acetaldehyde gas pollutant under visible light

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Shuying [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China); University of Chinese Academy of Sciences, 19 Yuquan Road, Beijing 100049 (China); Xie, Xiaofeng, E-mail: xxfshcn@163.com [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China); Chen, Sheng-Chieh [College of Science and Engineering, University of Minnesota, Minneapolis, MN 55455 (United States); Tong, Shengrui [Institute of Chemistry, Chinese Academy of Sciences, Beijing 100190 (China); Lu, Guanhong [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China); Pui, David Y.H. [College of Science and Engineering, University of Minnesota, Minneapolis, MN 55455 (United States); Sun, Jing [Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai 200050 (China)

    2017-06-30

    Graphical abstract: One-dimensional Cu-Ni bimetallic nanowires were introduced into a TiO2-based matrix to enhance its photocatalytic efficiency and expand its light absorption range. - Highlights: • Cu-Ni nanowire-based TiO2 hybrid photocatalyst. • One-dimensional electron pathways and surface plasmon resonance effects. • Dynamic photodegradation of the acetaldehyde gas pollutant. - Abstract: In this work, one-dimensional bimetallic nanowires were introduced into a TiO2-based matrix to enhance its photocatalytic efficiency and expand its light absorption range. Metal nanowires have recently attracted much attention in photocatalysis research because of their favorable electron-transport properties and especially their surface plasmon resonance effects. Moreover, Cu-Ni bimetallic nanowires (Cu-Ni NWs) have shown better chemical stability than ordinary monometallic nanowires in our recent work. Interestingly, the Ni sleeves of the bimetallic nanowires can also modify the Schottky barrier at the interface between TiO2 and the metallic conductor, which benefits the separation of photogenerated carriers in the Cu-Ni/TiO2 network topology. Hence, a novel heterostructured photocatalyst composed of Cu-Ni NWs and TiO2 nanoparticles (NPs) was fabricated by a one-step hydrolysis approach to explore its photocatalytic performance. TEM and EDX mapping images of this TiO2 NPs @Cu-Ni NWs (TCN) hybrid showed that the Cu-Ni NWs were wrapped by a compact TiO2 layer and retained their one-dimensional structure in the matrix. In experiments, the photocatalytic performance of the TCN nanocomposite was significantly enhanced compared to pure TiO2. Acetaldehyde, a common gas pollutant in the environment, was employed to evaluate the photodegradation efficiency of a series of TCN nanocomposites under continuous feeding. The TCN exhibited excellent photodegradation performance, where the

  16. Influence of preparation method on supported Cu-Ni alloys and their catalytic properties in high pressure CO hydrogenation

    DEFF Research Database (Denmark)

    Wu, Qiongxiao; Eriksen, Winnie L.; Duchstein, Linus Daniel Leonhard

    2014-01-01

    (50 bar CO and 50 bar H2). These alloy catalysts are highly selective (more than 99 mol%) and active for methanol synthesis; however, loss of Ni caused by nickel carbonyl formation is found to be a serious issue. The Ni carbonyl formation should be considered, if Ni-containing catalysts (even...... high surface area silica supported catalysts (BET surface area up to 322 m2 g-1, and metal area calculated from X-ray diffraction particle size up to 29 m2 g-1). The formation of bimetallic Cu-Ni alloy nanoparticles has been studied during reduction using in situ X-ray diffraction. Compared...

  17. Self-consistent electronic structure and segregation profiles of the Cu-Ni (001) random-alloy surface

    DEFF Research Database (Denmark)

    Ruban, Andrei; Abrikosov, I. A.; Kats, D. Ya.

    1994-01-01

    We have calculated the electronic structure and segregation profiles of the (001) surface of random Cu-Ni alloys with varying bulk concentrations by means of the coherent potential approximation and the linear muffin-tin-orbitals method. Exchange and correlation were included within the local......-density approximation. Temperature effects were accounted for by means of the cluster-variation method and, for comparison, by mean-field theory. The necessary interaction parameters were calculated by the Connolly-Williams method generalized to the case of a surface of a random alloy. We find the segregation profiles...

  18. An augmented space recursive method for the first principles study of concentration profiles at CuNi alloy surfaces

    International Nuclear Information System (INIS)

    Dasgupta, I.; Mookerjee, A.

    1995-07-01

    We present here a first-principles method for the calculation of effective cluster interactions for semi-infinite solid alloys, required for the study of surface segregation and surface ordering on disordered surfaces. Our method is based on the augmented space recursion coupled with the orbital peeling method of Burke in the framework of the TB-LMTO. Our study of surface segregation in CuNi alloys demonstrates strong copper segregation and a monotonic concentration profile throughout the concentration range. (author). 35 refs, 4 figs, 2 tabs

  19. A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) for extraction of drug metabolites in liquid chromatography/mass spectrometry data from biological matrices.

    Science.gov (United States)

    Zhu, Peijuan; Ding, Wei; Tong, Wei; Ghosal, Anima; Alton, Kevin; Chowdhury, Swapan

    2009-06-01

    A retention-time-shift-tolerant background subtraction and noise reduction algorithm (BgS-NoRA) is implemented using the statistical programming language R to remove non-drug-related ion signals from accurate-mass liquid chromatography/mass spectrometry (LC/MS) data. The background-subtraction part of the algorithm is similar to a previously published procedure (Zhang H and Yang Y. J. Mass Spectrom. 2008, 43: 1181-1190). The noise reduction algorithm (NoRA) is an add-on feature that helps further clean up residual matrix ion noise after background subtraction. It functions by removing ion signals that are not consistent across many adjacent scans. The effectiveness of BgS-NoRA was examined in biological matrices by spiking blank plasma extract, bile and urine with diclofenac and ibuprofen that had been pre-metabolized by microsomal incubation. Efficient removal of background ions permitted the detection of drug-related ions in in vivo samples (plasma, bile, urine and feces) obtained from rats orally dosed with (14)C-loratadine with minimal interference. Results from these experiments demonstrate that BgS-NoRA is more effective in removing analyte-unrelated ions than background subtraction alone. NoRA is shown to be particularly effective in the early retention region for urine samples and the middle retention region for bile samples, where matrix ion signals still dominate the total ion chromatograms (TICs) after background subtraction. In most cases, the TICs after BgS-NoRA are in excellent qualitative agreement with the radiochromatograms. BgS-NoRA will be a very useful tool in metabolite detection and identification work, especially in first-in-human (FIH) studies and multiple-dose toxicology studies where non-radiolabeled drugs are administered. Data from these types of studies are critical to meet the latest FDA guidance on Metabolites in Safety Testing (MIST). Copyright (c) 2009 John Wiley & Sons, Ltd.
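The scan-consistency idea behind NoRA (drop ion signals that do not persist across adjacent scans) can be sketched as follows. This is a simplified Python illustration, not the published R implementation; `noise_reduce` and the `min_run` parameter are hypothetical:

```python
import numpy as np

def noise_reduce(intensity, min_run=3):
    """Zero out ion signals that are not present (non-zero) in at least
    min_run consecutive scans: spike-like matrix noise is dropped, while
    chromatographic peaks spanning adjacent scans survive.
    intensity: 2D array, rows = scans, columns = m/z bins."""
    present = intensity > 0
    keep = np.zeros_like(present)
    n_scans = intensity.shape[0]
    for col in range(intensity.shape[1]):
        run = 0
        for row in range(n_scans):
            if present[row, col]:
                run += 1
            else:
                if run >= min_run:
                    keep[row - run:row, col] = True  # keep the whole run
                run = 0
        if run >= min_run:  # run reaching the last scan
            keep[n_scans - run:, col] = True
    return np.where(keep, intensity, 0)

# column 0: a real peak spanning 4 scans; column 1: isolated spike noise
x = np.array([[0, 0],
              [5, 9],
              [7, 0],
              [8, 0],
              [4, 9],
              [0, 0]], dtype=float)
cleaned = noise_reduce(x, min_run=3)
print(cleaned[:, 1].sum())  # 0.0 -- spikes removed, peak in column 0 kept
```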

  20. Electrode kinetics of ethanol oxidation on novel CuNi alloy supported catalysts synthesized from PTFE suspension

    Science.gov (United States)

    Sen Gupta, S.; Datta, J.

    An understanding of the kinetics and mechanism of the electrochemical oxidation of ethanol is of considerable interest for the optimization of the direct ethanol fuel cell. In this paper, the electro-oxidation of ethanol in sodium hydroxide solution has been studied over 70:30 CuNi alloy supported binary platinum electrocatalysts, comprising mixed deposits of Pt with Ru or Mo. The electrodepositions were carried out under galvanostatic conditions from a dilute suspension of polytetrafluoroethylene (PTFE) containing the respective metal salts. Characterization of the catalyst layers by scanning electron microscopy (SEM) with energy-dispersive X-ray (EDX) analysis indicated that this preparation technique yields well-dispersed catalyst particles on the CuNi alloy substrate. Cyclic voltammetry, polarization studies and electrochemical impedance spectroscopy were used to investigate the kinetics and mechanism of ethanol electro-oxidation over a range of NaOH and ethanol concentrations. The relevant parameters, such as the Tafel slope, the charge transfer resistance and the reaction orders with respect to OH- ions and ethanol, were determined.
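For context, the Tafel slope mentioned above is the slope b of the linear (activation-controlled) region of η = a + b·log10(i). A minimal least-squares sketch on synthetic data follows; it is illustrative only, not the paper's measurements, and the function name is hypothetical:

```python
import numpy as np

def tafel_slope(overpotential_V, current_A):
    """Fit the Tafel relation eta = a + b*log10(i) by least squares
    over the activation-controlled region; returns b in V/decade."""
    b, a = np.polyfit(np.log10(current_A), overpotential_V, 1)
    return b

# synthetic activation-controlled data with b = 0.12 V/decade
i = np.logspace(-5, -2, 20)              # current density, A cm^-2
eta = 0.05 + 0.12 * np.log10(i / 1e-6)   # overpotential, V
print(round(tafel_slope(eta, i), 3))  # 0.12
```

In practice the fit must be restricted to the region where the log(i)-η plot is actually linear; mass-transport-limited points at high current would bias the slope.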

  1. Effect of chemical etching on the Cu/Ni metallization of poly (ether ether ketone)/carbon fiber composites

    International Nuclear Information System (INIS)

    Di Lizhi; Liu Bin; Song Jianjing; Shan Dan; Yang Dean

    2011-01-01

    Poly(ether ether ketone)/carbon fiber composites (PEEK/Cf) were chemically etched with Cr2O3/H2SO4 solution, electroless plated with copper and then electroplated with nickel. The effects of chemical etching time and temperature on the adhesive strength between PEEK/Cf and the Cu/Ni layers were studied by the thermal shock method, and the electrical resistance of some samples was measured. X-ray photoelectron spectroscopy (XPS) was used to analyze the surface composition and functional groups. Scanning electron microscopy (SEM) was performed to observe the surface morphology of the composite, the chemically etched sample, the plated sample and the peeled metal layer. The results indicated that C=O bonds increased after chemical etching. As the etching temperature and time increased, more and more cracks and partially exposed carbon fibers appeared at the surface of the PEEK/Cf composites, and the adhesive strength increased accordingly. When the composites were etched at 60 °C for 25 min, or at 70-80 °C for more than 15 min, the Cu/Ni metallization layer could withstand four thermal shock cycles without bubbling, and the electrical resistivity of the metal layer increased with etching temperature and time.

  2. The Pobei Cu-Ni and Fe ore deposits in NW China are comagmatic evolution products: evidence from ore microscopy, zircon U-Pb chronology and geochemistry

    Energy Technology Data Exchange (ETDEWEB)

    Liu, G.I.; Li, W.Y.; Lu, X.B.; Huo, Y.H.; Zhang, B.

    2017-11-01

    The Pobei mafic-ultramafic complex in northwestern China comprises magmatic Cu-Ni sulfide ore deposits coexisting with Fe-Ti oxide deposits. The Poshi, Poyi, and Podong ultramafic intrusions host the Cu-Ni ore. The ultramafic intrusions experienced four stages during their formation; the intrusion sequence was as follows: dunite, hornblende-peridotite, wehrlite and pyroxenite. The wall rock of the ultramafic intrusions is the gabbro intrusion in the southwestern part of the Pobei complex. The Xiaochangshan magmatic deposit outcrops in the magnetite-mineralized gabbro in the northeastern part of the Pobei complex. The main emplacement events related to the mineralization in the Pobei complex are the magnetite-mineralized gabbro related to the Xiaochangshan Fe deposit, the gabbro intrusion associated with the Poyi, Poshi and Podong Cu-Ni deposits, and the ultramafic intrusions that host the Cu-Ni deposits (Poyi and Poshi). The U-Pb age of the magnetite-mineralized gabbro is 276±1.7 Ma, which is similar to that of the Pobei mafic intrusions. The εHf(t) value of zircon in the magnetite-mineralized gabbro is almost the same as that of the gabbro around the Poyi and Poshi Cu-Ni deposits, indicating that the rocks related to the Cu-Ni and magnetite deposits probably originated from the same parental magma. There is a trend of crystallization-differentiation evolution in the Harker diagram from the dunite in the Cu-Ni deposit to the magnetite-mineralized gabbro. The monosulfide solid solution fractional crystallization was weak in Pobei; thus, the Pd/Ir values were only influenced by the crystallization of silicate minerals. The more complete the magma evolution, the greater the Pd/Ir ratio. The Pd/Ir values of the dunite, the sulfide-bearing lithofacies (including hornblende peridotite, wehrlite, and pyroxenite) in the Poyi Cu-Ni deposit, the magnetite-mineralized gabbro, and the massive magnetite are 8.55, 12.18, 12.26, and 18.14, respectively. Thus, the massive magnetite was probably the

  3. The Pobei Cu-Ni and Fe ore deposits in NW China are comagmatic evolution products: evidence from ore microscopy, zircon U-Pb chronology and geochemistry

    International Nuclear Information System (INIS)

    Liu, G.I.; Li, W.Y.; Lu, X.B.; Huo, Y.H.; Zhang, B.

    2017-01-01

    The Pobei mafic-ultramafic complex in northwestern China comprises magmatic Cu-Ni sulfide ore deposits coexisting with Fe-Ti oxide deposits. The Poshi, Poyi, and Podong ultramafic intrusions host the Cu-Ni ore. The ultramafic intrusions experienced four stages during their formation, with the intrusion sequence dunite, hornblende-peridotite, wehrlite and pyroxenite. The wall rock of the ultramafic intrusions is the gabbro intrusion in the southwestern part of the Pobei complex. The Xiaochangshan magmatic deposit outcrops in the magnetite-mineralized gabbro in the northeastern part of the Pobei complex. The main emplacement events related to the mineralization in the Pobei complex are the magnetite-mineralized gabbro related to the Xiaochangshan Fe deposit, the gabbro intrusion associated with the Poyi, Poshi and Podong Cu-Ni deposits, and the ultramafic intrusions that host the Cu-Ni deposits (Poyi and Poshi). The U-Pb age of the magnetite-mineralized gabbro is 276 ± 1.7 Ma, which is similar to that of the Pobei mafic intrusions. The εHf(t) value of zircon in the magnetite-mineralized gabbro is almost the same as that of the gabbro around the Poyi and Poshi Cu-Ni deposits, indicating that the rocks related to the Cu-Ni and magnetite deposits probably originated from the same parental magma. The Harker diagram shows a crystallization-differentiation trend from the dunite in the Cu-Ni deposit to the magnetite-mineralized gabbro. Monosulfide solid solution fractional crystallization was weak in Pobei; thus, the Pd/Ir values were only influenced by the crystallization of silicate minerals: the more complete the magma evolution, the greater the Pd/Ir ratio. The Pd/Ir values of the dunite, the sulfide-bearing lithofacies (hornblende peridotite, wehrlite, and pyroxenite) in the Poyi Cu-Ni deposit, the magnetite-mineralized gabbro, and the massive magnetite are 8.55, 12.18, 12.26, and 18.14, respectively. 
Thus, the massive magnetite was probably the

  4. Development of a data-driven algorithm to determine the W+jets background in t anti t events in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Mehlhase, Sascha

    2010-07-12

    The physics of the top quark is one of the key components in the physics programme of the ATLAS experiment at the Large Hadron Collider at CERN. In this thesis, general studies of the jet trigger performance for top quark events using fully simulated Monte Carlo samples are presented and two data-driven techniques to estimate the multi-jet trigger efficiency and the W+Jets background in top pair events are introduced to the ATLAS experiment. In a tag-and-probe based method, using a simple and common event selection and a high transverse momentum lepton as tag object, the possibility to estimate the multi-jet trigger efficiency from data in ATLAS is investigated and it is shown that the method is capable of estimating the efficiency without introducing any significant bias by the given tag selection. In the second data-driven analysis a new method to estimate the W+Jets background in a top-pair event selection is introduced to ATLAS. By defining signal and background dominated regions by means of the jet multiplicity and the pseudo-rapidity distribution of the lepton in the event, the W+Jets contribution is extrapolated from the background dominated into the signal dominated region. The method is found to estimate the given background contribution as a function of the jet multiplicity with an accuracy of about 25% for most of the top dominated region with an integrated luminosity of above 100 pb⁻¹ at √s = 10 TeV. This thesis also covers a study summarising the thermal behaviour and expected performance of the Pixel Detector of ATLAS. All measurements performed during the commissioning phase of 2008/09 yield results within the specification of the system and the performance is expected to stay within those even after several years of running under LHC conditions. (orig.)
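
The extrapolation idea described above reduces to simple arithmetic once a transfer factor between regions is known. A minimal sketch, with purely illustrative region definitions and event counts (not taken from the thesis):

```python
# Hedged sketch of a control-region extrapolation: a W+jets yield measured in
# a background-dominated region is scaled into the signal-dominated region.
# All numbers below are illustrative, not results from the thesis.

def extrapolate_wjets(n_control_data: int, n_control_other: float,
                      transfer_factor: float) -> float:
    """Estimate the W+jets yield in the signal region.

    n_control_data : events observed in the background-dominated region
    n_control_other: non-W+jets contamination there (e.g. from simulation)
    transfer_factor: N(signal region) / N(control region) for W+jets,
                     e.g. derived from the jet-multiplicity and lepton
                     pseudo-rapidity shapes
    """
    return (n_control_data - n_control_other) * transfer_factor

# 1200 events in the control region, 150 of them non-W+jets, TF = 0.25
estimate = extrapolate_wjets(1200, 150.0, 0.25)
print(estimate)  # 262.5
```

The subtraction removes other processes from the control-region count before the W+jets-specific transfer factor is applied.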

  5. Development of a data-driven algorithm to determine the W+jets background in t anti t events in ATLAS

    International Nuclear Information System (INIS)

    Mehlhase, Sascha

    2010-01-01

    The physics of the top quark is one of the key components in the physics programme of the ATLAS experiment at the Large Hadron Collider at CERN. In this thesis, general studies of the jet trigger performance for top quark events using fully simulated Monte Carlo samples are presented and two data-driven techniques to estimate the multi-jet trigger efficiency and the W+Jets background in top pair events are introduced to the ATLAS experiment. In a tag-and-probe based method, using a simple and common event selection and a high transverse momentum lepton as tag object, the possibility to estimate the multi-jet trigger efficiency from data in ATLAS is investigated and it is shown that the method is capable of estimating the efficiency without introducing any significant bias by the given tag selection. In the second data-driven analysis a new method to estimate the W+Jets background in a top-pair event selection is introduced to ATLAS. By defining signal and background dominated regions by means of the jet multiplicity and the pseudo-rapidity distribution of the lepton in the event, the W+Jets contribution is extrapolated from the background dominated into the signal dominated region. The method is found to estimate the given background contribution as a function of the jet multiplicity with an accuracy of about 25% for most of the top dominated region with an integrated luminosity of above 100 pb⁻¹ at √s = 10 TeV. This thesis also covers a study summarising the thermal behaviour and expected performance of the Pixel Detector of ATLAS. All measurements performed during the commissioning phase of 2008/09 yield results within the specification of the system and the performance is expected to stay within those even after several years of running under LHC conditions. (orig.)

  6. Application of Remote-Sensing Observations for Detecting Patterns of Localization of Cu-Ni Mineralization of the Norilsk Ore Region

    Science.gov (United States)

    Milovsky, G. A.; Ishmukhametova, V. T.; Shemyakina, E. M.

    2017-12-01

    Methods for the complex analysis of space-survey, gravimetric, and magnetometric data were developed on the basis of a study of reference fields of the Norilsk ore region (Imangda, etc.) for detecting patterns of localization of Cu-Ni (with PGM) mineralization in intrusive complexes of the northwestern framing of the Siberian Platform.

  7. Polycrystalline oxides formation during transient oxidation of (001) Cu-Ni binary alloys studied by in situ TEM and XRD

    International Nuclear Information System (INIS)

    Yang, J.C.; Li, Z.Q.; Sun, L.; Zhou, G.W.; Eastman, J.A.; Fong, D.D.; Fuoss, P.H.; Baldo, P.M.; Rehn, L.E.; Thompson, L.J.

    2009-01-01

    The nucleation and growth of Cu2O and NiO islands due to oxidation of CuxNi1-x(001) films were monitored, at various temperatures, by in situ ultra-high vacuum (UHV) transmission electron microscopy (TEM) and in situ synchrotron X-ray diffraction (XRD). In remarkable contrast to our previous observations of Cu and Cu-Au oxidation, irregular-shaped polycrystalline oxide islands formed on the Cu-Ni alloy film, and an unusual second oxide nucleation stage was noted. In situ XRD experiments revealed that NiO formed first epitaxially, then other orientations appeared, and finally polycrystalline Cu2O developed as the oxidation pressure was increased. The segregation of Ni and Cu towards or away from the alloy surface, respectively, during oxidation could disrupt the surface and cause polycrystalline oxide formation.

  8. Mechanical properties and bending strain effect on Cu-Ni sheathed MgB2 superconducting tape

    International Nuclear Information System (INIS)

    Fu, Minyi; Chen, Jiangxing; Jiao, Zhengkuan; Kumakura, H.; Togano, K.; Ding, Liren; Zhang, Yong; Chen, Zhiyou; Han, Hanmin; Chen, Jinglin

    2004-01-01

    The Young's modulus (E) of Cu-Ni sheathed MgB2 monofilament tape was measured using an electric method. It is about 8.05 × 10¹⁰ Pa, of the same order as Cu and its alloys. We found that the lower E value of the MgB2 component seemed to relate to the lower filament density. The benefits of pre-compression in the filaments were discussed in terms of improving the stress distribution in the wires and tapes during winding and operation of superconducting magnets. The magnetic field dependence of Jc was investigated on samples subjected to various strain levels through bending with different radii at 4.2 K.

  9. Assessment of AlSi21CuNi Alloy’s Quality with Use of ATND Method

    Directory of Open Access Journals (Sweden)

    Pezda J.

    2013-12-01

    The majority of combustion engine pistons are produced (cast) from Al-Si alloys with a low thermal expansion coefficient, so-called piston silumins. Hypereutectic alloys normally contain coarse, primary angular Si particles together with the eutectic Si phase. The structure and mechanical properties of these alloys are highly dependent upon cooling rate, composition, modification and heat-treatment operations. The paper describes the use of the ATND method (thermal-voltage-derivative analysis) and regression analysis to assess the quality of the AlSi21CuNi alloy modified with Cu-P at the stage of its preparation, in terms of the obtained mechanical properties (R0.02, Rm, A5, HB). The obtained dependencies enable prediction of the mechanical properties of the investigated alloy in laboratory conditions, using the values of characteristic points from the curves of the ATND method.

  10. CuNi NPs supported on MIL-101 as highly active catalysts for the hydrolysis of ammonia borane

    Science.gov (United States)

    Gao, Doudou; Zhang, Yuhong; Zhou, Liqun; Yang, Kunzhou

    2018-01-01

    Catalysts containing Cu-Ni bimetallic nanoparticles were successfully synthesized by in-situ reduction of Cu2+ and Ni2+ salts within the highly porous and hydrothermally stable metal-organic framework MIL-101 via a simple liquid impregnation method. When the total amount of loaded metal is 3 × 10⁻⁴ mol, the Cu2Ni1@MIL-101 catalyst shows higher catalytic activity compared to CuxNiy@MIL-101 with other molar ratios of Cu and Ni (x, y = 0, 0.5, 1.5, 2, 2.5, 3). The Cu2Ni1@MIL-101 catalyst also has the highest catalytic activity compared to its mono-metallic Cu and Ni counterparts and to pure bimetallic CuNi nanoparticles in the hydrolytic dehydrogenation of ammonia borane (AB) at room temperature. In the hydrolysis reaction, the Cu2Ni1@MIL-101 catalyst exhibits high catalytic activity, with a turnover frequency (TOF) of 20.9 mol H2 min⁻¹ (mol Cu)⁻¹ and a very low activation energy of 32.2 kJ mol⁻¹. The excellent catalytic activity is attributed to strong bimetallic synergistic effects, the uniform distribution of the nanoparticles and the bi-functional effects between the CuNi nanoparticles and the MIL-101 host. Moreover, the catalyst displays satisfactory durability over five cycles of hydrolytic H2 release from AB. Such non-noble-metal catalysts have broad prospects for commercial application in the field of hydrogen-storage materials owing to their low price and excellent catalytic activity.
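
The reported turnover frequency follows from simple arithmetic. A minimal sketch of the calculation; the measured quantities below are hypothetical illustrations, not data from the paper:

```python
# Hedged sketch: computing a turnover frequency (TOF) in the units used in the
# abstract, mol H2 per mol Cu per minute. Input values are illustrative.

def turnover_frequency(mol_h2: float, mol_cu: float, minutes: float) -> float:
    """TOF = moles of H2 released / (moles of Cu catalyst x time)."""
    return mol_h2 / (mol_cu * minutes)

# Example: 2.09e-3 mol H2 released in 0.5 min over 2.0e-4 mol Cu
tof = turnover_frequency(2.09e-3, 2.0e-4, 0.5)
print(round(tof, 1))  # 20.9
```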

  11. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
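
As an illustration of the class of algorithms the book covers, here is a minimal simulated annealing loop minimizing a one-dimensional function; the proposal width and cooling schedule are arbitrary illustrative choices, not taken from the book:

```python
# Hedged sketch of simulated annealing: accept downhill moves always, uphill
# moves with a Boltzmann probability that shrinks as the temperature cools.
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)           # random neighbour proposal
        fc = f(cand)
        # Metropolis acceptance criterion
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                              # geometric cooling schedule
    return best_x, best_f

# Minimize f(x) = (x - 3)^2; the global minimum is at x = 3
x, fx = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-10.0)
print(abs(x - 3.0) < 0.5)
```

The same accept/cool skeleton underlies many of the other algorithms listed above, with the proposal step replaced by swarm- or population-based moves.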

  12. Background Material

    DEFF Research Database (Denmark)

    Zandersen, Marianne; Hyytiäinen, Kari; Saraiva, Sofia

    This document serves as a background material to the BONUS Pilot Scenario Workshop, which aims to develop harmonised regional storylines of socio-ecological futures in the Baltic Sea region in a collaborative effort together with other BONUS projects and stakeholders.

  13. Molecular Dynamics Simulation of the CuNi Alloy Using the Embedded Atom Potential

    Directory of Open Access Journals (Sweden)

    Eşe Ergün AKPINAR

    2009-04-01

    In this study, the molecular dynamics simulation of the CuNi alloy was investigated using the Sutton-Chen (SC) potential. This potential was obtained by fitting the function parameters to experimental data for Cu, Ni and CuNi. To describe the crystallization process of the CuNi alloy atomistically, a constant-pressure, constant-temperature (NPT) molecular dynamics simulation based on the embedded atom method was applied. The structure and crystallization ability of the CuNi alloy, cooled from the liquid phase at a cooling rate of 4 × 10¹¹ K/s, were examined via the radial distribution function. The simulation was performed on a system of 1024 atoms in a cubic cell with periodic boundary conditions along the three principal directions. The equations of motion were solved numerically using the Verlet algorithm. For the cooling experiment, the initial liquid state was obtained by heating the solid to the liquid temperature. The system was melted and homogenized at a temperature above the 1300-1550 K melting region and then rapidly cooled to room temperature.
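
The Verlet integration mentioned in the abstract can be sketched in its velocity-Verlet form. The example below integrates a single harmonic oscillator rather than the paper's 1024-atom Sutton-Chen system, and is only an illustration of the scheme:

```python
# Hedged sketch of velocity Verlet integration for dx/dt = v, dv/dt = a(x).

def velocity_verlet(x, v, accel, dt, steps):
    """Advance (x, v) by `steps` timesteps of size `dt`."""
    a = accel(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt      # position update
        a_new = accel(x)                     # forces at the new position
        v += 0.5 * (a + a_new) * dt          # velocity update (averaged accel)
        a = a_new
    return x, v

# Harmonic oscillator a(x) = -x: the energy E = (x^2 + v^2)/2 should stay
# close to its initial value 0.5, since Verlet schemes are symplectic.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=10000)
print(abs(0.5 * (x * x + v * v) - 0.5) < 1e-3)
```

This long-time energy stability is the reason Verlet-family integrators are the default choice in molecular dynamics.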

  14. The Effect of Modulation Ratio of Cu/Ni Multilayer Films on the Fretting Damage Behaviour of Ti-811 Titanium Alloy.

    Science.gov (United States)

    Zhang, Xiaohua; Liu, Daoxin; Li, Xiaoying; Dong, Hanshan; Xi, Yuntao

    2017-05-26

    To improve the fretting damage (fretting wear and fretting fatigue) resistance of Ti-811 titanium alloy, three Cu/Ni multilayer films with the same modulation period thickness (200 nm) and different modulation ratios (3:1, 1:1, 1:3) were deposited on the surface of the alloy via ion-assisted magnetron sputtering deposition (IAD). The bonding strength, micro-hardness, and toughness of the films were evaluated, and the effect of the modulation ratio on the room-temperature fretting wear (FW) and fretting fatigue (FF) resistance of the alloy was determined. The results indicated that the IAD technique can be successfully used to prepare Cu/Ni multilayer films with high bonding strength, low friction, and good toughness, which improve the room-temperature FF and FW resistance of the alloy. For the same modulation period (200 nm), the micro-hardness, friction, and FW resistance of the coated alloy respectively increased, decreased, and improved with increasing modulation ratio of the Ni-to-Cu layer thickness. However, the FF resistance of the coated alloy increased non-monotonically with increasing modulation ratio. Among the three Cu/Ni multilayer films, those with a modulation ratio of 1:1 confer the highest FF resistance on the Ti-811 alloy, owing mainly to their combination of good toughness, high strength, and low friction.

  15. Background subtraction theory and practice

    CERN Document Server

    Elgammal, Ahmed

    2014-01-01

    Background subtraction is a widely used concept for detection of moving objects in videos. In the last two decades there has been a lot of development in designing algorithms for background subtraction, as well as wide use of these algorithms in various important applications, such as visual surveillance, sports video analysis, motion capture, etc. Various statistical approaches have been proposed to model scene backgrounds. The concept of background subtraction also has been extended to detect objects from videos captured from moving cameras. This book reviews the concept and practice of back
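
A minimal sketch of the simplest scheme in this family, a per-pixel running-average background model with thresholding; this is a generic illustration, not an algorithm taken from the book:

```python
# Hedged sketch of running-average background subtraction. Frames are plain
# lists of grey levels here; real implementations operate on image arrays.

def update_background(bg, frame, alpha=0.1):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=20):
    """A pixel is foreground if it deviates enough from the background model."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

# Static scene of grey level 50; an object of level 200 enters at pixel 2
bg = [50.0] * 5
for _ in range(20):                        # learn the empty scene
    bg = update_background(bg, [50.0] * 5)
frame = [50.0, 50.0, 200.0, 50.0, 50.0]
print(foreground_mask(bg, frame))          # only pixel 2 is flagged
```

The statistical approaches mentioned above (e.g. mixture-of-Gaussians models) replace the single running mean with a richer per-pixel distribution.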

  16. Background radiation

    International Nuclear Information System (INIS)

    Arnott, D.

    1985-01-01

    The effects of background radiation, whether natural or caused by man's activities, are discussed. The known biological effects of radiation in causing cancers or genetic mutations are explained. The statement that there is a threshold below which there is no risk is examined critically. (U.K.)

  17. A study of the composition and microstructure of nanodispersed Cu-Ni alloys obtained by different routes from copper and nickel oxides

    Energy Technology Data Exchange (ETDEWEB)

    Cangiano, Maria de los A; Ojeda, Manuel W., E-mail: mojeda@unsl.edu.ar; Carreras, Alejo C.; Gonzalez, Jorge A.; Ruiz, Maria del C

    2010-11-15

    Mixtures of CuO and NiO were prepared by two different techniques, and then the oxides were reduced with H2. Method A involved the preparation of mechanical mixtures of CuO and NiO using different milling and pelletizing processes. Method B involved the chemical synthesis of the mixture of CuO and NiO. The route used to prepare the copper and nickel oxide mixture was found to have great influence on the characteristics of bimetallic Cu-Ni particles obtained. Observations performed using the X-ray diffraction (XRD) technique showed that although both methods led to the Cu-Ni solid solution, the diffractogram of the alloy obtained with method A revealed the presence of NiO together with the alloy. The temperature-programmed reduction (TPR) experiments indicated that the alloy is formed at lower temperatures when using method B. The scanning electron microscopy (SEM) studies revealed notable differences in the morphology and size distribution of the bimetallic particles synthesized by different routes. The results of the electron probe microanalysis (EPMA) studies evidenced the existence of a small amount of oxygen in both cases and demonstrated that the alloy synthesized using method B presented a homogeneous composition with a Cu-Ni ratio close to 1:1. On the contrary, the alloy obtained using method A was not homogeneous in all the volume of the solid. The homogeneity depended on the mechanical treatment undergone by the mixture of the oxides. - Research Highlights: → Study of the properties of Cu-Ni alloys synthesized by two different routes. → Mixtures of Cu and Ni oxides prepared by two techniques were reduced with H2. → Mixtures of oxides were obtained by a mechanical process and the citrate-gel route. → The characterizations were carried out by TPR, XRD, SEM and EPMA. → The route used to prepare oxide mixtures influences the Cu-Ni alloy obtained.

  18. A study of the composition and microstructure of nanodispersed Cu-Ni alloys obtained by different routes from copper and nickel oxides

    International Nuclear Information System (INIS)

    Cangiano, Maria de los A; Ojeda, Manuel W.; Carreras, Alejo C.; Gonzalez, Jorge A.; Ruiz, Maria del C

    2010-01-01

    Mixtures of CuO and NiO were prepared by two different techniques, and then the oxides were reduced with H2. Method A involved the preparation of mechanical mixtures of CuO and NiO using different milling and pelletizing processes. Method B involved the chemical synthesis of the mixture of CuO and NiO. The route used to prepare the copper and nickel oxide mixture was found to have great influence on the characteristics of bimetallic Cu-Ni particles obtained. Observations performed using the X-ray diffraction (XRD) technique showed that although both methods led to the Cu-Ni solid solution, the diffractogram of the alloy obtained with method A revealed the presence of NiO together with the alloy. The temperature-programmed reduction (TPR) experiments indicated that the alloy is formed at lower temperatures when using method B. The scanning electron microscopy (SEM) studies revealed notable differences in the morphology and size distribution of the bimetallic particles synthesized by different routes. The results of the electron probe microanalysis (EPMA) studies evidenced the existence of a small amount of oxygen in both cases and demonstrated that the alloy synthesized using method B presented a homogeneous composition with a Cu-Ni ratio close to 1:1. On the contrary, the alloy obtained using method A was not homogeneous in all the volume of the solid. The homogeneity depended on the mechanical treatment undergone by the mixture of the oxides. - Research Highlights: → Study of the properties of Cu-Ni alloys synthesized by two different routes. → Mixtures of Cu and Ni oxides prepared by two techniques were reduced with H2. → Mixtures of oxides were obtained by a mechanical process and the citrate-gel route. → The characterizations were carried out by TPR, XRD, SEM and EPMA. → The route used to prepare oxide mixtures influences the Cu-Ni alloy obtained.

  19. Automatic development of normal zone in composite MgB2/CuNi wires with different diameters

    Science.gov (United States)

    Jokinen, A.; Kajikawa, K.; Takahashi, M.; Okada, M.

    2010-06-01

    One of the promising applications of superconducting technology for hydrogen utilization is a sensor with a magnesium diboride (MgB2) superconductor to detect the position of the boundary between liquid hydrogen and the evaporated gas stored in a Dewar vessel. In our previous experiments with the level sensor, the normal zone developed automatically, and therefore no energy input from the heater was required for normal operation. Although the physical mechanism behind this property of the MgB2 wire has not yet been clarified, its deliberate application might lead to the realization of a simpler superconducting level sensor without a heater system. In the present study, the automatic development of the normal zone with increasing transport current is evaluated for samples consisting of three kinds of MgB2 wires with a CuNi sheath and different diameters, immersed in liquid helium. The influence of repeated current excitation and heat cycles on the normal zone development is discussed experimentally. The aim of this paper is to confirm the suitability of MgB2 wire for a heater-free level sensor application. This could lead to an even more optimized design of the liquid hydrogen level sensor and the removal of the extra heater input.

  20. The influence of the marine aerobic Pseudomonas strain on the corrosion of 70/30 Cu-Ni alloy

    International Nuclear Information System (INIS)

    Yuan, S.J.; Choong, Amy M.F.; Pehkonen, S.O.

    2007-01-01

    A comparative study of the corrosion behavior of the 70/30 Cu-Ni alloy in a nutrient-rich simulated seawater-based medium in the presence and absence of a marine aerobic Pseudomonas bacterium was carried out by electrochemical experiments, microscopic methods and X-ray photoelectron spectroscopy (XPS). The results of Tafel plot measurements showed a noticeable increase in the corrosion rate of the alloy in the presence of the Pseudomonas bacteria as compared to the corresponding control samples. The EIS data demonstrated that the charge transfer resistance, Rct, and the resistance of the oxide film, Rf, gradually increased with time in the abiotic medium, whereas both dramatically decreased with time in the biotic medium inoculated with Pseudomonas, indicative of accelerated corrosion of the alloy. The bacterial cells preferentially attached themselves to the alloy surface to form patchy or blotchy biofilms, as observed by fluorescence microscopy (FM). Scanning electron microscopy (SEM) images revealed the occurrence of micro-pitting corrosion underneath the biofilms on the alloy surface after biofilm removal. XPS studies traced the evolution of the passive film on the alloy surface with time in the presence and absence of the Pseudomonas bacteria under the experimental conditions, and further revealed that the presence of the Pseudomonas cells and their extracellular polymers (EPS) on the alloy surface retarded the formation or impaired the protective nature of the oxide film. Furthermore, the XPS results verified the difference in chelating functional groups between the conditioning layers and the bacterial cells and EPS in the biofilms, which was believed to be connected with the loss of passivity of the protective oxide film.

  1. A dilute Cu(Ni) alloy for synthesis of large-area Bernal stacked bilayer graphene using atmospheric pressure chemical vapour deposition

    Energy Technology Data Exchange (ETDEWEB)

    Madito, M. J.; Bello, A.; Dangbegnon, J. K.; Momodu, D. Y.; Masikhwa, T. M.; Barzegar, F.; Manyala, N., E-mail: ncholu.manyala@up.ac.za [Department of Physics, Institute of Applied Materials, SARCHI Chair in Carbon Technology and Materials, University of Pretoria, Pretoria 0028 (South Africa); Oliphant, C. J.; Jordaan, W. A. [National Metrology Institute of South Africa, Private Bag X34, Lynwood Ridge, Pretoria 0040 (South Africa); Fabiane, M. [Department of Physics, Institute of Applied Materials, SARCHI Chair in Carbon Technology and Materials, University of Pretoria, Pretoria 0028 (South Africa); Department of Physics, National University of Lesotho, P.O. Roma 180 (Lesotho)

    2016-01-07

    A bilayer graphene film obtained on copper (Cu) foil is known to have a significant fraction of non-Bernal (AB) stacking and on copper/nickel (Cu/Ni) thin films is known to grow over a large-area with AB stacking. In this study, annealed Cu foils for graphene growth were doped with small concentrations of Ni to obtain dilute Cu(Ni) alloys in which the hydrocarbon decomposition rate of Cu will be enhanced by Ni during synthesis of large-area AB-stacked bilayer graphene using atmospheric pressure chemical vapour deposition. The Ni doped concentration and the Ni homogeneous distribution in Cu foil were confirmed with inductively coupled plasma optical emission spectrometry and proton-induced X-ray emission. An electron backscatter diffraction map showed that Cu foils have a single (001) surface orientation which leads to a uniform growth rate on Cu surface in early stages of graphene growth and also leads to a uniform Ni surface concentration distribution through segregation kinetics. The increase in Ni surface concentration in foils was investigated with time-of-flight secondary ion mass spectrometry. The quality of graphene, the number of graphene layers, and the layers stacking order in synthesized bilayer graphene films were confirmed by Raman and electron diffraction measurements. A four point probe station was used to measure the sheet resistance of graphene films. As compared to Cu foil, the prepared dilute Cu(Ni) alloy demonstrated the good capability of growing large-area AB-stacked bilayer graphene film by increasing Ni content in Cu surface layer.

  2. Microstructure, thickness and sheet resistivity of Cu/Ni thin film produced by electroplating technique on the variation of electrolyte temperature

    Science.gov (United States)

    Toifur, M.; Yuningsih, Y.; Khusnani, A.

    2018-03-01

    In this research, Cu/Ni thin films were produced by an electroplating technique. The deposition was done in a plating bath using Cu as the cathode and Ni as the anode. The electrolyte solution was made from a mixture of HBrO3 (7.5 g), NiSO4 (100 g), NiCl2 (15 g), and distilled water (250 ml). The electrolyte temperature was varied from 40 °C up to 80 °C to make the Ni ions in the solution move more easily to the Cu cathode. Deposition was carried out for 2 minutes at a potential of 1.5 V. Several characterizations were performed, covering the thickness of the Ni film, the microstructure, and the sheet resistivity. The results showed that in all samples Ni had deposited on the Cu substrate to form Cu/Ni, and that the Ni thickness increases with increasing electrolyte temperature. The EDS spectra indicate that the samples contain the elements Ni and Cu and the compounds NiO and CuO; an additional element and compound, Pt and PtO2, are found in the Cu/Ni sample deposited at an electrolyte temperature of 70 °C. The XRD patterns show several crystalline phases, i.e. Cu, Ni, and NiO, while CuO and PtO2 are amorphous. The sheet resistivity decreases linearly with increasing electrolyte temperature.
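
Sheet resistivity of thin films like these is commonly obtained from a collinear four-point probe. A minimal sketch using the standard geometric-factor formula; the measurement geometry is an assumption on our part, since the abstract does not state it, and the voltage/current values are illustrative:

```python
# Hedged sketch: sheet resistance from a collinear four-point-probe reading,
# Rs = (pi / ln 2) * V / I, valid when the film's lateral size is much larger
# than the probe spacing. Numbers below are illustrative, not measured data.
import math

def sheet_resistance(voltage_v: float, current_a: float) -> float:
    """Sheet resistance in ohms per square."""
    return (math.pi / math.log(2)) * voltage_v / current_a

def resistivity(rs_ohm_sq: float, thickness_m: float) -> float:
    """Bulk resistivity rho = Rs * t for a film of thickness t."""
    return rs_ohm_sq * thickness_m

rs = sheet_resistance(1.0e-3, 1.0e-3)   # 1 mV drop at 1 mA
print(round(rs, 3))                      # pi/ln 2 ~ 4.532 ohm/sq
```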

  3. The CUNY Energy Institute Electrical Energy Storage Development for Grid Applications

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Sanjoy

    2013-03-31

    1. Project Objectives: The objectives of the project are to elucidate science issues intrinsic to high-energy-density electricity storage (battery) systems for smart-grid applications, to research improvements in such systems to enable scale-up to grid scale, and to demonstrate a large 200 kWh battery to facilitate transfer of the technology to industry. 2. Background: Complex and difficult-to-control interfacial phenomena are intrinsic to high-energy-density electrical energy storage systems, since they are typically operated far from equilibrium. One example of such phenomena is the formation of dendrites. Such dendrites occur on battery electrodes as they cycle, and can lead to internal short circuits, reducing cycle life. An improved understanding of the formation of dendrites and their control can improve the cycle life and safety of many energy storage systems, including rechargeable lithium and zinc batteries. Another area where improved understanding is desirable is the application of ionic liquids as electrolytes in energy storage systems. An ionic liquid is typically thought of as a material that is fully ionized (consisting only of anions and cations) and is fluid at or near room temperature. Features of ionic liquids include generally high thermal stability (up to 450 °C), a wide electrochemical window (up to 6 V) and relatively high intrinsic conductivity. Such features make them attractive as battery or capacitor electrolytes, and may enable batteries that are safer (due to the good thermal stability) and of much higher energy density (due to the higher-voltage electrode materials that may be employed) than state-of-the-art secondary (rechargeable) batteries. Of particular interest is the use of such liquids as electrolytes in metal-air batteries, where energy densities on the order of 1-2,000 Wh/kg are possible; this is 5-10 times that of existing state-of-the-art lithium battery technology. The Energy Institute has been engaged in the

  4. Morphology, optical and electrical properties of Cu-Ni nanoparticles in a-C:H prepared by co-deposition of RF-sputtering and RF-PECVD

    Energy Technology Data Exchange (ETDEWEB)

    Ghodselahi, T., E-mail: ghodselahi@ipm.ir [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Vesaghi, M.A. [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Department of Physics, Sharif University of Technology, P.O. Box 11365-9161, Tehran (Iran, Islamic Republic of); Gelali, A.; Zahrabi, H.; Solaymani, S. [Young Researchers Club, Islamic Azad University, Kermanshah Branch, Kermanshah (Iran, Islamic Republic of)

    2011-11-01

    We report the optical and electrical properties of Cu-Ni nanoparticles in hydrogenated amorphous carbon (Cu-Ni NPs - a-C:H) with different surface morphologies. Ni NPs with layer thicknesses of 5, 10 and 15 nm over Cu NPs - a-C:H were prepared by co-deposition of RF-sputtering and RF-Plasma Enhanced Chemical Vapor Deposition (RF-PECVD) from acetylene gas and Cu and Ni targets. A nonmetal-metal transition was observed as the thickness of the Ni overlayer increases. The surface morphology of the samples was described by a two-dimensional (2D) Gaussian self-affine fractal, except for the sample with a 10 nm Ni overlayer, which is in the nonmetal-metal transition region. The X-ray diffraction profile indicates that Cu NPs and Ni NPs with fcc crystalline structure are formed in these films. The Localized Surface Plasmon Resonance (LSPR) peak of Cu NPs is observed around 600 nm in the visible spectrum; it widens and shifts to lower wavelengths as the thickness of the Ni overlayer increases. The variation of the LSPR peak width correlates with the conductivity variation of these bilayers. We assign both effects to surface electron delocalization of the Cu NPs.

  5. Instructional Modules for Training Special Education Teachers: A Final Report on the Development and Field Testing of the CUNY-CBTEP Special Education Modules. Case 30-76. Toward Competence Instructional Materials for Teacher Education.

    Science.gov (United States)

    City Univ. of New York, NY. Center for Advanced Study in Education.

    The City University of New York Competency Based Teacher Education Project (CUNY-CBTEP) in Special Education studied Modularization, focusing on the variables in the instructional setting that facilitate learning from modular materials for a wide range of students. Four of the five modules for the training of special education teachers developed…

  6. The Optical Properties of Cu-Ni Nanoparticles Produced via Pulsed Laser Dewetting of Ultrathin Films: The Effect of Nanoparticle Size and Composition on the Plasmon Response

    International Nuclear Information System (INIS)

    Wu, Yeuyeng; Fowlkes, Jason Davidson; Rack, Philip D.

    2011-01-01

    Thin film Cu-Ni alloys with thicknesses ranging from 2 to 8 nm were synthesized, and their optical properties were measured as-deposited and after a laser treatment that dewetted the films into arrays of spatially correlated nanoparticles. The resultant nanoparticle size and spacing are attributed to a laser-induced spinodal dewetting process. The evolution of the spinodal dewetting process is investigated as a function of the thin film composition, which ultimately dictates the size distribution and spacing of the nanoparticles. The optical measurements of the copper-rich alloy nanoparticles reveal a signature absorption peak, suggestive of a plasmonic resonance, which red-shifts with increasing nanoparticle size and blue-shifts and dampens with increasing nickel concentration.

  7. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  8. Generative electronic background music system

    Energy Technology Data Exchange (ETDEWEB)

    Mazurowski, Lukasz [Faculty of Computer Science, West Pomeranian University of Technology in Szczecin, Zolnierska Street 49, Szczecin, PL (Poland)

    2015-03-10

    In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is located between other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described further on. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, a host-model. The general mechanism is presented, including an example of the synthesized output compositions.

  9. Generative electronic background music system

    International Nuclear Information System (INIS)

    Mazurowski, Lukasz

    2015-01-01

    In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is located between other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described further on. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, a host-model. The general mechanism is presented, including an example of the synthesized output compositions.

  10. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm that copes with increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The ultimate expression for the information statistic is adjusted. We present the results of algorithm research and t...

  11. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  12. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  13. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
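
    The basic loop the record describes (selection, crossover, mutation against a fitness function) can be sketched in a few lines. This is an illustrative Python sketch, not the software tool the record mentions; the bit-string encoding, tournament selection, and all parameter values are assumptions chosen for the demo.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def select():
        # Tournament selection: the fitter of two random individuals wins.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:      # single-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                     # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)      # track the best ever seen
    return best

# Toy objective ("one-max"): maximize the number of 1-bits.
solution = genetic_algorithm(fitness=sum)
```

    The "highly parallel" character noted in the abstract comes from the population: every individual can be evaluated independently, so the fitness calls parallelize trivially.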

  14. Gas leak detection in infrared video with background modeling

    Science.gov (United States)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, the processing speed of the VIBE algorithm sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and optimize the results by combining the connected-component algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
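
    The sample-based background model at the heart of VIBE-style detection can be sketched as follows. This is a simplified NumPy illustration of the general idea, not the authors' algorithm or code: the sample count, matching radius, and update probability are invented parameters, and the detected "gas plume" is a synthetic bright block.

```python
import numpy as np

class SampleBackgroundModel:
    """Simplified VIBE-style model: each pixel keeps N past samples, and a new
    pixel value is background if it lies within `radius` of at least
    `min_matches` of those samples."""

    def __init__(self, first_frame, n_samples=10, radius=20, min_matches=2, seed=0):
        self.rng = np.random.default_rng(seed)
        f = first_frame.astype(np.int16)
        # Bootstrap all samples from the first frame plus small noise.
        self.samples = np.stack([f + self.rng.integers(-5, 6, f.shape)
                                 for _ in range(n_samples)])
        self.radius, self.min_matches = radius, min_matches

    def apply(self, frame):
        f = frame.astype(np.int16)
        matches = (np.abs(self.samples - f) <= self.radius).sum(axis=0)
        foreground = matches < self.min_matches
        # Conservative update: background pixels randomly refresh one sample.
        idx = self.rng.integers(0, len(self.samples), f.shape)
        for k in range(len(self.samples)):
            upd = (~foreground) & (idx == k) & (self.rng.random(f.shape) < 1 / 16)
            self.samples[k][upd] = f[upd]
        return foreground

# Synthetic demo: a flat infrared background with one bright intruding block.
bg = np.full((40, 40), 100, dtype=np.uint8)
model = SampleBackgroundModel(bg)
frame = bg.copy()
frame[10:20, 10:20] = 200          # the "plume"
mask = model.apply(frame)
```

    A connected-component pass over `mask` (as the record suggests) would then discard isolated foreground pixels and keep only blob-like detections.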

  15. A synergistic effort among geoscience, physics, computer science and mathematics at Hunter College of CUNY as a Catalyst for educating Earth scientists.

    Science.gov (United States)

    Salmun, H.; Buonaiuto, F. S.

    2016-12-01

    The Catalyst Scholarship Program at Hunter College of The City University of New York (CUNY) was established with a four-year award from the National Science Foundation (NSF) to fund scholarships for academically talented but financially disadvantaged students majoring in four disciplines of science, technology, engineering and mathematics (STEM). Led by Earth scientists, the Program awarded scholarships to students in their junior or senior years majoring in computer science, geosciences, mathematics and physics to create two cohorts of students that spent a total of four semesters in an interdisciplinary community. The program included mentoring of undergraduate students by faculty and graduate students (peer-mentoring), a sequence of three semesters of a one-credit seminar course, and opportunities to engage in research activities, research seminars and other enriching academic experiences. Faculty and peer-mentoring were integrated into all parts of the scholarship activities. The one-credit seminar course, although designed to expose scholars to the diversity of STEM disciplines and to highlight research options and careers in these disciplines, was thematically focused on geoscience, specifically on ocean and atmospheric science. The program resulted in increased retention rates relative to institutional averages. In this presentation we will discuss the process of establishing the program, from the original plans to its implementation, as well as the impact of this multidisciplinary approach to geoscience education at our institution and beyond. An overview of accomplishments, lessons learned and potential for best practices will be presented.

  16. Surface functionalization of Cu-Ni alloys via grafting of a bactericidal polymer for inhibiting biocorrosion by Desulfovibrio desulfuricans in anaerobic seawater.

    Science.gov (United States)

    Yuan, S J; Liu, C K; Pehkonen, S O; Bai, R B; Neoh, K G; Ting, Y P; Kang, E T

    2009-01-01

    A novel surface modification technique was developed to provide a copper nickel alloy (M) surface with bactericidal and anticorrosion properties for inhibiting biocorrosion. 4-(chloromethyl)-phenyl trichlorosilane (CTS) was first coupled to the hydroxylated alloy surface to form a compact silane layer, as well as to confer the surface with chloromethyl functional groups. The latter allowed the coupling of 4-vinylpyridine (4VP) to generate the M-CTS-4VP surface with biocidal functionality. Subsequent surface graft polymerization of 4VP, in the presence of benzoyl peroxide (BPO) initiator, from the M-CTS-4VP surface produced the poly(4-vinylpyridine) (P(4VP)) grafted surface, or the M-CTS-P(4VP) surface. The pyridine nitrogen moieties on the M-CTS-P(4VP) surface were quaternized with hexylbromide to produce a high concentration of quaternary ammonium groups. Each surface functionalization step was ascertained by X-ray photoelectron spectroscopy (XPS) and static water contact angle measurements. The alloy with surface-quaternized pyridinium cation groups (N+) exhibited good bactericidal efficiency in a Desulfovibrio desulfuricans-inoculated seawater-based modified Baar's medium, as indicated by viable cell counts and fluorescence microscopy (FM) images of the surface. The anticorrosion capability of the organic layers was verified by polarization curve and electrochemical impedance spectroscopy (EIS) measurements. In comparison, the pristine (surface hydroxylated) Cu-Ni alloy was found to be readily susceptible to biocorrosion under the same environment.

  17. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
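
    Two of the fundamental algorithms the book treats, the Euclidean algorithm and the sieve of Eratosthenes, are compact enough to sketch here. These are plain Python sketches for orientation, not the book's C++ implementations.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def primes_up_to(n: int) -> list:
    """Sieve of Eratosthenes: cross out the multiples of each prime."""
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):   # start at p*p: smaller multiples done
                is_prime[m] = False
    return [i for i, ok in enumerate(is_prime) if ok]
```
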

  18. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  19. Background sources at PEP

    International Nuclear Information System (INIS)

    Lynch, H.; Schwitters, R.F.; Toner, W.T.

    1988-01-01

    Important sources of background for PEP experiments are studied. Background particles originate from high-energy electrons and positrons that have been lost from stable orbits, from γ-rays emitted by the primary beams through bremsstrahlung in the residual gas, and from synchrotron radiation x-rays. The effects of these processes on the beam lifetime are calculated, and estimates of background rates at the interaction region are given. Recommendations for the PEP design, aimed at minimizing background, are presented. 7 figs., 4 tabs

  20. Cosmic Microwave Background Timeline

    Science.gov (United States)

    Cosmic Microwave Background Timeline. 1934: Richard Tolman shows that blackbody radiation in an expanding universe cools but remains blackbody ... will have a blackbody cosmic microwave background with temperature about 5 K. 1955: Tigran Shmaonov ... anisotropy in the cosmic microwave background; this strongly supports the big bang model with gravitational ...

  1. Low Background Micromegas in CAST

    CERN Document Server

    Garza, J.G.; Aznar, F.; Calvet, D.; Castel, J.F.; Christensen, F.E.; Dafni, T.; Davenport, M.; Decker, T.; Ferrer-Ribas, E.; Galán, J.; García, J.A.; Giomataris, I.; Hill, R.M.; Iguaz, F.J.; Irastorza, I.G.; Jakobsen, A.C.; Jourde, D.; Mirallas, H.; Ortega, I.; Papaevangelou, T.; Pivovaroff, M.J.; Ruz, J.; Tomás, A.; Vafeiadis, T.; Vogel, J.K.

    2015-11-16

    Solar axions could be converted into x-rays inside the strong magnetic field of an axion helioscope, triggering the detection of this elusive particle. Low background x-ray detectors are an essential component for the sensitivity of these searches. We report on the latest developments of the Micromegas detectors for the CERN Axion Solar Telescope (CAST), including technological pathfinder activities for the future International Axion Observatory (IAXO). The use of low background techniques and the application of discrimination algorithms based on the high granularity of the readout have led to background levels below 10$^{-6}$ counts/keV/cm$^2$/s, more than a factor 100 lower than the first generation of Micromegas detectors. The best levels achieved at the Canfranc Underground Laboratory (LSC) are as low as 10$^{-7}$ counts/keV/cm$^2$/s, showing good prospects for the application of this technology in IAXO. The current background model, based on underground and surface measurements, is presented, as well as ...

  2. Optimal background matching camouflage.

    Science.gov (United States)

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.
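
    The paper's central prediction, that the most probable background sample (in the statistical sense) is on average the most cryptic, can be illustrated with a toy density model. The sketch below is a stand-in for the authors' analysis, with its own simplifying assumptions: patch mean and standard deviation as the only "low-level vision" features, a single Gaussian fit over those features, and a synthetic grayscale background.

```python
import numpy as np

def most_probable_patch(background, patch=5):
    """Score every patch by its likelihood under a Gaussian fitted to simple
    patch features (mean, std) and return the highest-scoring patch."""
    h, w = background.shape
    coords = [(i, j) for i in range(h - patch + 1) for j in range(w - patch + 1)]
    feats = np.array([[background[i:i + patch, j:j + patch].mean(),
                       background[i:i + patch, j:j + patch].std()]
                      for i, j in coords])
    mu = feats.mean(axis=0)
    cov = np.cov(feats.T) + 1e-6 * np.eye(2)    # regularize for invertibility
    inv = np.linalg.inv(cov)
    d = feats - mu
    # Log-density up to a constant: negative half squared Mahalanobis distance.
    log_score = -0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
    i, j = coords[int(np.argmax(log_score))]
    return background[i:i + patch, j:j + patch]

# Noisy synthetic background; the chosen patch should be "typical" of it.
bg = np.random.default_rng(1).normal(128.0, 10.0, size=(30, 30))
camo = most_probable_patch(bg)
```

    On a heterogeneous background the winning patch is the one whose statistics sit at the mode of the patch distribution, which is exactly the "most probable sample" the paper argues is the best single camouflage.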

  3. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement le...

  4. Polarization of Cosmic Microwave Background

    International Nuclear Information System (INIS)

    Buzzelli, A; Cabella, P; De Gasperis, G; Vittorio, N

    2016-01-01

    In this work we present an extension of the ROMA map-making code for data analysis of Cosmic Microwave Background polarization, with particular attention given to the inflationary polarization B-modes. The new algorithm takes into account a possible cross-correlated noise component among the different detectors of a CMB experiment. We tested the code on the observational data of the BOOMERanG (2003) experiment and we show that it provides a better estimate of the power spectra; in particular, the error bars of the BB spectrum are smaller by up to 20% for low multipoles. We point out the general validity of the new method. A possible future application is the LSPE balloon experiment, devoted to the observation of polarization at large angular scales. (paper)

  5. Cosmic microwave background radiation

    International Nuclear Information System (INIS)

    Wilson, R.W.

    1979-01-01

    The 20-ft horn-reflector antenna at Bell Laboratories is discussed in detail with emphasis on the 7.35 cm radiometer. The circumstances leading to the detection of the cosmic microwave background radiation are explored

  6. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A time-sharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrent formula for the characteristic is derived. The concept of the background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during time-sharing operation. The algorithm includes an optimal swap-out procedure for job replacement in memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). A fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. Experience with implementing the algorithm on the BESM-6 computer at JINR is discussed.
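
    Setting aside the dynamic priorities and memory management the abstract describes, its core idea, that a background job loads the processor only when interactive jobs leave it idle, can be sketched as a toy simulation. The job names, CPU needs, and quantum below are invented for illustration; this is not the BESM-6 algorithm itself.

```python
from collections import deque

def timeshare(interactive, background_need, quantum=2, horizon=12):
    """Minimal time-sharing loop: interactive jobs are served round-robin in
    fixed quanta; the background job gets the CPU only in otherwise idle time."""
    queue = deque(interactive.items())   # (name, remaining CPU need)
    trace, t = [], 0
    while t < horizon:
        if queue:
            name, need = queue.popleft()
            run = min(quantum, need, horizon - t)
            trace += [name] * run
            t += run
            if need - run > 0:           # unfinished: back to the end of the queue
                queue.append((name, need - run))
        elif background_need > 0:        # idle CPU: the background job loads it
            trace.append('bg')
            t += 1
            background_need -= 1
        else:
            break
    return trace

# Two interactive jobs; the background job fills the slack once they finish.
timeline = timeshare({'A': 3, 'B': 2}, background_need=4, horizon=10)
```
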

  7. Zambia Country Background Report

    DEFF Research Database (Denmark)

    Hampwaye, Godfrey; Jeppesen, Søren; Kragelund, Peter

    This paper provides background data and general information for the Zambia studies focusing on the local food processing sub-sector and on local suppliers to the mines, as part of the SAFIC project (Successful African Firms and Institutional Change).

  8. The natural radiation background

    International Nuclear Information System (INIS)

    Duggleby, J.C.

    1982-01-01

    The components of the natural background radiation and their variations are described. Cosmic radiation is a major contributor to the external dose to the human body whilst naturally-occurring radionuclides of primordial and cosmogenic origin contribute to both the external and internal doses, with the primordial radionuclides being the major contributor in both cases. Man has continually modified the radiation dose to which he has been subjected. The two traditional methods of measuring background radiation, ionisation chamber measurements and scintillation counting, are looked at and the prospect of using thermoluminescent dosimetry is considered

  9. Effects of background radiation

    International Nuclear Information System (INIS)

    Knox, E.G.; Stewart, A.M.; Gilman, E.A.; Kneale, G.W.

    1987-01-01

    The primary objective of this investigation is to measure the relationship between exposure to different levels of background gamma radiation in different parts of the country and different relative risks for leukaemias and cancers in children. The investigation is linked to an earlier analysis of the effects of prenatal medical x-rays upon leukaemia and cancer risk; the prior hypothesis on which the background study is based is derived from the earlier results. In a third analysis, the authors attempted to measure the varying potency of medical x-rays delivered at different stages of gestation, and the results supply a link between the other two estimates. (author)

  10. The cosmic microwave background

    International Nuclear Information System (INIS)

    Silk, J.

    1991-01-01

    Recent limits on spectral distortions and angular anisotropies in the cosmic microwave background are reviewed. The various backgrounds are described, and the theoretical implications are assessed. Constraints on inflationary cosmology dominated by cold dark matter (CDM) and on open cosmological models dominated by baryonic dark matter (BDM), with, respectively, primordial random-phase scale-invariant curvature fluctuations or non-gaussian isocurvature fluctuations, are described. More exotic theories are addressed, and I conclude with the 'bottom line': what theorists expect experimentalists to be measuring within the next two to three years without having to abandon their most cherished theories. (orig.)

  11. The Cosmic Background Explorer

    Science.gov (United States)

    Gulkis, Samuel; Lubin, Philip M.; Meyer, Stephan S.; Silverberg, Robert F.

    1990-01-01

    The Cosmic Background Explorer (CBE), NASA's cosmological satellite which will observe a radiative relic of the big bang, is discussed. The major questions connected to the big bang theory which may be clarified using the CBE are reviewed. The satellite instruments and experiments are described, including the Differential Microwave Radiometer, which measures the difference between microwave radiation emitted from two points on the sky, the Far-Infrared Absolute Spectrophotometer, which compares the spectrum of radiation from the sky at wavelengths from 100 microns to one cm with that from an internal blackbody, and the Diffuse Infrared Background Experiment, which searches for the radiation from the earliest generation of stars.

  12. Thermal background noise limitations

    Science.gov (United States)

    Gulkis, S.

    1982-01-01

    Modern detection systems are increasingly limited in sensitivity by the background thermal photons which enter the receiving system. Expressions for the fluctuations of detected thermal radiation are derived. Incoherent and heterodyne detection processes are considered. References to the subject of photon detection statistics are given.

  13. Berkeley Low Background Facility

    International Nuclear Information System (INIS)

    Thomas, K. J.; Norman, E. B.; Smith, A. R.; Poon, A. W. P.; Chan, Y. D.; Lesko, K. T.

    2015-01-01

    The Berkeley Low Background Facility (BLBF) at Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California provides low background gamma spectroscopy services to a wide array of experiments and projects. The analysis of samples takes place within two unique facilities: locally, within a carefully-constructed low background laboratory on the surface at LBNL, and at the Sanford Underground Research Facility (SURF) in Lead, SD. These facilities provide a variety of gamma spectroscopy services to low background experiments, primarily in the form of passive material screening for primordial radioisotopes (U, Th, K) or common cosmogenic/anthropogenic products; active screening via neutron activation analysis for U, Th, and K as well as a variety of stable isotopes; and neutron flux/beam characterization measurements through the use of monitors. A general overview of the facilities, services, and sensitivities will be presented. Recent activities and upgrades will also be described, including an overview of the recently installed counting system at SURF (relocated from Oroville, CA in 2014), the installation of a second underground counting station at SURF in 2015, and future plans. The BLBF is open to all users for counting services or collaboration on a wide variety of experiments and projects.

  14. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)
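
    The accept/reject step common to the Monte Carlo algorithms surveyed here can be illustrated on a much simpler system than a lattice gauge theory. The sketch below applies the Metropolis rule to a 1D Ising chain; the model, coupling, and parameters are illustrative only, not from the report.

```python
import math
import random

def metropolis_ising_1d(n=50, beta=0.5, sweeps=200, seed=0):
    """Metropolis accept/reject on a 1D Ising chain with periodic boundaries.
    (Lattice gauge simulations apply the same accept/reject step to link
    variables rather than spins.)"""
    rng = random.Random(seed)
    spins = [1] * n
    for _ in range(sweeps):
        for i in range(n):
            # Energy change of flipping spin i for H = -sum_i s_i s_{i+1}.
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            # Accept if energy drops, else with probability exp(-beta * dE).
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
    return spins

spins = metropolis_ising_1d()
```
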

  15. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  16. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
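
    The kinematic condition described here, choosing steering angles so the vehicle's center of rotation sits at a desired point, can be sketched for a bicycle-style 4WS model. The geometry (each axle's wheel pointing perpendicular to its line to the rotation center) and the axle distances below are the sketch's own assumptions, not the paper's equations of motion.

```python
import math

def steering_angles(center_x, center_y, front_axle=1.2, rear_axle=1.4):
    """Kinematic 4WS (bicycle-model) sketch: pick front and rear steering
    angles so the instantaneous center of rotation lies at (center_x, center_y)
    in the body frame (x forward, y to the left). Each wheel must point
    perpendicular to the line joining it to the rotation center."""
    delta_front = math.atan2(front_axle - center_x, center_y)
    delta_rear = math.atan2(-rear_axle - center_x, center_y)
    return delta_front, delta_rear

# Placing the rotation center on the rear-axle line recovers classic
# front-wheel (Ackermann-style) steering with zero rear steer:
df, dr = steering_angles(center_x=-1.4, center_y=10.0)
```

    A 4WS vehicle can place the center off the rear-axle line (any `center_x`), which is why the abstract notes the method performs best with four steerable wheels.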

  17. The Cosmic Microwave Background

    Directory of Open Access Journals (Sweden)

    Jones Aled

    1998-01-01

    We present a brief review of current theory and observations of the cosmic microwave background (CMB). New predictions for cosmological defect theories and an overview of the inflationary theory are discussed. Recent results from various observations of the anisotropies of the microwave background are described and a summary of the proposed experiments is presented. A new analysis technique based on Bayesian statistics that can be used to reconstruct the underlying sky fluctuations is summarised. Current CMB data is used to set some preliminary constraints on the values of the fundamental cosmological parameters $\Omega$ and $H_0$ using the maximum likelihood technique. In addition, secondary anisotropies due to the Sunyaev-Zel'dovich effect are described.

  18. X-ray diffraction study of chalcopyrite CuFeS2, pentlandite (Fe,Ni)9S8 and Pyrrhotite Fe1-xS obtained from Cu-Ni orebodies

    International Nuclear Information System (INIS)

    Nkoma, J.S.; Ekosse, G.

    1998-05-01

    The X-ray Diffraction (XRD) technique is applied to study five samples of Cu-Ni orebodies, and it is shown that they contain chalcopyrite CuFeS2 as the source of Cu, pentlandite (Fe,Ni)9S8 as the source of Ni, and pyrrhotite Fe1-xS as the dominant compound. There are also other, less dominant compounds such as bunsenite NiO, chalcocite Cu2S, penroseite (Ni,Cu)Se2 and magnetite Fe3O4. Using the obtained XRD data, we obtain the lattice parameters for tetragonal chalcopyrite as a=b=5.3069 Å and c=10.3836 Å, for cubic pentlandite as a=b=c=10.0487 Å, and for hexagonal pyrrhotite as a=b=6.8820 Å and c=22.8037 Å. (author)
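
    The reported lattice parameters determine the interplanar spacings, and hence the XRD peak positions, through the standard formulas for each crystal system. A minimal sketch follows; the Cu K-alpha wavelength is an added assumption (the record does not state the radiation used).

```python
import math

def d_tetragonal(h, k, l, a, c):
    """Interplanar spacing for a tetragonal cell:
    1/d^2 = (h^2 + k^2)/a^2 + l^2/c^2 (a, c and d in angstroms)."""
    return 1.0 / math.sqrt((h * h + k * k) / a ** 2 + l * l / c ** 2)

def d_cubic(h, k, l, a):
    """Interplanar spacing for a cubic cell: 1/d^2 = (h^2 + k^2 + l^2)/a^2."""
    return 1.0 / math.sqrt((h * h + k * k + l * l) / a ** 2)

def two_theta_deg(d, wavelength=1.5406):
    """Diffraction angle 2-theta (degrees) from Bragg's law,
    lambda = 2 d sin(theta); Cu K-alpha (1.5406 A) is assumed."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

# The strong chalcopyrite (112) reflection from the reported lattice parameters:
d112 = d_tetragonal(1, 1, 2, a=5.3069, c=10.3836)
```
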

  19. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  20. Family Background and Entrepreneurship

    DEFF Research Database (Denmark)

    Lindquist, Matthew J.; Sol, Joeri; Van Praag, Mirjam

    Vast amounts of money are currently being spent on policies aimed at promoting entrepreneurship. The success of such policies, however, rests in part on the assumption that individuals are not ‘born entrepreneurs’. In this paper, we assess the importance of family background and neighborhood...... effects as determinants of entrepreneurship. We start by estimating sibling correlations in entrepreneurship. We find that between 20 and 50 percent of the variance in different entrepreneurial outcomes is explained by factors that siblings share. The average is 28 percent. Allowing for differential...... entrepreneurship does play a large role, as do shared genes....

  1. Malaysia; Background Paper

    OpenAIRE

    International Monetary Fund

    1996-01-01

    This Background Paper on Malaysia examines developments and trends in the labor market since the mid-1980s. The paper describes the changes in the employment structure and the labor force. It reviews wages and productivity trends and their effects on unit labor cost. The paper highlights that Malaysia’s rapid growth, sustained since 1987, has had a major impact on the labor market. The paper outlines the major policy measures to address the labor constraints. It also analyzes Malaysia’s recen...

  2. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that processes one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  3. Backgrounded but not peripheral

    DEFF Research Database (Denmark)

    Hovmark, Henrik

    2013-01-01

In this paper I take a closer look at the use of the CENTRE-PERIPHERY schema in context. I address two specific issues: first, I show how the CENTRE-PERIPHERY schema, encoded in the DDAs, enters into discourses that conceptualize and characterize a local community as both CENTRE and PERIPHERY, i.e. the schema enters into apparently contradictory constructions of the informants’ local home-base and, possibly, of their identity (cf. Hovmark, 2010). Second, I discuss the status and role of the specific linguistic category in question, i.e. the directional adverbs. On the one hand we claim that the DDAs...; furthermore, the DDAs are backgrounded in discourse. Is it reasonable to claim, rather boldly, that “the informants express their identity in the use of the directional adverb ud ‘out’ etc.”? In the course of this article, however, I suggest that the DDAs in question do contribute to the socio...

  4. OCRWM Backgrounder, January 1987

    International Nuclear Information System (INIS)

    1987-01-01

    The Nuclear Waste Policy Act of 1982 (NWPA) assigns to the US Department of Energy (DOE) responsibility for developing a system to safely and economically transport spent nuclear fuel and high-level radioactive waste from various storage sites to geologic repositories or other facilities that constitute elements of the waste management program. This transportation system will evolve from technologies and capabilities already developed. Shipments of spent fuel to a monitored retrievable storage (MRS) facility could begin as early as 1996 if Congress authorizes its construction. Shipments of spent fuel to a geologic repository are scheduled to begin in 1998. The backgrounder provides an overview of DOE's cask development program. Transportation casks are a major element in the DOE nuclear waste transportation system because they are the primary protection against any potential radiation exposure to the public and transportation workers in the event an accident occurs

  5. Monitored background radiometer

    International Nuclear Information System (INIS)

    Ruel, C.

    1988-01-01

    A monitored background radiometer is described comprising: a thermally conductive housing; low conductivity support means mounted on the housing; a sensing plate mounted on the low conductivity support means and spaced from the housing so as to be thermally insulated from the housing and having an outwardly facing first surface; the sensing plate being disposed relative to the housing to receive direct electromagnetic radiation from sources exterior to the radiometer upon the first surface only; means for controllably heating the sensing plate; first temperature sensitive means to measure the temperature of the housing; and second temperature sensitive means to measure the temperature of the sensing plate, so that the heat flux at the sensing plate may be determined from the temperatures of the housing and sensing plate after calibration of the radiometer by measuring the temperatures of the housing and sensing plate while controllably heating the sensing plate

  6. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  7. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
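For reference, the conventional lower-triangular packed-by-columns layout that the rearrangement subroutines start from can be indexed as below; this sketches the standard LAPACK 'L'-packed convention, not the authors' block hybrid format itself.

```python
def packed_index(i, j, n):
    """0-based position of element (i, j), with i >= j, in an n-by-n lower
    triangle packed by columns: column j starts after the n, n-1, ...,
    n-j+1 entries of columns 0..j-1."""
    assert 0 <= j <= i < n
    return j * n - j * (j - 1) // 2 + (i - j)

# The n(n + 1)/2 packed slots are each hit exactly once:
n = 4
slots = sorted(packed_index(i, j, n) for j in range(n) for i in range(j, n))
```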

  8. Low background infrared (LBIR) facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Low background infrared (LBIR) facility was originally designed to calibrate user supplied blackbody sources and to characterize low-background IR detectors and...

  9. Hanford Site background: Part 1, Soil background for nonradioactive analytes

    International Nuclear Information System (INIS)

    1993-04-01

    Volume two contains the following appendices: Description of soil sampling sites; sampling narrative; raw data soil background; background data analysis; sitewide background soil sampling plan; and use of soil background data for the detection of contamination at waste management unit on the Hanford Site

  10. Note on bouncing backgrounds

    Science.gov (United States)

    de Haro, Jaume; Pan, Supriya

    2018-05-01

The theory of inflation is one of the fundamental and revolutionary developments of modern cosmology that has been able to explain many issues of the early Universe in the context of the standard cosmological model (SCM). However, the initial singularity of the Universe, where physics is indefinite, is still obscure in the combined SCM + inflation scenario. An alternative to SCM + inflation without the initial singularity is thus always welcome, and bouncing cosmology is an attempt at that. The current work is thus motivated to investigate the bouncing solutions in modified gravity theories when the background universe is described by the spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) geometry. We show that the simplest way to obtain the bouncing cosmologies in such spacetime is to consider some kind of Lagrangian whose gravitational sector depends only on the square of the Hubble parameter of the FLRW universe. For these modified Lagrangians, the corresponding Friedmann equation, a constraint in the dynamics of the Universe, depicts a curve in the phase space (H, ρ), where H is the Hubble parameter and ρ is the energy density of the Universe. As a consequence, a bouncing cosmology is obtained when this curve is closed and crosses the axis H = 0 at least twice, and whose simplest particular example is the ellipse depicting the well-known holonomy corrected Friedmann equation in loop quantum cosmology (LQC). Sometimes, a crucial point in such theories is the appearance of the Ostrogradski instability at the perturbative level; however, fortunately enough, in the present work, as long as the linear level of perturbations is concerned, this instability does not appear, although it may appear at the higher order of perturbations.
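For reference, the holonomy-corrected Friedmann equation of loop quantum cosmology mentioned above can be written as

$$H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right),$$

where $\rho_c$ is the critical density. In the $(H, \rho)$ phase space this traces a closed curve that crosses the axis $H = 0$ at $\rho = 0$ and at $\rho = \rho_c$; the latter crossing is the bounce.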

  11. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

Active well coincidence counter assays have been performed on uranium metal highly enriched in ²³⁵U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the ²³⁵U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the ²³⁵U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility
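A linear relationship between totals-corrected reals and ²³⁵U mass means a simple least-squares calibration can be inverted to assay an unknown item; the sketch below uses synthetic numbers, since the report's actual calibration constants are not given.

```python
def fit_line(masses, rates):
    """Ordinary least-squares slope/intercept for a linear calibration
    rate = slope * mass + intercept."""
    n = len(masses)
    mx = sum(masses) / n
    my = sum(rates) / n
    sxx = sum((x - mx) ** 2 for x in masses)
    sxy = sum((x - mx) * (y - my) for x, y in zip(masses, rates))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic calibration points: 235U mass (g) vs totals-corrected reals rate
masses = [100.0, 250.0, 500.0, 750.0, 1000.0]
rates = [0.042 * m + 1.5 for m in masses]
slope, intercept = fit_line(masses, rates)

def assay_mass(rate):
    """Invert the calibration to estimate the 235U mass of an unknown item."""
    return (rate - intercept) / slope
```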

  12. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
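The core loop, interaction by composition producing a new function inside a fixed-size ensemble, can be sketched as follows. Fontana's model uses lambda-calculus terms; this finite stand-in over Z_97 (a hypothetical simplification) only illustrates that the ensemble stays closed under composition.

```python
import random

random.seed(0)

# Toy "function gas": a function is a sequence of primitive steps over Z_97,
# interaction is composition (sequence concatenation), and each product
# replaces a randomly chosen member, keeping the population size fixed.
P = 97
PRIMS = [lambda x: (x + 1) % P, lambda x: (2 * x) % P, lambda x: (x * x) % P]

def evaluate(program, x):
    """Apply the listed primitive steps left to right."""
    for step in program:
        x = PRIMS[step](x)
    return x

gas = [[0], [1], [2]]                 # start from the three primitives
for _ in range(20):
    f, g = random.choice(gas), random.choice(gas)
    gas[random.randrange(len(gas))] = g + f   # compose: apply g first, then f

results = [evaluate(prog, 1) for prog in gas]
```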

  13. Executive Summary - Historical background

    International Nuclear Information System (INIS)

    2005-01-01

    matter physics experiments at the High Flux Reactor of The Laue Langevin Institute and the ISIS spallation source at Rutherford-Appleton. Recently, we very actively entered the ICARUS neutrino collaboration and were invited to the PIERRE AUGER collaboration which will search for the highest energies in the Universe. Having close ties with CERN we are very actively engaged in CROSS-GRID, a large computer network project. To better understand the historical background of the INP development, it is necessary to add a few comments on financing of science in Poland. During the 70's and the 80's, research was financed through the so-called Central Research Projects for Science and Technical Development. The advantage of this system was that state-allocated research funds were divided only by a few representatives of the scientific community, which allowed realistic allocation of money to a small number of projects. After 1989 we were able to purchase commercially available equipment, which led to the closure of our large and very experienced electronic workshop. We also considerably reduced our well equipped mechanical shop. During the 90's the reduced state financing of science was accompanied by a newly established Committee of Scientific Research which led to the creation of a system of small research projects. This precluded the development of more ambitious research projects and led to the dispersion of equipment among many smaller laboratories and universities. A large research establishment, such as our Institute, could not develop properly under such conditions. In all, between 1989 and 2004 we reduced our personnel from about 800 to 470 and our infrastructure became seriously undercapitalised. However, with energetic search for research funds, from European rather than national research programs, we hope to improve and modernize our laboratories and their infrastructure in the coming years

  14. Algorithmic causets

    International Nuclear Information System (INIS)

    Bolognesi, Tommaso

    2011-01-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  15. Background noise levels in Europe

    OpenAIRE

    Gjestland, Truls

    2008-01-01

This report gives a brief overview of typical background noise levels in Europe, and suggests a procedure for the prediction of background noise levels based on population density. A proposal for the production of background noise maps for Europe is included.

  16. Backgrounder

    International Development Research Centre (IDRC) Digital Library (Canada)

    IDRC CRDI

    Center for Mountain Ecosystem Studies, Kunming Institute of Botany of the Chinese Academy of Sciences, China: $1,526,000 to inform effective water governance in the Asian highlands of China, Nepal, and Pakistan. • Ashoka Trust for Research in Ecology and the Environment (ATREE), India: $1,499,300 for research on ...

  17. BACKGROUNDER

    International Development Research Centre (IDRC) Digital Library (Canada)

    IDRC CRDI

    demographic trends, socio-economic development pathways, and strong ... knowledge and experience, and encourage innovation. ... choices, and will work with stakeholders in government, business, civil society, and regional economic.

  18. Backgrounder

    International Development Research Centre (IDRC) Digital Library (Canada)

    IDRC CRDI

    Safe and Inclusive Cities: ... improving urban environments and public spaces might have on reducing the city's high ... violence against women among urban youth of working class neighbourhoods of Islamabad, Rawalpindi, and Karachi,.

  19. BACKGROUNDER

    International Development Research Centre (IDRC) Digital Library (Canada)

    IDRC CRDI

    CARIAA's research agenda addresses gaps and priorities highlighted in the ... Research focuses on climate risk, institutional and regulatory frameworks, markets, and ... The researchers will identify relevant drivers and trends and use develop ...

  20. BACKGROUNDER

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

achieving long‐term food security in Africa, with a focus on post‐harvest loss, ... nutrition and health, and the socio‐economic factors that affect food supply ... Water use. Agricultural productivity in sub‐Saharan Africa is the lowest in the world.

  1. Background elimination methods for multidimensional coincidence γ-ray spectra

    International Nuclear Information System (INIS)

    Morhac, M.

    1997-01-01

In this paper, new methods are derived to separate useful information from background in one-, two-, three-, and multidimensional spectra (histograms) measured in large multidetector γ-ray arrays. The sensitive nonlinear peak clipping algorithm is the basis of the methods for estimating the background in multidimensional spectra. The derived procedures are simple and therefore have a very low cost in terms of computing time. (orig.)
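A minimal one-dimensional version of such a peak-clipping background estimator can be sketched as below (a simplified SNIP-style pass; the paper's procedure generalizes this kind of clipping to multidimensional coincidence spectra).

```python
def clip_background(spectrum, m):
    """Estimate the background of a 1-D spectrum by nonlinear peak clipping:
    for clipping windows w = 1..m, replace each channel by the smaller of
    itself and the mean of its neighbours at distance w. Peaks are eroded,
    while a smooth background is left (almost) untouched."""
    bg = list(spectrum)
    n = len(bg)
    for w in range(1, m + 1):
        bg = [min(bg[i], (bg[i - w] + bg[i + w]) / 2) if w <= i < n - w else bg[i]
              for i in range(n)]
    return bg

# A flat baseline of 10 counts with one sharp peak on channel 10:
spectrum = [10.0] * 21
spectrum[10] = 100.0
background = clip_background(spectrum, 5)   # peak removed, baseline kept
```

Subtracting the estimate from the raw spectrum then leaves the net peak counts.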

  2. Real-Time Adaptive Foreground/Background Segmentation

    Directory of Open Access Journals (Sweden)

    Sridha Sridharan

    2005-08-01

    Full Text Available The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing 320×240 PAL video at full frame rate using only 35%–40% of a 1.8 GHz Pentium 4 computer.
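The per-pixel clustering idea can be sketched for a single pixel as follows; the cluster count K, matching threshold, and learning rate are illustrative parameters of this sketch, not the paper's values.

```python
class PixelClusters:
    """Simplified per-pixel background model: each pixel keeps up to K
    weighted intensity clusters, sorted by weight; an incoming value is
    classified as background only if it matches the heaviest cluster."""

    def __init__(self, k=3, match_thresh=10.0, alpha=0.05):
        self.k, self.thresh, self.alpha = k, match_thresh, alpha
        self.clusters = []   # list of [centroid, weight], heaviest first

    def update(self, value):
        for c in self.clusters:
            if abs(value - c[0]) < self.thresh:
                c[0] += self.alpha * (value - c[0])   # adapt the centroid
                c[1] += 1.0                            # reinforce the weight
                self.clusters.sort(key=lambda cl: -cl[1])
                return self.clusters.index(c) == 0     # background if heaviest
        self.clusters.append([value, 1.0])             # unmatched: new cluster
        self.clusters.sort(key=lambda cl: -cl[1])
        del self.clusters[self.k:]                     # keep at most k clusters
        return False                                   # novel value: foreground
```

Feeding a stable intensity makes its cluster dominant, so that value becomes background; a sudden new intensity is flagged as foreground until it persists.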

  3. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  4. Optical diffraction tomography in an inhomogeneous background medium

    International Nuclear Information System (INIS)

    Devaney, A; Cheng, J

    2008-01-01

    The filtered back-propagation algorithm (FBP algorithm) is a computationally fast and efficient inversion algorithm for reconstructing the 3D index of refraction distribution of weak scattering samples in free space from scattered field data collected in a set of coherent optical scattering experiments. This algorithm is readily derived using classical Fourier analysis applied to the Born or Rytov weak scattering models appropriate to scatterers embedded in a non-attenuating uniform background. In this paper, the inverse scattering problem for optical diffraction tomography (ODT) is formulated using the so-called distorted wave Born and Rytov approximations and a generalized version of the FBP algorithm is derived that applies to weakly scattering samples that are embedded in realistic, multiple scattering ODT experimental configurations. The new algorithms are based on the generalized linear inverse of the linear transformation relating the scattered field data to the complex index of refraction distribution of the scattering samples and are in the form of a superposition of filtered data, computationally back propagated into the ODT experimental configuration. The paper includes a computer simulation comparing the generalized Born and Rytov based FBP inversion algorithms as well as reconstructions generated using the generalized Born based FBP algorithm of a step index optical fiber from experimental ODT data

  5. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on own previously published results. Methods 2,000 consecutive patients over 50...... years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient...... by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised...

  6. Diffuse Cosmic Infrared Background Radiation

    Science.gov (United States)

    Dwek, Eli

    2002-01-01

    The diffuse cosmic infrared background (CIB) consists of the cumulative radiant energy released in the processes of structure formation that have occurred since the decoupling of matter and radiation following the Big Bang. In this lecture I will review the observational data that provided the first detections and limits on the CIB, and the theoretical studies explaining the origin of this background. Finally, I will also discuss the relevance of this background to the universe as seen in high energy gamma-rays.

  7. Background current of radioisotope manometer

    International Nuclear Information System (INIS)

    Vydrik, A.A.

    1987-01-01

The technique for calculating the main component of the background current of radioisotope manometers, the current arising from direct collisions of ionizing particles with the collector, is described. The reasons for the appearance of the background photoelectron current are clarified. The most effective way of eliminating background current components is to protect the collector from the source with a screen made of a material with a high gamma-quanta absorption coefficient, such as lead

  8. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  9. Hanford Site background: Part 1, Soil background for nonradioactive analytes

    International Nuclear Information System (INIS)

    1993-04-01

The determination of soil background is one of the most important activities supporting environmental restoration and waste management on the Hanford Site. Background compositions serve as the basis for identifying soil contamination, and also as a baseline in risk assessment processes used to determine soil cleanup and treatment levels. These uses of soil background require an understanding of the extent to which analytes of concern occur naturally in the soils. This report documents the results of sampling and analysis activities designed to characterize the composition of soil background at the Hanford Site, and to evaluate the feasibility for use as Sitewide background. The compositions of naturally occurring soils in the vadose zone have been determined for nonradioactive inorganic and organic analytes and related physical properties. These results confirm that a Sitewide approach to the characterization of soil background is technically sound and is a viable alternative to the determination and use of numerous local or area backgrounds that yield inconsistent definitions of contamination. Sitewide soil background consists of several types of data and is appropriate for use in identifying contamination in all soils in the vadose zone on the Hanford Site. The natural concentrations of nearly every inorganic analyte extend to levels that exceed calculated health-based cleanup limits. The levels of most inorganic analytes, however, are well below these health-based limits. The highest measured background concentrations occur in three volumetrically minor soil types, the most important of which are topsoils adjacent to the Columbia River that are rich in organic carbon. No organic analyte levels above detection were found in any of the soil samples

  10. Backgrounds and characteristics of arsonists

    NARCIS (Netherlands)

    Labree, W.; Nijman, H.L.I.; Marle, H.J.C. van; Rassin, E.

    2010-01-01

    The aim of this study was to gain more insight in the backgrounds and characteristics of arsonists. For this, the psychiatric, psychological, personal, and criminal backgrounds of all arsonists (n = 25), sentenced to forced treatment in the maximum security forensic hospital “De Kijvelanden”, were

  11. Measurement of natural background neutron

    CERN Document Server

    Li Jain, Ping; Tang Jin Hua; Tang, E S; Xie Yan Fong

    1982-01-01

A highly sensitive neutron monitor is described. It has an approximate counting rate of 20 cpm for natural background neutrons. The pulse amplitude resolution, sensitivity, and direction dependence of the monitor were determined. This monitor has been used for natural background measurements in the Beijing area. The yearly average dose is given and compared with the results of KEK and CERN.

  12. Aluminum as a source of background in low background experiments

    Energy Technology Data Exchange (ETDEWEB)

    Majorovits, B., E-mail: bela@mppmu.mpg.de [MPI fuer Physik, Foehringer Ring 6, 80805 Munich (Germany); Abt, I. [MPI fuer Physik, Foehringer Ring 6, 80805 Munich (Germany); Laubenstein, M. [Laboratori Nazionali del Gran Sasso, INFN, S.S.17/bis, km 18 plus 910, I-67100 Assergi (Italy); Volynets, O. [MPI fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2011-08-11

Neutrinoless double beta decay would be a key to understanding the nature of neutrino masses. The next generation of High Purity Germanium experiments will have to be operated with a background rate of better than 10⁻⁵ counts/(kg y keV) in the region of interest around the Q-value of the decay. Therefore, so far irrelevant sources of background have to be considered. The metalization of the surface of germanium detectors is in general done with aluminum. The background from the decays of ²²Na, ²⁶Al, ²²⁶Ra and ²²⁸Th introduced by this metalization is discussed. It is shown that only a special selection of aluminum can keep these background contributions acceptable.

  13. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  14. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  15. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  16. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative {sup 18}F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was
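    The constrained-EM idea described above can be illustrated with a much-simplified sketch: a two-class, 1-D Gaussian mixture in which the background class is frozen to parameters estimated from a lesion-free region, so expectation maximization updates only the lesion class and the mixing weight. This is an illustration of the constraint mechanism only, not the paper's eight-class algorithm with contiguity priors; all names and parameter values here are hypothetical.

```python
import numpy as np

def gauss(x, mean, var):
    # Gaussian density, used for both mixture classes.
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def constrained_gmm_em(x, bg_mean, bg_var, n_iter=50):
    # Background class frozen to (bg_mean, bg_var), estimated beforehand
    # from a lesion-free volume; EM updates only the lesion class.
    les_mean, les_var, w_les = x.max(), x.var(), 0.5
    for _ in range(n_iter):
        p_bg = (1 - w_les) * gauss(x, bg_mean, bg_var)
        p_les = w_les * gauss(x, les_mean, les_var)
        r = p_les / (p_bg + p_les)            # lesion responsibilities
        w_les = r.mean()                      # M-step: lesion class only
        les_mean = (r * x).sum() / r.sum()
        les_var = (r * (x - les_mean) ** 2).sum() / r.sum()
    return r > 0.5                            # boolean lesion mask
```

Freezing the background parameters keeps EM from "stealing" background voxels into the lesion classes, which is the robustness mechanism the abstract describes.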

  17. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  18. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  19. JEM-X background models

    DEFF Research Database (Denmark)

    Huovelin, J.; Maisala, S.; Schultz, J.

    2003-01-01

    Background and determination of its components for the JEM-X X-ray telescope on INTEGRAL are discussed. A part of the first background observations by JEM-X are analysed and results are compared to predictions. The observations are based on extensive imaging of background near the Crab Nebula...... on revolution 41 of INTEGRAL. Total observing time used for the analysis was 216 502 s, with the average of 25 cps of background for each of the two JEM-X telescopes. JEM-X1 showed slightly higher average background intensity than JEM-X2. The detectors were stable during the long exposures, and weak orbital...... background was enhanced in the central area of a detector, and it decreased radially towards the edge, with a clear vignetting effect for both JEM-X units. The instrument background was weakest in the central area of a detector and showed a steep increase at the very edges of both JEM-X detectors...

  20. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Adaptive sensor fusion using genetic algorithms

    International Nuclear Information System (INIS)

    Fitzgerald, D.S.; Adams, D.G.

    1994-01-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive ''fuzzy'' sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion

  2. Stochastic backgrounds of gravitational waves

    International Nuclear Information System (INIS)

    Maggiore, M.

    2001-01-01

    We review the motivations for the search for stochastic backgrounds of gravitational waves and we compare the experimental sensitivities that can be reached in the near future with the existing bounds and with the theoretical predictions. (author)

  3. Berkeley Low Background Counting Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Sensitive low background assay detectors and sample analysis are available for non-destructive direct gamma-ray assay of samples. Neutron activation analysis is also...

  4. Spectral characterization of natural backgrounds

    Science.gov (United States)

    Winkelmann, Max

    2017-10-01

    As the distribution and use of hyperspectral sensors is constantly increasing, the exploitation of spectral features is a threat to camouflaged objects. To improve camouflage materials, the spectral behavior of backgrounds must first be known so that the spectral reflectance of camouflage materials can be adjusted and optimized. In an international effort, the NATO CSO working group SCI-295 "Development of Methods for Measurements and Evaluation of Natural Background EO Signatures" is developing a method for how this characterization of backgrounds is to be done. It is obvious that the spectral characterization of a background will be quite an effort. To compare and exchange data internationally, the measurements will have to be done in a similar way. To test and further improve this method, an international field trial has been performed in Storkow, Germany. In the following we present first impressions and lessons learned from this field campaign and describe the data that have been measured.

  5. Cosmic microwave background, where next?

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    Ground-based, balloon-borne and space-based experiments will observe the Cosmic Microwave Background in greater detail to address open questions about the origin and the evolution of the Universe. In particular, detailed observations of the polarization pattern of the Cosmic Microwave Background radiation have the potential to directly probe physics at the GUT scale and illuminate aspects of the physics of the very early Universe.

  6. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...

  7. Looking for Cosmic Neutrino Background

    Directory of Open Access Journals (Sweden)

    Chiaki Yanagisawa

    2014-06-01

    Full Text Available Since the discovery of neutrino oscillation in atmospheric neutrinos by the Super-Kamiokande experiment in 1998, the study of neutrinos has been one of the most exciting fields in high-energy physics. All the mixing angles were measured. Quests for (1) measurements of the remaining parameters, the lightest neutrino mass, the CP violating phase(s), and the sign of the mass splitting between the mass eigenstates m3 and m1, and (2) better measurements to determine whether the mixing angle theta23 is less than pi/4, are in progress in a well-controlled manner. Determining the nature of neutrinos, whether they are Dirac or Majorana particles, is also in progress with continuous improvement. On the other hand, although ideas for detecting the cosmic neutrino background have been discussed since the 1960s, there has not been a serious concerted effort to achieve this goal. One of the reasons is that it is extremely difficult to detect such low energy neutrinos from the Big Bang. While there has been a tremendous accumulation of information on the Cosmic Microwave Background since its discovery in 1965, there is no direct evidence for the Cosmic Neutrino Background. The importance of detecting the Cosmic Neutrino Background is that, although detailed studies of Big Bang Nucleosynthesis and the Cosmic Microwave Background give information on the early Universe at ~a few minutes old and ~300 k years old, respectively, observation of the Cosmic Neutrino Background allows us to study the early Universe at ~1 sec old. This article reviews progress made in the past 50 years on detection methods of the Cosmic Neutrino Background.

  8. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
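    The recursion equations mentioned in the abstract can be written down directly for the uniform initial distribution: with M marked states out of N, one Grover iteration (oracle sign flip followed by inversion about the mean) maps the marked amplitude k and the unmarked amplitude l linearly. The sketch below uses the standard textbook form of that recursion, which may differ in notation from the talk's:

```python
import math

def grover_amplitudes(N, M, steps):
    # Exact recursion for the amplitude k of each marked state and
    # l of each unmarked state, starting from the uniform superposition.
    k = l = 1.0 / math.sqrt(N)
    for _ in range(steps):
        # Oracle (k -> -k) followed by inversion about the mean,
        # written out as a single linear map on (k, l).
        k, l = ((N - 2 * M) / N) * k + (2 * (N - M) / N) * l, \
               ((N - 2 * M) / N) * l - (2 * M / N) * k
    return k, l
```

For N = 4 and M = 1 a single iteration drives the marked amplitude to exactly 1, the well-known case where Grover's algorithm succeeds with certainty after one step.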

  9. Neutron background estimates in GESA

    Directory of Open Access Journals (Sweden)

    Fernandes A.C.

    2014-01-01

    Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10{sup -8} cm{sup -2} s{sup -1}, it itself contributes via radio-impurities. Additional shielding of these is similar, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements which have led to a reduction in the extrinsic neutron background to ∼ 5 × 10{sup -3} evts/kgd. The calculated event rate induced by the neutron background is ∼ 0.3 evts/kgd, with a dominant contribution from the detector container.

  10. LOFT gamma densitometer background fluxes

    International Nuclear Information System (INIS)

    Grimesey, R.A.; McCracken, R.T.

    1978-01-01

    Background gamma-ray fluxes were calculated at the location of the γ densitometers without integral shielding at both the hot-leg and cold-leg primary piping locations. The principal sources for background radiation at the γ densitometers are {sup 16}N activity from the primary piping H{sub 2}O and γ radiation from reactor internal sources. The background radiation was calculated by the point-kernel codes QAD-BSA and QAD-P5A. Reasonable assumptions were required to convert the response functions calculated by point-kernel procedures into the gamma-ray spectrum from reactor internal sources. A brief summary of point-kernel equations and theory is included
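    For each source point, point-kernel codes of the QAD type evaluate the attenuated inverse-square kernel and correct it with a build-up factor; the generic textbook form (not necessarily the exact formulation used in QAD-BSA/QAD-P5A) is

```latex
\phi(r) \;=\; \frac{S \, B(\mu t) \, e^{-\sum_i \mu_i t_i}}{4 \pi r^{2}}
```

where S is the point-source strength, \mu_i t_i the optical thickness of each intervening material layer, B the build-up factor accounting for scattered photons, and r the source-to-detector distance; the detector response is then the sum of this kernel over all source points.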

  11. A definition of background independence

    International Nuclear Information System (INIS)

    Gryb, Sean

    2010-01-01

    We propose a definition for background (in)/dependence in dynamical theories of the evolution of configurations that have a continuous symmetry and test this definition on particle models and on gravity. Our definition draws from Barbour's best matching framework developed for the purpose of implementing spatial and temporal relationalism. Among other interesting theories, general relativity can be derived within this framework in novel ways. We study the detailed canonical structure of a wide range of best matching theories and show that their actions must have a local gauge symmetry. When gauge theory is derived in this way, we obtain at the same time a conceptual framework for distinguishing between background-dependent and -independent theories. Gauge invariant observables satisfying Kuchar's criterion are identified and, in simple cases, explicitly computed. We propose a procedure for inserting a global background time into temporally relational theories. Interestingly, using this procedure in general relativity leads to unimodular gravity.

  12. Background metric in supergravity theories

    International Nuclear Information System (INIS)

    Yoneya, T.

    1978-01-01

    In supergravity theories, we investigate the conformal anomaly of the path-integral determinant and the problem of fermion zero modes in the presence of a nontrivial background metric. Except in SO(3) -invariant supergravity, there are nonvanishing conformal anomalies. As a consequence, amplitudes around the nontrivial background metric contain unpredictable arbitrariness. The fermion zero modes which are explicitly constructed for the Euclidean Schwarzschild metric are interpreted as an indication of the supersymmetric multiplet structure of a black hole. The degree of degeneracy of a black hole is 2/sup 4n/ in SO(n) supergravity

  13. Background music and cognitive performance.

    Science.gov (United States)

    Angel, Leslie A; Polzella, Donald J; Elvers, Greg C

    2010-06-01

    The present experiment employed standardized test batteries to assess the effects of fast-tempo music on cognitive performance among 56 male and female university students. A linguistic processing task and a spatial processing task were selected from the Criterion Task Set developed to assess verbal and nonverbal performance. Ten excerpts from Mozart's music matched for tempo were selected. Background music increased the speed of spatial processing and the accuracy of linguistic processing. The findings suggest that background music can have predictable effects on cognitive performance.

  14. Children of ethnic minority backgrounds

    DEFF Research Database (Denmark)

    Johansen, Stine Liv

    2010-01-01

    media products and toys just as they will have knowledge of different media texts, play genres, rhymes etc. This has consequences for their ability to access social settings, for instance in play. New research in this field will focus on how children themselves make sense of this balancing of cultures......Children of ethnic minority background balance their everyday life between a cultural background rooted in their ethnic origin and a daily life in day care, schools and with peers that is founded in a majority culture. This means, among other things, that they often will have access to different...

  15. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  16. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations.The proposal also included the development of a star tracker breadboard to test the algorithms performances.......Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations.The proposal also included the development of a star tracker breadboard to test the algorithms performances....

  17. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  18. Low Background Micromegas in CAST

    DEFF Research Database (Denmark)

    Garza, J G; Aune, S.; Aznar, F.

    2014-01-01

    Solar axions could be converted into x-rays inside the strong magnetic field of an axion helioscope, triggering the detection of this elusive particle. Low background x-ray detectors are an essential component for the sensitivity of these searches. We report on the latest developments of the Micr...

  19. Teaching about Natural Background Radiation

    Science.gov (United States)

    Al-Azmi, Darwish; Karunakara, N.; Mustapha, Amidu O.

    2013-01-01

    Ambient gamma dose rates in air were measured at different locations (indoors and outdoors) to demonstrate the ubiquitous nature of natural background radiation in the environment and to show that levels vary from one location to another, depending on the underlying geology. The effect of a lead shield on a gamma radiation field was also…

  20. Educational Choice. A Background Paper.

    Science.gov (United States)

    Quality Education for Minorities Network, Washington, DC.

    This paper addresses school choice, one proposal to address parental involvement concerns, focusing on historical background, definitions, rationale for advocating choice, implementation strategies, and implications for minorities and low-income families. In the past, transfer payment programs such as tuition tax credits and vouchers were…

  1. Kerr metric in cosmological background

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, P C [Gujarat Univ., Ahmedabad (India). Dept. of Mathematics

    1977-06-01

    A metric satisfying Einstein's equation is given which in the vicinity of the source reduces to the well-known Kerr metric and which at large distances reduces to the Robertson-Walker metric of a homogeneous cosmological model. The radius of the event horizon of the Kerr black hole in the cosmological background is found.

  2. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
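    For readers unfamiliar with the algorithm being verified, a compact transcription of textbook LLL over exact rationals looks as follows. This is an illustrative (and deliberately unoptimized) sketch, not the formally verified development the record describes:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    # Textbook LLL reduction; basis is a list of integer vectors.
    # Exact Fraction arithmetic avoids floating-point pitfalls.
    b = [[Fraction(v) for v in vec] for vec in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Recomputed from scratch each time: simple, not efficient.
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                w = [wi - mu[i][j] * sj for wi, sj in zip(w, bstar[j])]
            bstar.append(w)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size reduction
            bstar, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovász condition: advance, or swap and backtrack.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return [[int(v) for v in vec] for vec in b]
```

On the lattice spanned by (1, 0) and (4, 1) (which is just Z^2), the reduction recovers the unit vectors; the point of the verified development is precisely to prove that such code meets the LLL output guarantees.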

  3. The Cosmic Infrared Background Experiment

    Science.gov (United States)

    Bock, James; Battle, J.; Cooray, A.; Hristov, V.; Kawada, M.; Keating, B.; Lee, D.; Matsumoto, T.; Matsuura, S.; Nam, U.; Renbarger, T.; Sullivan, I.; Tsumura, K.; Wada, T.; Zemcov, M.

    2009-01-01

    We are developing the Cosmic Infrared Background ExpeRiment (CIBER) to search for signatures of first-light galaxy emission in the extragalactic background. The first generation of stars produce characteristic signatures in the near-infrared extragalactic background, including a redshifted Ly-cutoff feature and a characteristic fluctuation power spectrum, that may be detectable with a specialized instrument. CIBER consists of two wide-field cameras to measure the fluctuation power spectrum, and a low-resolution and a narrow-band spectrometer to measure the absolute background. The cameras will search for fluctuations on angular scales from 7 arcseconds to 2 degrees, where the first-light galaxy spatial power spectrum peaks. The cameras have the necessary combination of sensitivity, wide field of view, spatial resolution, and multiple bands to make a definitive measurement. CIBER will determine if the fluctuations reported by Spitzer arise from first-light galaxies. The cameras observe in a single wide field of view, eliminating systematic errors associated with mosaicing. Two bands are chosen to maximize the first-light signal contrast, at 1.6 um near the expected spectral maximum, and at 1.0 um; the combination is a powerful discriminant against fluctuations arising from local sources. We will observe regions of the sky surveyed by Spitzer and Akari. The low-resolution spectrometer will search for the redshifted Lyman cutoff feature in the 0.7 - 1.8 um spectral region. The narrow-band spectrometer will measure the absolute Zodiacal brightness using the scattered 854.2 nm Ca II Fraunhofer line. The spectrometers will test if reports of a diffuse extragalactic background in the 1 - 2 um band continues into the optical, or is caused by an under estimation of the Zodiacal foreground. We report performance of the assembled and tested instrument as we prepare for a first sounding rocket flight in early 2009. CIBER is funded by the NASA/APRA sub-orbital program.

  4. Optimization of the Regularization in Background and Foreground Modeling

    Directory of Open Access Journals (Sweden)

    Si-Qi Wang

    2014-01-01

    Full Text Available Background and foreground modeling is a typical method in the application of computer vision. The current general “low-rank + sparse” model decomposes the frames from the video sequences into a low-rank background and a sparse foreground. But the sparse assumption in such a model may not conform to reality, and the model cannot directly reflect the correlation between the background and foreground either. Thus, we present a novel model to solve this problem by decomposing the arranged data matrix D into a low-rank background L and a moving foreground M. Here, we only need to give the a priori assumption that the background is low-rank and let the foreground be separated from the background as much as possible. Based on this division, we use a pair of dual norms, the nuclear norm and the spectral norm, to regularize the foreground and background, respectively. Furthermore, we use a reweighted function instead of the normal norm so as to get a better and faster approximation model. A detailed explanation based on linear algebra of our two models will be presented in this paper. From the experimental results, we can see that our model achieves better background modeling, and even simplified versions of our algorithms perform better than the mainstream techniques IALM and GoDec.
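    The low-rank background idea that this line of work builds on can be sketched in its simplest possible form: fit the best rank-r approximation to the matrix of vectorized frames by truncated SVD and treat the residual as foreground. This is only the baseline decomposition, not the paper's regularized, reweighted model:

```python
import numpy as np

def lowrank_background(D, rank=1):
    # D: (n_pixels, n_frames), one vectorized frame per column.
    # Best rank-r fit = quasi-static background; residual = foreground.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, D - L
```

On a synthetic sequence of identical frames with one transient bright pixel, the rank-1 fit absorbs the static background and leaves the transient in the residual, which is the behavior the "low-rank + sparse" family of models formalizes.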

  5. Weighted Low-Rank Approximation of Matrices and Background Modeling

    KAUST Repository

    Dutta, Aritra

    2018-04-15

    We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in the batch mode on the entire data, and the other operates in the batch-incremental mode on the data, naturally captures more background variations, and is computationally more effective. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to outliers similarly to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.

  6. Weighted Low-Rank Approximation of Matrices and Background Modeling

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2018-01-01

    We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in the batch mode on the entire data, and the other operates in the batch-incremental mode on the data, naturally captures more background variations, and is computationally more effective. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to outliers similarly to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.

  7. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
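    The computation the thesis visualizes can be sketched as a plain power iteration over the link structure. This is a minimal version (ignoring dangling-node mass redistribution); the damping factor and page names are illustrative:

```python
def pagerank(links, damping=0.85, tol=1e-10, max_iter=200):
    # links: page -> list of pages it links to (assumes no dangling pages).
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = {p: (1 - damping) / n
                  + damping * sum(rank[q] / len(links[q])
                                  for q in pages if p in links[q])
               for p in pages}
        # Stop once successive iterates differ by less than tol,
        # which is exactly the repeat-until-small-difference loop
        # described in the abstract.
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new
    return rank
```

In the three-page example below, C is linked from both A and B, so it ends up with the highest rank.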

  8. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  9. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
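    The update described above, LMS with the input regressor passed through a three-level quantizer, can be sketched as follows. This is a generic illustration of the clipped update with hypothetical step-size and threshold values, not the paper's exact formulation or analysis:

```python
import numpy as np

def clip3(u, t):
    # Three-level quantizer: maps each sample to +1, 0, or -1.
    return np.where(u > t, 1.0, np.where(u < -t, -1.0, 0.0))

def mclms(x, d, n_taps=4, mu=0.02, threshold=0.1):
    # LMS variant: the weight update uses the clipped input vector,
    # so each tap update costs a sign/zero multiply instead of a full one.
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]    # newest sample first
        e = d[k] - w @ u                     # a-priori output error
        w += mu * e * clip3(u, threshold)    # clipped weight update
    return w
```

Identifying a known 4-tap FIR filter from its noiseless output is a standard sanity check: the clipped update still drives the weights to the true impulse response.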

  10. Background radioactivity in environmental materials

    International Nuclear Information System (INIS)

    Maul, P.R.; O'Hara, J.P.

    1989-01-01

    This paper presents the results of a literature search to identify information on concentrations of 'background' radioactivity in foodstuffs and other commonly available environmental materials. The review has concentrated on naturally occurring radioactivity in foods and on UK data, although results from other countries have also been considered where appropriate. The data are compared with established definitions of a 'radioactive' substance and radionuclides which do not appear to be adequately covered in the literature are noted. (author)

  11. Background paper on aquaculture research

    OpenAIRE

    Wenblad, Axel; Jokumsen, Alfred; Eskelinen, Unto; Torrissen, Ole

    2013-01-01

    The Board of MISTRA established in 2012 a Working Group (WG) on Aquaculture to provide the Board with background information for its upcoming decision on whether the foundation should invest in aquaculture research. The WG included Senior Advisor Axel Wenblad, Sweden (Chairman), Professor Ole Torrissen, Norway, Senior Advisory Scientist Unto Eskelinen, Finland and Senior Advisory Scientist Alfred Jokumsen, Denmark. The WG performed an investigation of the Swedish aquaculture sector including ...

  12. The isotropic radio background revisited

    Energy Technology Data Exchange (ETDEWEB)

    Fornengo, Nicolao; Regis, Marco [Dipartimento di Fisica Teorica, Università di Torino, via P. Giuria 1, I–10125 Torino (Italy); Lineros, Roberto A. [Instituto de Física Corpuscular – CSIC/U. Valencia, Parc Científic, calle Catedrático José Beltrán, 2, E-46980 Paterna (Spain); Taoso, Marco, E-mail: fornengo@to.infn.it, E-mail: rlineros@ific.uv.es, E-mail: regis@to.infn.it, E-mail: taoso@cea.fr [Institut de Physique Théorique, CEA/Saclay, F-91191 Gif-sur-Yvette Cédex (France)

    2014-04-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.
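
    The template-fitting step described above (total emission modeled as a linear combination of Galactic components plus an isotropic term) reduces, in its simplest form, to a linear least-squares fit over unmasked pixels. A hedged sketch with synthetic inputs, not the authors' pipeline:

```python
import numpy as np

def fit_isotropic(brightness, templates):
    """Least-squares decomposition of a (masked) sky map into Galactic
    template amplitudes plus an isotropic constant.
    brightness: (n_pix,) observed values; templates: (n_pix, k) maps,
    e.g. synchrotron and free-free templates."""
    A = np.column_stack([templates, np.ones(len(brightness))])
    coeffs, *_ = np.linalg.lstsq(A, brightness, rcond=None)
    return coeffs[:-1], coeffs[-1]   # (template amplitudes, isotropic level)
```

    In the real analysis, point-like and extended sources would be masked or given templates of their own before the fit, as the abstract describes.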

  13. The isotropic radio background revisited

    International Nuclear Information System (INIS)

    Fornengo, Nicolao; Regis, Marco; Lineros, Roberto A.; Taoso, Marco

    2014-01-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky

  14. Backgrounds and characteristics of arsonists.

    Science.gov (United States)

    Labree, Wim; Nijman, Henk; van Marle, Hjalmar; Rassin, Eric

    2010-01-01

    The aim of this study was to gain more insight into the backgrounds and characteristics of arsonists. For this, the psychiatric, psychological, personal, and criminal backgrounds of all arsonists (n=25), sentenced to forced treatment in the maximum security forensic hospital "De Kijvelanden", were compared to the characteristics of a control group of patients (n=50), incarcerated at the same institution for other severe crimes. Apart from DSM-IV Axis I and Axis II disorders, family backgrounds, level of education, treatment history, intelligence (WAIS scores), and PCL-R scores were included in the comparisons. Furthermore, the apparent motives for the arson offences were explored. It was found that arsonists had more often received psychiatric treatment prior to committing their index offence, and more often had a history of severe alcohol abuse, in comparison to the controls. The arsonists turned out to be less likely to suffer from a major psychotic disorder. The two groups did not differ significantly on the other variables, among which the PCL-R total and factor scores. Exploratory analyses, however, did suggest that arsonists may differ from non-arsonists on three items of the PCL-R, namely impulsivity (higher scores), superficial charm (lower scores), and juvenile delinquency (lower scores). Although the number of arsonists with a major psychotic disorder was relatively low (28%), delusional thinking of some form was judged to play a role in causing arson crimes in about half of the cases (52%).

  15. Weakly supervised semantic segmentation using fore-background priors

    Science.gov (United States)

    Han, Zheng; Xiao, Zhitao; Yu, Mingjun

    2017-07-01

    Weakly-supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set and thereby need to construct a relationship graph over the image labels, resulting in expensive computation. In this study, we tackle this problem from a different perspective. We propose a novel semantic segmentation algorithm based on background priors, which avoids the construction of a huge graph over the whole training dataset. Specifically, a random forest classifier is obtained using weakly supervised training data. Then a semantic texton forest (STF) feature is extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed. The unary potential of the CRF is derived from the output probability of the random forest classifier and from a robust saliency map serving as the background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms some previous influential weakly-supervised segmentation algorithms. Furthermore, the use of an efficient decision forest classifier and parallel computing of the saliency map significantly accelerates the implementation.

  16. Background modelling of diffraction data in the presence of ice rings

    Directory of Open Access Journals (Sweden)

    James M. Parkhurst

    2017-09-01

    Full Text Available An algorithm for modelling the background for each Bragg reflection in a series of X-ray diffraction images containing Debye–Scherrer diffraction from ice in the sample is presented. The method involves the use of a global background model which is generated from the complete X-ray diffraction data set. Fitting of this model to the background pixels is then performed for each reflection independently. The algorithm uses a static background model that does not vary over the course of the scan. The greatest improvement can be expected for data where ice rings are present throughout the data set and the local background shape at the size of a spot on the detector does not exhibit large time-dependent variation. However, the algorithm has been applied to data sets whose background showed large pixel variations (variance/mean > 2) and has been shown to improve the results of processing for these data sets. It is shown that the use of a simple flat-background model as in traditional integration programs causes systematic bias in the background determination at ice-ring resolutions, resulting in an overestimation of reflection intensities at the peaks of the ice rings and an underestimation of reflection intensities on either side of the ice ring. The new global background-model algorithm presented here corrects for this bias, resulting in a noticeable improvement in R factors following refinement.

  17. Using background knowledge for picture organization and retrieval

    Science.gov (United States)

    Quintana, Yuri

    1997-01-01

    A picture knowledge base management system is described that is used to represent, organize and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain further descriptions of pictures from news magazines. These descriptions were used to represent the semantic content of pictures in frame representations. A conceptual clustering algorithm is described which organizes pictures not only on the observable features, but also on implicit properties derived from the frame representations. The algorithm uses inheritance reasoning to take background knowledge into account in the clustering. The algorithm creates clusters of pictures using a group similarity function that is based on the gestalt theory of picture perception. For each cluster created, a frame is generated which describes the semantic content of the pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge. The paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of the queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.

  18. Numerical method for IR background and clutter simulation

    Science.gov (United States)

    Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio

    1997-06-01

    The paper describes a fast and accurate algorithm for IR background noise and clutter generation for application in scene simulations. The process is based on the hypothesis that the background may be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene obey a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and also excellent fidelity to reality, as appears from a comparison with images from IR sensors. The proposed method shows advantages with respect to methods based on the filtering of white noise in the time or frequency domain, as it requires a limited number of computations and, furthermore, is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
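
    The statistical model described (Gaussian amplitudes with an exponential spatial correlation function) can be approximated without the paper's reticule-growing procedure. The sketch below instead filters white noise with a separable first-order autoregressive recursion along each image axis, which yields an approximately exponential autocorrelation; all parameter names and values are illustrative.

```python
import numpy as np

def ar1_filter(v, rho):
    # First-order autoregressive smoothing along a 1-D slice; the
    # sqrt(1 - rho^2) factor keeps the output variance at unity.
    out = np.empty_like(v)
    out[0] = v[0]
    for i in range(1, len(v)):
        out[i] = rho * out[i - 1] + np.sqrt(1.0 - rho ** 2) * v[i]
    return out

def ir_background(rows, cols, corr_length=10.0, sigma=1.0, seed=0):
    """Gaussian background with approximately exponential spatial
    autocorrelation, built by AR(1)-filtering white noise along both axes."""
    rho = np.exp(-1.0 / corr_length)                     # lag-1 correlation
    rng = np.random.default_rng(seed)
    img = rng.standard_normal((rows, cols))
    img = np.apply_along_axis(ar1_filter, 0, img, rho)   # down columns
    img = np.apply_along_axis(ar1_filter, 1, img, rho)   # along rows
    return sigma * img
```

    The lag-1 correlation of neighbouring pixels comes out close to exp(-1/corr_length), matching the exponential-correlation hypothesis of the abstract.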

  19. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  20. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system on electromyography (EMG) signals is a theoretical possibility. Objective...... on the amplitude of the signal. The other algorithm was based on information of the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizuredetection system. However, different patients might require different types of algorithms /approaches....
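
    The amplitude-based approach mentioned above can be illustrated with a minimal sketch (not the thesis's actual detector): flag analysis windows whose RMS amplitude exceeds a multiple of a baseline RMS. The window length, threshold factor, and baseline estimate are all assumptions for illustration.

```python
import numpy as np

def detect_seizure(emg, fs, window_s=1.0, threshold=3.0, baseline_windows=5):
    """Flag windows whose RMS amplitude exceeds `threshold` times a baseline
    RMS taken from the first `baseline_windows` windows (assumed
    seizure-free). Returns the indices of the flagged windows."""
    win = int(window_s * fs)
    n_win = len(emg) // win
    rms = np.array([np.sqrt(np.mean(emg[i * win:(i + 1) * win] ** 2))
                    for i in range(n_win)])
    baseline = np.median(rms[:baseline_windows])
    return np.flatnonzero(rms > threshold * baseline)
```

    A frequency-domain variant, as described for the third patient, would instead look at synchronisation of spectral content within the muscle signal rather than raw amplitude.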

  1. Family Background and Educational Choices

    DEFF Research Database (Denmark)

    McIntosh, James; D. Munk, Martin

    enrollments, especially for females. Not only did the educational opportunities for individuals with disadvantaged backgrounds improve absolutely, but their relative position also improved. A similarly dramatic increase in attendance at university for the period 1985-2005 was found for these cohorts when......We examine the participation in secondary and tertiary education of five cohorts of Danish males and females who were aged twenty starting in 1982 and ending in 2002. We find that the large expansion of secondary education in this period was characterized by a phenomenal increase in gymnasium...

  2. An improved algorithm for calculating cloud radiation

    International Nuclear Information System (INIS)

    Yuan Guibin; Sun Xiaogang; Dai Jingmin

    2005-01-01

    Cloud radiation characteristics are very important in cloud scene simulation, weather forecasting, pattern recognition, and other fields. In order to detect missiles against cloud backgrounds and to enhance the fidelity of simulation, it is critical to understand a cloud's thermal radiation model. Firstly, the definition of cloud layer infrared emittance is given. Secondly, the conditions for judging whether a pixel of a focal plane on a satellite sees daytime or night time are shown and equations are given. Radiance terms such as reflected solar radiance, solar scattering, diffuse solar radiance, solar and thermal sky shine, solar and thermal path radiance, cloud blackbody radiance and background radiance are taken into account. Thirdly, the computing methods of background radiance for daytime and night time are given. Through simulations and comparison, this algorithm is proved to be an effective method for calculating cloud radiation

  3. An algorithm for determination of peak regions and baseline elimination in spectroscopic data

    International Nuclear Information System (INIS)

    Morhac, Miroslav

    2009-01-01

    In the paper we propose a new algorithm for the determination of peak-containing regions and their separation from peak-free regions. Further, based on this algorithm, we propose a new background elimination algorithm which allows a more accurate estimate of the background beneath the peaks than the algorithms known so far. The algorithm is based on a clipping operation with the window adjusted automatically to the widths of the identified peak regions. The illustrative examples presented in the paper argue in favor of the proposed algorithms.
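
    As a rough illustration of the underlying clipping idea, here is a SNIP-style sketch with a fixed window schedule; the paper's contribution is precisely that the window is instead adapted automatically to the widths of the detected peak regions, which this sketch does not do.

```python
import numpy as np

def clipping_background(spectrum, max_window):
    """Iterative clipping background estimate: at each pass every channel
    is replaced by the minimum of itself and the mean of its two
    neighbours `w` channels away, for w = 1..max_window. Peaks narrower
    than the largest window are clipped away, leaving the background."""
    bg = spectrum.astype(float).copy()
    for w in range(1, max_window + 1):
        clipped = bg.copy()
        for i in range(w, len(bg) - w):
            clipped[i] = min(bg[i], 0.5 * (bg[i - w] + bg[i + w]))
        bg = clipped
    return bg
```

    Subtracting the returned estimate from the spectrum leaves the peaks sitting on an approximately flat baseline.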

  4. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  5. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
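
    The uniform-in-time-and-space special case of such forgetting schemes is ordinary recursive least squares with an exponential forgetting factor; a minimal sketch (parameter values illustrative, not from the paper):

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting factor `lam`
    (the uniform special case of a general forgetting scheme).
    phi: (n, d) regressor vectors; y: (n,) observations."""
    d = phi.shape[1]
    theta = np.zeros(d)
    P = delta * np.eye(d)                    # initial 'covariance'
    for t in range(len(y)):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam   # discount old information
    return theta
```

    With lam < 1 the estimator discounts old data geometrically and therefore tracks a parameter that changes mid-record, which is the practical motivation for forgetting.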

  6. Background radiation map of Thailand

    International Nuclear Information System (INIS)

    Angsuwathana, P.; Chotikanatis, P.

    1997-01-01

    The radioelement concentration in the natural environment, as well as the radiation exposure to man in day-to-day life, is now a topic of great interest. Natural radiation is frequently referred to as a standard for comparing additional sources of man-made radiation such as atomic weapon fallout, nuclear power generation, radioactive waste disposal, etc. The Department of Mineral Resources commenced a five-year project of nationwide airborne geophysical survey, awarded to Kenting Earth Sciences International Limited in 1984. The original purpose of the survey was to support mineral exploration and geological mapping. Subsequently, the data have proved to be suitable as a source of natural radiation information. In 1993 the Department of Mineral Resources, with the assistance of the IAEA, published a Background Radiation Map of Thailand at the scale of 1:1,000,000 from the existing airborne radiometric digital data. The production of the Background Radiation Map of Thailand is the result of a data compilation and correction procedure developed over the Canadian Shield. This end product will be used as a base map in environmental applications, not only for Thailand but also for the Southeast Asia region. (author)

  7. [Cosmic Microwave Background (CMB) Anisotropies

    Science.gov (United States)

    Silk, Joseph

    1998-01-01

    One of the main areas of research is the theory of cosmic microwave background (CMB) anisotropies and analysis of CMB data. Using the four-year COBE data we were able to improve existing constraints on global shear and vorticity. We found that, in the flat case (which allows for the greatest anisotropy), (omega/H)_0 < 10^-7, where omega is the vorticity and H is the Hubble constant. This is two orders of magnitude lower than the tightest previous constraint. We have defined a new set of statistics which quantify the amount of non-Gaussianity in small-field cosmic microwave background maps. By looking at the distribution of power around rings in Fourier space, and at the correlations between adjacent rings, one can identify non-Gaussian features which are masked by large-scale Gaussian fluctuations. This may be particularly useful for identifying unresolved localized sources and line-like discontinuities. Levin and collaborators devised a method to determine the global geometry of the universe through observations of patterns in the hot and cold spots of the CMB. We have derived properties of the peaks (maxima) of the CMB anisotropies expected in flat and open CDM models. We present results for angular resolutions ranging from 5 arcmin to 20 arcmin (antenna FWHM), scales that are relevant for the MAP and COBRA/SAMBA space missions and ground-based interferometers. Results related to galaxy formation and evolution are also discussed.

  8. Optical polarization: background and camouflage

    Science.gov (United States)

    Škerlind, Christina; Hallberg, Tomas; Eriksson, Johan; Kariis, Hans; Bergström, David

    2017-10-01

    Polarimetric imaging sensors in the electro-optical region, already available for military and commercial use in both the visual and infrared, show enhanced capabilities for advanced target detection and recognition. The capabilities arise from the ability to discriminate between man-made and natural background surfaces using the polarization information of light. In the development of materials for signature management in the visible and infrared wavelength regions, different criteria need to be met to fulfil the requirements for a good camouflage against modern sensors. In conventional camouflage design, the aim is to design the surface properties of an object to spectrally match or adapt it to a background, thereby minimizing the contrast seen by a specific threat sensor. Examples will be shown from measurements of some relevant materials and how they in different ways affect the polarimetric signature. Design-driving properties of an optical camouflage from a polarimetric perspective, such as the degree of polarization, the viewing or incident angle, and the amount of diffuse reflection, mainly in the infrared region, will be discussed.

  9. The Cosmic Microwave Background Anisotropy

    Science.gov (United States)

    Bennett, C. L.

    1994-12-01

    The properties of the cosmic microwave background radiation provide unique constraints on the history and evolution of the universe. The first detection of anisotropy of the microwave radiation was reported by the COBE Team in 1992, based on the first year of flight data. The latest analyses of the first two years of COBE data are reviewed in this talk, including the amplitude of the microwave anisotropy as a function of angular scale and the statistical nature of the fluctuations. The two-year results are generally consistent with the earlier first year results, but the additional data allow for a better determination of the key cosmological parameters. In this talk the COBE results are compared with other observational anisotropy results and directions for future cosmic microwave anisotropy observations will be discussed. The National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC) is responsible for the design, development, and operation of the Cosmic Background Explorer (COBE). Scientific guidance is provided by the COBE Science Working Group.

  10. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  11. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  12. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  13. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  14. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems, data mining, etc.

  15. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms, analogous to those of finance, can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  16. Background Noise Removal in Ultrasonic B-scan Images Using Iterative Statistical Techniques

    NARCIS (Netherlands)

    Wells, I.; Charlton, P. C.; Mosey, S.; Donne, K. E.

    2008-01-01

    The interpretation of ultrasonic B-scan images can be a time-consuming process and its success depends on operator skills and experience. Removal of the image background will potentially improve its quality and hence improve operator diagnosis. An automatic background noise removal algorithm is

  17. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
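
    The recursive two-way merge idea can be rendered compactly in a modern language; the sketch below is a Python paraphrase of the idea, not a transcription of the ALGOL 60 procedure.

```python
def merge_sort(a):
    """Recursive two-way merge sort: split, sort each half recursively,
    then merge the two sorted runs."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]         # append the leftover run
```

    Using `<=` in the merge keeps equal elements in their original order, so the sort is stable.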

  18. Low background aspects of GERDA

    International Nuclear Information System (INIS)

    Simgen, Hardy

    2011-01-01

    The GERDA experiment operates bare germanium diodes enriched in ⁷⁶Ge in an environment of pure liquid argon to search for neutrinoless double beta decay. A very low radioactive background is essential for the success of the experiment. We present here the research done in order to remove radio-impurities coming from the liquid argon, the stainless steel cryostat and the front-end electronics. We found that liquid argon can be purified efficiently of ²²²Rn. The main source of ²²²Rn in GERDA is the cryostat, which emanates about 55 mBq. A thin copper shroud in the center of the cryostat was implemented to prevent radon from approaching the diodes. Gamma-ray screening of radio-pure components for the front-end electronics resulted in the development of a pre-amplifier with a total ²²⁸Th activity of less than 1 mBq.

  19. The cosmic microwave background radiation

    International Nuclear Information System (INIS)

    Wilson, R.W.

    1980-01-01

    The history is described of the discovery of the microwave radiation of the cosmic background using the 20-foot horn antenna at the Bell Laboratories back in 1965. Travelling-wave ruby masers were used, featuring the lowest noise in the world. The measurement was carried out at 7 cm. In measuring microwave radiation from the regions outside the Milky Way, continuous noise was discovered whose temperature exceeded the calculated contributions of the individual detection system elements by 3 K. A comparison with theory showed that relict radiation from the Big Bang period was the source of the noise. The discovery was verified by measurements at the 20.1 cm wavelength, by other authors' measurements from 0.5 mm to 74 cm, and by optical measurements of the interstellar molecule spectrum. (Ha)

  20. An ultra-low-background detector for axion searches

    Energy Technology Data Exchange (ETDEWEB)

    Aune, S; Ferrer Ribas, E; Giomataris, I; Mols, J P; Papaevangelou, T [IRFU, Centre d' Etudes de Saclay, Gif sur Yvette CEDEX (France); Dafni, T; Lacarra, J Galan; Iguaz, F J; Irastorza, I G; Morales, J; Ruz, J; Tomas, A [Instituto de Fisica Nuclear y Altas EnergIas, Zaragoza (Spain); Fanourakis, G; Geralis, T; Kousouris, K [Institute of Nuclear Physics, NCSR Demokritos, Athens (Greece); Vafeiadis, T, E-mail: Thomas.Papaevangelou@cern.c [Physics Department, Aristotle University, Thessaloniki (Greece)

    2009-07-01

    A low background Micromegas detector has been operating in the CAST experiment at CERN for the search of solar axions since the start of data taking in 2002. The detector, made out of low radioactivity materials, operated efficiently and achieved a very low level of background (5×10⁻⁵ keV⁻¹ cm⁻² s⁻¹) without any shielding. New manufacturing techniques (Bulk/Microbulk) have led to further improvement of the characteristics of the detector such as uniformity, stability and energy resolution. These characteristics, the implementation of passive shielding and the improvement of the analysis algorithms have dramatically reduced the background level (2×10⁻⁷ keV⁻¹ cm⁻² s⁻¹), thus improving the overall sensitivity of the experiment and opening new possibilities for future searches.

  1. An ultra-low-background detector for axion searches

    International Nuclear Information System (INIS)

    Aune, S; Ferrer Ribas, E; Giomataris, I; Mols, J P; Papaevangelou, T; Dafni, T; Lacarra, J Galan; Iguaz, F J; Irastorza, I G; Morales, J; Ruz, J; Tomas, A; Fanourakis, G; Geralis, T; Kousouris, K; Vafeiadis, T

    2009-01-01

    A low-background Micromegas detector has been operating in the CAST experiment at CERN in the search for solar axions since the start of data taking in 2002. The detector, made out of low-radioactivity materials, operated efficiently and achieved a very low background level (5x10^-5 keV^-1 cm^-2 s^-1) without any shielding. New manufacturing techniques (Bulk/Microbulk) have led to further improvement of the characteristics of the detector such as uniformity, stability and energy resolution. These characteristics, the implementation of passive shielding and the improvement of the analysis algorithms have dramatically reduced the background level (2x10^-7 keV^-1 cm^-2 s^-1), thus improving the overall sensitivity of the experiment and opening new possibilities for future searches.

  2. Whitening of Background Brain Activity via Parametric Modeling

    Directory of Open Access Journals (Sweden)

    Nidal Kamel

    2007-01-01

    Full Text Available Several signal subspace techniques have recently been suggested for the extraction of visual evoked potential signals from brain background colored noise. The majority of these techniques assume the background noise is white; for colored noise, it is suggested that the noise first be whitened, without further elaboration on how this might be done. In this paper, we investigate the whitening capabilities of two parametric techniques: a direct one based on the Levinson solution of the Yule-Walker equations, called AR Yule-Walker, and an indirect one based on the least-squares solution of the forward-backward linear prediction (FBLP) equations, called AR-FBLP. The whitening effect of the two algorithms is investigated with real background electroencephalogram (EEG) colored noise and compared in the time and frequency domains.
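The direct AR Yule-Walker route can be sketched in a few lines: estimate the sample autocorrelation, solve the (here order-2) Yule-Walker system for the AR coefficients, then apply the inverse prediction-error filter to whiten. This is a minimal stand-in, not the paper's implementation; the AR order, the synthetic coloured-noise process, and all names are illustrative assumptions.

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation at the given lag (mean-removed, normalised)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i - lag] - m) for i in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def yule_walker_ar2(x):
    """Solve the order-2 Yule-Walker system [[r0, r1], [r1, r0]] a = [r1, r2]."""
    r0, r1, r2 = autocorr(x, 0), autocorr(x, 1), autocorr(x, 2)
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

def whiten(x):
    """Inverse (prediction-error) filter: removes the estimated AR colouring."""
    a1, a2 = yule_walker_ar2(x)
    return [x[n] - a1 * x[n - 1] - a2 * x[n - 2] for n in range(2, len(x))]

random.seed(0)
e = [random.gauss(0, 1) for _ in range(5000)]
x = [0.0, 0.0]
for n in range(2, 5000):
    x.append(0.8 * x[n - 1] - 0.2 * x[n - 2] + e[n])  # synthetic coloured noise
w = whiten(x)
```

For this AR(2) process the theoretical lag-1 autocorrelation of `x` is about 0.67; after whitening it drops to near zero.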

  3. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which an organism uses to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search schemes, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search new regions of the space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithms perform better than, or at least comparably to, the original algorithm in terms of the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
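For intuition, a "DS/rand/1"-style trial resembles the differential-evolution mutation x_r1 + F·(x_r2 − x_r3) followed by greedy selection. The sketch below is an assumed simplification of the paper's schemes; the value of F, the sphere test function, and the greedy selection rule are illustrative choices, not the exact CDS procedure.

```python
import random

def sphere(x):
    """Classic benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def ds_rand_1(pop, f=0.5):
    """One generation of 'DS/rand/1'-style trials: x_r1 + F*(x_r2 - x_r3)
    with three distinct random donors per individual."""
    n = len(pop)
    trials = []
    for i in range(n):
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        trials.append([pop[r1][d] + f * (pop[r2][d] - pop[r3][d])
                       for d in range(len(pop[i]))])
    return trials

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
init_best = min(sphere(p) for p in pop)
for _ in range(300):
    trials = ds_rand_1(pop)
    # Greedy selection: keep the better of parent and trial
    pop = [t if sphere(t) < sphere(p) else p for p, t in zip(pop, trials)]
best = min(sphere(p) for p in pop)
```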

  4. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  5. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to Vanden Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
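The flavour of such sequence extrapolation can be shown with a simplified polynomial (Neville-scheme) stand-in: treat the finite-lattice sequence as a polynomial in 1/n and evaluate it at 1/n = 0. The rational Bulirsch-Stoer recursion differs in detail; this is only an illustrative sketch.

```python
def extrapolate(seq):
    """Neville-style polynomial extrapolation of s_n = L + a/n + b/n^2 + ...
    to n -> infinity: interpolate in x = 1/n and evaluate at x = 0."""
    xs = [1.0 / n for n in range(1, len(seq) + 1)]
    t = list(seq)
    for k in range(1, len(t)):
        for i in range(len(t) - k):
            # Neville recursion, already evaluated at x = 0
            t[i] = ((0 - xs[i + k]) * t[i] - (0 - xs[i]) * t[i + 1]) \
                   / (xs[i] - xs[i + k])
    return t[0]

# Test sequence with known limit L = 1
seq = [1.0 + 1.0 / n + 0.5 / n ** 2 for n in range(1, 9)]
limit = extrapolate(seq)
```

For this sequence, which is exactly polynomial in 1/n, eight terms recover the limit 1 to floating-point precision, while the last raw term is still off by more than 0.1.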

  6. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  7. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  8. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 2, Issue 8. Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article. Author Affiliations: R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  9. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
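The low-rank idea can be illustrated in its simplest form: a static background is approximately the best rank-1 approximation of the matrix whose columns are vectorised frames. The power-iteration sketch below is an assumed toy, not the paper's weighted model, and the tiny "video" is fabricated.

```python
def rank1_background(frames, iters=100):
    """Best rank-1 approximation of the frame matrix via power iteration:
    u converges to the dominant spatial pattern (the static background),
    v to its per-frame coefficients."""
    n_frames, n_pix = len(frames), len(frames[0])
    v = [1.0] * n_frames
    u = [0.0] * n_pix
    for _ in range(iters):
        u = [sum(frames[t][p] * v[t] for t in range(n_frames)) for p in range(n_pix)]
        norm = sum(x * x for x in u) ** 0.5
        u = [x / norm for x in u]
        v = [sum(frames[t][p] * u[p] for p in range(n_pix)) for t in range(n_frames)]
    c = sum(v) / len(v)            # mean temporal coefficient
    return [c * x for x in u]      # static background estimate

true_bg = [float(p % 7) for p in range(16)]   # fabricated static scene
frames = []
for t in range(20):
    f = list(true_bg)
    if t % 5 == 0:                             # sparse moving "foreground"
        f[t % 16] += 5.0
    frames.append(f)
bg = rank1_background(frames)
```

Because the foreground is sparse in time and space, the rank-1 estimate stays close to the true background at every pixel.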

  10. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-01-01

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  11. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  12. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for an evolutionary technique designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and geometric structures of Lennard-Jones clusters, are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over its classical counterpart.

  13. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry
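As a taste of the convex-hull topic, Andrew's monotone-chain algorithm, a standard O(n log n) method (not necessarily one of the thesis's own algorithms), fits in a few lines:

```python
def convex_hull(points):
    """Andrew's monotone chain: sort points, then build lower and upper hulls
    with a cross-product turn test. Returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])  # interior point dropped
```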

  14. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  15. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  16. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
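The flavour of "Culling"-style selective breeding can be sketched on the toy OneMax problem: keep only the fittest fraction each generation and refill the population by mutating survivors. This is an assumed illustration of the cull-and-breed idea only, not the paper's analysis or its Additive Search Problem setting.

```python
import random

def culling_ga(n_bits=40, pop_size=60, keep=0.2, gens=150):
    """Elitist cull-and-mutate loop on OneMax (maximise the number of 1-bits):
    each generation keeps the top `keep` fraction and refills the population
    with single-bit mutants of random survivors."""
    random.seed(3)
    fitness = lambda g: sum(g)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: max(2, int(keep * pop_size))]
        pop = list(survivors)
        while len(pop) < pop_size:
            child = list(random.choice(survivors))
            child[random.randrange(n_bits)] ^= 1  # single-bit mutation
            pop.append(child)
    return max(fitness(g) for g in pop)

best = culling_ga()
```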

  17. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality ... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

  18. NLSE: Parameter-Based Inversion Algorithm

    Science.gov (United States)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
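The Gauss-Newton iteration at the heart of such an algorithm is compact: linearise the residual, solve the normal equations JᵀJ δ = −Jᵀr, and update the parameters. The exponential toy model, data, and starting point below are illustrative assumptions, not NLSE's eddy-current model.

```python
import math

def gauss_newton_exp_fit(xs, ys, a=1.8, b=0.6, iters=30):
    """Gauss-Newton for the toy model y = a*exp(b*x): at each step solve the
    2x2 normal equations J^T J [da, db]^T = -J^T r by Cramer's rule."""
    for _ in range(iters):
        r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]   # residuals
        ja = [math.exp(b * x) for x in xs]                      # dr/da
        jb = [a * x * math.exp(b * x) for x in xs]              # dr/db
        aa = sum(v * v for v in ja)
        ab = sum(u * v for u, v in zip(ja, jb))
        bb = sum(v * v for v in jb)
        ga = -sum(u * v for u, v in zip(ja, r))
        gb = -sum(u * v for u, v in zip(jb, r))
        det = aa * bb - ab * ab
        a += (bb * ga - ab * gb) / det
        b += (aa * gb - ab * ga) / det
    return a, b

xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # noiseless synthetic data
a, b = gauss_newton_exp_fit(xs, ys)
```

Starting near the optimum on noiseless data, the iteration converges quadratically to a = 2, b = 0.7.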

  19. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  20. Study of robot landmark recognition with complex background

    Science.gov (United States)

    Huang, Yuqing; Yang, Jia

    2007-12-01

    Perceiving and recognising environmental characteristics is of great importance in assisting a robot with path planning, position navigation and task execution. To solve the problem of monocular-vision-oriented landmark recognition for a mobile intelligent robot moving against a complex background, a nested region-growing algorithm is proposed that is fused with prior color information and based on the current maximum convergence center, providing invariance to changes in position, scale, rotation, jitter and weather conditions. Firstly, a novel experimental threshold based on the RGB vision model is used for the first image segmentation, which allows some objects and partial scenes with colors similar to the landmarks to be detected along with the landmarks. Secondly, with the current maximum convergence center of the segmented image as the growing seed point, the nested region-growing algorithm establishes several Regions of Interest (ROI) in order. Based on shape characteristics, a quick and effective contour analysis built on primitive elements decides whether the current ROI is reserved or deleted after each region growing; each ROI is thus judged initially and positioned. The position information is then fed back to the gray image, and the whole landmarks are extracted accurately by a second segmentation of the local image restricted to the landmark area. Finally, landmarks are recognised by a Hopfield neural network. Results from experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
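A minimal 4-connected region-growing kernel, without the colour prior or the convergence-centre seeding described above, looks like this; the toy image and tolerance are illustrative assumptions:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a 4-connected region from `seed`, accepting neighbours whose
    value is within `tol` of the seed value. Returns the set of pixels."""
    h, w = len(img), len(img[0])
    sv = img[seed[0]][seed[1]]
    region, q = {seed}, deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(img[nr][nc] - sv) <= tol:
                region.add((nr, nc))
                q.append((nr, nc))
    return region

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0]]
blob = region_grow(img, (1, 1), tol=1)   # grows over the three 9-valued pixels
```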

  1. Plenoptic background oriented schlieren imaging

    International Nuclear Information System (INIS)

    Klemkowsky, Jenna N; Fahringer, Timothy W; Clifford, Christopher J; Thurow, Brian S; Bathel, Brett F

    2017-01-01

    The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields. (paper)

  2. Concerning background from calorimeter ports

    International Nuclear Information System (INIS)

    Digiacomo, N.J.

    1985-01-01

    Any detector system viewing a port or slit in a calorimeter wall will see, in addition to the primary particles of interest, a background of charged and neutral particles and photons generated by scattering from the port walls and by leakage from incompletely contained primary particle showers in the calorimeter near the port. The signal to noise ratio attainable outside the port is a complex function of the primary source spectrum, the calorimeter and port design and, of course, the nature and acceptance of the detector system that views the port. Rather than making general statements about the overall suitability (or lack thereof) of calorimeter ports, we offer here a specific example based on the external spectrometer and slit of the NA34 experiment. This combination of slit and spectrometer is designed for fixed-target work, so that the primary particle momentum spectrum contains higher momentum particles than expected in a heavy ion colliding beam environment. The results are, nevertheless, quite relevant for the collider case

  3. Natural background radiation in Jordan

    International Nuclear Information System (INIS)

    Daoud, M.N.S.

    1997-01-01

    An airborne gamma-ray survey has been carried out for Jordan since 1979. A complete report was submitted to the Natural Resources Authority along with field and processed data, digital and analogue. Natural radioelement concentrations were not provided with that report. From the corrected count-rate data for each natural radioelement, concentrations and exposure rates at ground level were calculated. Contoured maps showing the exposure rates and the dose rates were created. Both maps reflect the surface geology of Jordan, where the phosphate areas are very well delineated by high-level contours. In southeastern Jordan the Ordovician sandstone, which contains a high percentage of Th (around 2000 ppm in some places) and a moderate percentage of U (about 300 ppm), also shows high gamma-radiation exposures compared with the surrounding areas. Comparing the exposure rates given in (μR/h) with those obtained from other countries such as the United States, Canada and Germany, Jordan shows background radiation up to twofold higher, and even more, than in these countries. More detailed studies should be performed in order to evaluate the radiological risk limits for people living in areas of high radiation, such as the phosphatic belt, which covers a vast area of the Jordan high plateau. (author)

  4. Natural background radiation in Jordan

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M N.S. [National Resources Authority, Ministry of Energy and Mineral Resources, Amman (Jordan)

    1997-11-01

    An airborne gamma-ray survey has been carried out for Jordan since 1979. A complete report was submitted to the Natural Resources Authority along with field and processed data, digital and analogue. Natural radioelement concentrations were not provided with that report. From the corrected count-rate data for each natural radioelement, concentrations and exposure rates at ground level were calculated. Contoured maps showing the exposure rates and the dose rates were created. Both maps reflect the surface geology of Jordan, where the phosphate areas are very well delineated by high-level contours. In southeastern Jordan the Ordovician sandstone, which contains a high percentage of Th (around 2000 ppm in some places) and a moderate percentage of U (about 300 ppm), also shows high gamma-radiation exposures compared with the surrounding areas. Comparing the exposure rates given in ({mu}R/h) with those obtained from other countries such as the United States, Canada and Germany, Jordan shows background radiation up to twofold higher, and even more, than in these countries. More detailed studies should be performed in order to evaluate the radiological risk limits for people living in areas of high radiation, such as the phosphatic belt, which covers a vast area of the Jordan high plateau. (author). 8 refs, 10 figs, 7 tabs.

  5. A linear-time algorithm for Euclidean feature transform sets

    NARCIS (Netherlands)

    Hesselink, Wim H.

    2007-01-01

    The Euclidean distance transform of a binary image is the function that assigns to every pixel the Euclidean distance to the background. The Euclidean feature transform is the function that assigns to every pixel the set of background pixels with this distance. We present an algorithm to compute the
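A brute-force O(n²) reference version makes the definition concrete: for each pixel, collect the background pixels at minimum Euclidean distance. (The linear-time algorithm of the paper computes the same sets far more efficiently.)

```python
def feature_transform(img):
    """Brute-force Euclidean feature transform of a binary image:
    maps each pixel to the SET of background (0) pixels at minimum
    Euclidean distance, so ties are kept."""
    h, w = len(img), len(img[0])
    bg = [(r, c) for r in range(h) for c in range(w) if img[r][c] == 0]
    ft = {}
    for r in range(h):
        for c in range(w):
            d2 = {(br, bc): (r - br) ** 2 + (c - bc) ** 2 for br, bc in bg}
            dmin = min(d2.values())
            ft[(r, c)] = {p for p, v in d2.items() if v == dmin}
    return ft

img = [[0, 1, 0],
       [1, 1, 1]]
ft = feature_transform(img)
```

Pixel (0,1) is equidistant from both background pixels, so its feature set contains both of them; pixel (1,0) has a unique nearest background pixel.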

  6. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], and an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  7. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such swarm-based metaheuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
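The standard firefly update that such modifications start from can be sketched as follows: each firefly moves toward every brighter one with attractiveness β₀·exp(−γr²) plus a small random step. The parameter values, the cooling factor, and the minimisation convention are illustrative assumptions, not MoFA itself.

```python
import random, math

def firefly(obj, dim=2, n=15, gens=100, beta0=1.0, gamma=1.0, alpha=0.2):
    """Standard firefly loop (minimisation): lower objective = brighter."""
    random.seed(7)
    X = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    for _ in range(gens):
        bright = [obj(x) for x in X]
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:           # j is brighter than i
                    r2 = sum((X[i][d] - X[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        X[i][d] += beta * (X[j][d] - X[i][d]) \
                                   + alpha * (random.random() - 0.5)
            bright[i] = obj(X[i])
        alpha *= 0.97                                # cool the random step
    return min(obj(x) for x in X)

best = firefly(lambda x: sum(v * v for v in x))     # sphere function
```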

  8. Scientific background of the project

    International Nuclear Information System (INIS)

    Christofidis, I.

    1997-01-01

    The main objective of the proposed project is the development of radioimmunometric assay(s) for the determination of free and total PSA in serum samples from normal and pathological individuals (BPH, PCa). This will be achieved by: A. Selection of appropriate antibody pairs (capture and labelled antibody) for the determination of total PSA (free and complexed) and for the determination of free PSA. From the literature we have already identified some antibody pairs. B. Radiolabelling of antibodies. Several labelling and purification procedures will be followed in order to obtain the required analytical sensitivity and dynamic range of the assays. Special attention will be given to the affinity constant as well as to the stability of the radiolabelled molecules. C. Development of protocols for immobilisation of capture antibodies. We will use several solid support formats (plastic tubes, beads and magnetizable particles). Direct adsorption or covalent binding will be used. Immunoadsorption through an immobilised second antibody will also be tested in order to decrease the preparation cost of the solid-phase reagents. D. Preparation of standards of suitable purity levels. We will test different PSA-free matrices (bovine serum, buffer solutions, etc.) in order to select the most appropriate among them in terms of low background determination and low reagent cost. E. Optimisation of the immunoassay conditions for free PSA and total PSA (e.g. assay buffers, incubation time, temperature, one- or two-step procedure, washings). F. Optimisation and standardisation of assay protocols for kit production. G. Production of kits for distribution to clinical laboratories in Greece for comparison with commercial kits. H. Evaluation of the developed assays in real clinical conditions using well-characterised human serum samples. This will be performed in co-operation with the Hellenic Society for Tumor Markers, and other anticancer institutions and hospital clinicians of long standing relation

  9. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
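The combinatorial random-search idea reduces to: propose random pairwise swaps of the magnet ordering and keep a swap only when it lowers the goal functional. The "smear" below is a fabricated toy penalty (neighbouring errors should cancel), not the phase-space distortion used for the NUCLOTRON.

```python
import random

random.seed(5)
errors = [random.gauss(0, 1) for _ in range(12)]   # fabricated field errors

def goal(seq):
    """Toy 'smear': penalise neighbouring magnet errors that do not cancel."""
    return sum((seq[k] + seq[k + 1]) ** 2 for k in range(len(seq) - 1))

def sort_magnets(errors, goal, iters=3000):
    """Random-search sorting: try random pairwise swaps of the ordering,
    keep a swap only if it lowers the goal functional, else revert."""
    order = list(range(len(errors)))
    best = goal([errors[i] for i in order])
    for _ in range(iters):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        val = goal([errors[k] for k in order])
        if val < best:
            best = val
        else:
            order[i], order[j] = order[j], order[i]   # revert the swap
    return order, best

initial = goal(errors)
order, final = sort_magnets(errors, goal)
```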

  10. Spatio-temporal Background Models for Outdoor Surveillance

    Directory of Open Access Journals (Sweden)

    Pless Robert

    2005-01-01

    Full Text Available Video surveillance in outdoor areas is hampered by consistent background motion which defeats systems that use motion to identify intruders. While algorithms exist for masking out regions with motion, a better approach is to develop a statistical model of the typical dynamic video appearance. This allows the detection of potential intruders even in front of trees and grass waving in the wind, waves across a lake, or cars moving past. In this paper we present a general framework for the identification of anomalies in video, and a comparison of statistical models that characterize the local video dynamics at each pixel neighborhood. A real-time implementation of these algorithms runs on an 800 MHz laptop, and we present qualitative results in many application domains.
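One of the simplest members of this family of per-pixel statistical models is a running Gaussian: an exponentially weighted mean and variance per pixel, with anomalies flagged beyond k standard deviations. The learning rate, threshold, and synthetic pixel stream below are illustrative assumptions.

```python
import random

def update(mu, var, x, rho=0.05):
    """Exponentially weighted running mean and variance for one pixel."""
    mu = (1 - rho) * mu + rho * x
    var = (1 - rho) * var + rho * (x - mu) ** 2
    return mu, var

def is_anomaly(mu, var, x, k=3.0):
    """Flag a pixel value more than k standard deviations from the model."""
    return abs(x - mu) > k * max(var, 1e-6) ** 0.5

random.seed(2)
mu, var = 100.0, 25.0
# Dynamic but typical background: intensity fluctuating around 100
for _ in range(500):
    mu, var = update(mu, var, random.gauss(100, 5))
```

After training, a value of 160 (an "intruder") is flagged, while 102 (ordinary background flicker) is not.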

  11. Cosmic Microwave Background Mapmaking with a Messenger Field

    Science.gov (United States)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ². In the single pencil-beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
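For reference, the baseline the messenger method is compared against, conjugate gradient with a diagonal (Jacobi) preconditioner, looks like this on a generic symmetric positive-definite system. This is a dense toy sketch; real mapmaking applies the pointing and noise operators implicitly rather than as explicit matrices.

```python
def pcg(A, b, precond, iters=50, tol=1e-10):
    """Preconditioned conjugate gradient for A x = b (A SPD).
    `A` is a dense list of rows; `precond` holds the diagonal of M^-1."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)
    z = [precond[i] * r[i] for i in range(n)]
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [precond[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b, precond=[1.0 / A[i][i] for i in range(3)])
```

On an n-dimensional SPD system, exact-arithmetic CG terminates in at most n iterations, so this 3x3 example converges almost immediately.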

  12. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  13. Multicore and GPU algorithms for Nussinov RNA folding

    Science.gov (United States)

    2014-01-01

    Background One segment of an RNA sequence might be paired with another segment of the same sequence by hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms, referred to as RNA folding algorithms, have been proposed to predict an RNA sequence's secondary structure. Results We develop cache efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive single core code. The multicore version of the cache efficient single core algorithm provides a speedup, relative to the naive single core algorithm, between 7.5 and 14.0 on a 6 core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
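Nussinov's recurrence itself is a short interval dynamic program; a naive single-core version of the kind the paper uses as its baseline might look like this (the pairing rules below include the standard Watson-Crick and wobble pairs, with no minimum-loop constraint):

```python
def nussinov_max_pairs(seq):
    """Maximum number of non-crossing base pairs (Nussinov DP): O(n^3) time, O(n^2) space."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):                            # interval length minus one
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])        # leave i or j unpaired
            if (seq[i], seq[j]) in pairs:               # pair i with j
                best = max(best, (N[i + 1][j - 1] if i + 1 <= j - 1 else 0) + 1)
            for k in range(i + 1, j):                   # bifurcation into two intervals
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1] if n else 0
```

The cache-efficient, multicore, and GPU variants reported above reorganize exactly this triple loop for locality and parallelism.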

  14. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions are illustrated by examples and considered in these lectures. (orig.)

  15. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  16. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  17. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  18. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  19. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state of the art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  20. A method of background noise cancellation for SQUID applications

    International Nuclear Information System (INIS)

    He, D F; Yoshizawa, M

    2003-01-01

    When superconducting quantum interference devices (SQUIDs) operate in low-cost shielding or unshielded environments, the environmental background noise should be reduced to increase the signal-to-noise ratio. In this paper we present a background noise cancellation method based on a spectral subtraction algorithm. We first measure the background noise and estimate its spectrum using the fast Fourier transform (FFT); we then subtract the spectrum of the background noise from that of the observed noisy signal, and the signal is reconstructed by inverse FFT of the subtracted spectrum. With this method the background noise, especially stationary interference, can be suppressed well and the signal-to-noise ratio increased. Using a high-Tc radio-frequency SQUID gradiometer and magnetometer, we measured the magnetic field produced by a watch placed 35 cm under the SQUID. After noise cancellation, the signal-to-noise ratio could be greatly increased. We also used this method to eliminate the vibration noise of a cryocooler SQUID
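The spectral-subtraction step can be sketched in a few lines of numpy. The 17 Hz "signal" and 50 Hz interference below are invented stand-ins for the SQUID signal and the separately measured stationary background:

```python
import numpy as np

def spectral_subtract(noisy, noise_ref):
    """Subtract the background-noise magnitude spectrum, keeping the noisy phase."""
    spec = np.fft.rfft(noisy)
    mag = np.abs(spec) - np.abs(np.fft.rfft(noise_ref))
    mag = np.clip(mag, 0.0, None)                      # rectify negative magnitudes
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(noisy))

# Synthetic example: signal plus stationary interference, with the interference
# also available as a separate background measurement.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 17 * t)
hum = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)
cleaned = spectral_subtract(signal + hum, hum)
```

Because the interference is stationary and spectrally separated from the signal, subtracting its magnitude spectrum removes it almost exactly while the signal bins pass through untouched.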

  1. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a role similar to that of coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  2. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  3. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.

  4. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  5. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in everyday life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction pursued by various research institutions and scholars. RFID positioning offers system stability, small error and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
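The LANDMARC scheme mentioned above locates a target tag by comparing its received-signal-strength (RSSI) vector against reference tags at known positions, then averaging the k nearest reference positions with inverse-square weights. A toy sketch follows; the reader layout and the path-loss model (RSSI as negative reader-tag distance) are hypothetical simplifications:

```python
import numpy as np

def landmarc_locate(target_rss, ref_rss, ref_pos, k=4):
    """Weighted average of the k reference tags nearest in RSSI space."""
    e = np.linalg.norm(ref_rss - target_rss, axis=1)   # distance in signal space
    nearest = np.argsort(e)[:k]
    w = 1.0 / (e[nearest] ** 2 + 1e-12)                # closer in RSSI -> larger weight
    return (w[:, None] * ref_pos[nearest]).sum(axis=0) / w.sum()

# Hypothetical setup: 4 readers at the corners of a 10 m x 10 m area and a
# grid of reference tags with 2 m spacing.
readers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def rss(p):
    return -np.linalg.norm(readers - p, axis=1)

ref_pos = np.array([[x, y] for x in range(0, 11, 2) for y in range(0, 11, 2)], float)
ref_rss = np.array([rss(p) for p in ref_pos])
estimate = landmarc_locate(rss(np.array([5.0, 5.0])), ref_rss, ref_pos)
```

Because the comparison happens in signal space rather than physical space, the method partially cancels environmental effects that distort RSSI for target and reference tags alike.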

  6. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies: randomly generated solutions are treated as fireflies, and brightness is assigned according to their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases; if no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
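For reference, the attraction rule that the paper modifies can be sketched as follows. This is one common textbook variant (parameter values are illustrative; here the brightest firefly simply stays put, which is the behavior the paper's modification replaces with a directed search):

```python
import numpy as np

def firefly_minimize(f, lo, hi, n=20, iters=100, beta0=1.0, gamma=0.01, alpha=0.2, seed=1):
    """A standard firefly algorithm variant for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n, dim))
    light = np.array([f(p) for p in x])            # lower objective = brighter firefly
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:            # i is attracted to any brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                   + alpha * (rng.random(dim) - 0.5), lo, hi)
                    light[i] = f(x[i])
        alpha *= 0.97                              # damp the random walk over time
    best = int(np.argmin(light))
    return x[best], float(light[best])

sphere = lambda p: float(np.sum(p ** 2))
best_x, best_f = firefly_minimize(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

On a simple 2D sphere function the swarm collapses onto the incumbent best solution within a few dozen iterations.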

  7. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  8. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making

  9. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described; basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included

  10. Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations

    International Nuclear Information System (INIS)

    Liu, Qian; Wang, Yang; He, Jianguo; Ji, Fang

    2015-01-01

    Fluctuations of background and contrast cause measurement errors in the phase-shifting technique. To extract the phase shifts from interferograms with background and contrast fluctuations, an iterative algorithm is presented. The phase shifts and wavefront phase are calculated in two individual steps with the least-squares method. The fluctuation factors are determined when the phase shifts are calculated, and the fluctuations are compensated when the wavefront phase is calculated. The advantage of the algorithm lies in its ability to extract phase shifts from such interferograms while converging stably and rapidly. Simulations and experiments verify the effectiveness and reliability of the proposed algorithm: the simulation results demonstrate the convergence accuracy and speed, and the experimental results show its ability to suppress phase retrieval errors. (paper)
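The two-step least-squares alternation can be sketched as below, in the spirit of the well-known advanced iterative algorithm for random phase shifts. Note that this toy assumes constant background and contrast, so it omits the fluctuation factors that are the paper's actual contribution:

```python
import numpy as np

def aia(frames, delta0, iters=30):
    """Alternating least squares: wavefront given shifts, then shifts given wavefront."""
    K, P = frames.shape
    delta = delta0.astype(float).copy()
    for _ in range(iters):
        # Step 1: with shifts fixed, fit a, b*cos(phi), b*sin(phi) at every pixel.
        M = np.column_stack([np.ones(K), np.cos(delta), -np.sin(delta)])
        a, c, s = np.linalg.lstsq(M, frames, rcond=None)[0]
        phi = np.arctan2(s, c)
        # Step 2: with the wavefront fixed, fit each frame's shift over all pixels.
        G = np.column_stack([np.ones(P), np.cos(phi), -np.sin(phi)])
        u, v, w = np.linalg.lstsq(G, frames.T, rcond=None)[0]
        delta = np.arctan2(w, v)
    return phi, delta

# Synthetic fringes with constant background/contrast and uneven, unknown shifts.
x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
phi_true = (2 * np.pi * (x ** 2 + y ** 2)).ravel()
delta_true = np.array([0.0, 1.4, 3.1, 4.9])
frames = 1.0 + 0.8 * np.cos(phi_true[None, :] + delta_true[:, None])
phi_est, delta_est = aia(frames, delta0=np.arange(4) * np.pi / 2)
```

The recovered shifts agree with the true ones up to a global piston term, so only shift differences are meaningful.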

  11. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    International Nuclear Information System (INIS)

    Chen, Xudong

    2010-01-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging

  12. Algorithms for optimizing drug therapy

    Directory of Open Access Journals (Sweden)

    Martin Lene

    2004-07-01

    Full Text Available Abstract Background Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet, drug use is reported to be suboptimal in several aspects, such as dosage, patient adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. Methods One hundred and ten officially endorsed text documents, published between 1996 and 2004, containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and relationships between these factors. This information was used to construct algorithms for identifying optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that could still accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). Results Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment. This is adequate in situations where only one treatment exists. The second type, a more elaborate model, is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics. An easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present. In these cases the implementation of an established decision model based on fuzzy sets was required. Computer programs
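The first two algorithm types described above, a contraindication list and an ordered set of if-then statements selecting a drug class, can be sketched together as a rule table. All factor names and drug classes below are placeholders, not clinical guidance:

```python
# Ordered rule table: first matching predicate wins. The first rule plays the
# role of the "type 1" list (exclude from treatment); the rest select a class.
RULES = [
    (lambda p: bool(p.get("contraindication")), None),       # do not treat
    (lambda p: bool(p.get("renal_impairment")), "class A"),  # hypothetical factor
    (lambda p: p.get("age", 0) >= 65, "class B"),            # hypothetical threshold
]

def select_drug_class(patient, default="class C"):
    """Walk the if-then rules in order; fall back to the default class."""
    for predicate, drug_class in RULES:
        if predicate(patient):
            return drug_class
    return default
```

The third type (dose selection driven by graded factors) would replace the boolean predicates with fuzzy membership functions, which this sketch does not attempt.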

  13. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  15. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided. An observation is attempted of the defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  16. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model an infrared image background, its Gaussian kernel lacks directional filtering ability: edge information is poorly preserved after filtering, leaving many edge singular points in the difference image and increasing the difficulty of target detection. To solve these problems, this paper introduces anisotropy, replacing the Gaussian kernel in the SUSAN filter operator with an anisotropic Gaussian. First, an anisotropic gradient operator computes each point's horizontal and vertical gradients, which determine the direction of the filter's long axis. Second, the smoothness of the point's local neighborhood determines the variances along the long and short axes. Then the first-order norm of the difference between the local gray levels and their mean determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The background modeling effect on infrared images is evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM) and local Signal-to-Noise Ratio Gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves better background modeling: it effectively preserves edge information in the image, the dim small target is effectively enhanced in the difference image, and the false alarm rate is greatly reduced.

  17. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
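The truncated-Taylor idea is easiest to see for a linear system dx/dt = Ax, where the order-N update is just the truncated matrix exponential. This is a deliberate simplification: the algebraic dynamics algorithm in the abstract is built on exact analytical solutions of general ODEs, not only linear ones:

```python
import math
import numpy as np

def taylor_step(A, x, h, order=4):
    """Order-N Taylor update for dx/dt = A x:
    x <- sum_{k=0..N} (h A)^k / k! x, i.e. the truncated matrix exponential."""
    term, out = x.copy(), x.copy()
    for k in range(1, order + 1):
        term = (h / k) * (A @ term)     # builds (hA)^k x / k! incrementally
        out = out + term
    return out

# Scalar test problem dx/dt = -x with exact solution exp(-t).
A = np.array([[-1.0]])
x = np.array([1.0])
for _ in range(10):                     # integrate to t = 1 with step h = 0.1
    x = taylor_step(A, x, 0.1, order=4)
```

With order 4 and h = 0.1 the local truncation error is O(h^5), so ten steps stay within about 1e-6 of exp(-1).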

  19. Holistic approach for automated background EEG assessment in asphyxiated full-term infants

    Science.gov (United States)

    Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten

    2014-12-01

    Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature, and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested, and further processing uses a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Classification is then achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved, resulting in high classification accuracy (89%) in grading background EEG abnormalities. Significance. For the first time, an algorithm for background EEG assessment has been validated on an extensive dataset containing major artifacts and epileptic seizures. The demonstrated robustness on real-case EEGs suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.

  20. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
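The exact definitions of the two ratios are specific to the paper. As a purely illustrative stand-in, a toy quote-volatility measure could count how often the direction of the best ask reverses between consecutive updates, capturing the rapid oscillation pattern the abstract describes:

```python
import numpy as np

def quote_volatility_ratio(best_ask):
    """Fraction of consecutive best-ask moves that reverse direction (toy proxy,
    not the paper's definition)."""
    moves = np.sign(np.diff(np.asarray(best_ask, dtype=float)))
    moves = moves[moves != 0]                     # ignore unchanged quotes
    if len(moves) < 2:
        return 0.0
    flips = int(np.sum(moves[1:] != moves[:-1]))
    return flips / (len(moves) - 1)
```

A ratio near 1 indicates flickering quotes (every move immediately reversed), while a ratio near 0 indicates a steadily trending quote.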

  1. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  2. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  3. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks in HEP, and it is becoming increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving the long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of how the brain processes visual images in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  4. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In the tasks of processing text in natural language, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to solve this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm based on both graph and machine learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to solve the problem of identifying an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible; therefore, a modification is proposed. Based on machine learning algorithms alone, an independent solution cannot be built due to the small volumes of training datasets relevant to the NEL task. However, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of a model using the proposed algorithm was compared with the open source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm was slow compared to DBpedia Spotlight, but the fact that it showed higher accuracy makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy of the system and its productivity.

  5. The Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael; Rots, Arnold; Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Danny G. Gibbs, II; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory are used to generate one of the most extensive X-ray source catalogs produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  6. Chandra Source Catalog: Background Determination and Source Detection

    Science.gov (United States)

    McCollough, Michael L.; Rots, A. H.; Primini, F. A.; Evans, I. N.; Glotfelty, K. J.; Hain, R.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory will be used to generate the most extensive X-ray source catalog produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  7. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  8. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The algorithm basically provides an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
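
The expansion and inflation steps at the heart of the MCL process can be sketched in a few lines. The graph below (two triangles joined by a single edge), the inflation value of 2 and the support threshold are illustrative choices for this sketch, not parameters taken from the thesis:

```python
import numpy as np

def mcl_clusters(adjacency, expansion=2, inflation=2.0, iterations=100, tol=1e-9):
    """Toy Markov Cluster iteration; returns, per node, the set of
    'attractor' rows that retain mass in that node's column."""
    M = adjacency.astype(float) + np.eye(len(adjacency))  # add self-loops
    M = M / M.sum(axis=0)                                 # column-stochastic
    for _ in range(iterations):
        prev = M.copy()
        M = np.linalg.matrix_power(M, expansion)          # expansion: flow spreads
        M = M ** inflation                                # inflation: favor strong flows
        M = M / M.sum(axis=0)                             # renormalize columns
        if np.abs(M - prev).max() < tol:
            break
    return [frozenset(np.flatnonzero(M[:, j] > 1e-6)) for j in range(len(M))]

# two triangles joined by a single edge: nodes 0-2 and 3-5 form two clusters
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(mcl_clusters(A))
```

Expansion spreads flow along longer paths; inflation then sharpens it, so flow stays trapped inside dense regions and columns belonging to one cluster end up with the same attractor support.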

  9. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  10. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm by just reading its pseudo code and doing written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...

  11. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.

  12. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
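
As an example of one such pattern, an inclusive prefix scan can be computed in ceil(log2(n)) concurrent steps. The NumPy sketch below serializes each step but keeps the data-parallel structure visible; the Hillis-Steele scheme is chosen here purely for illustration, as the presentation itself does not specify one:

```python
import numpy as np

def inclusive_scan(a):
    """Hillis-Steele inclusive prefix scan: ceil(log2(n)) steps, each one
    a single vectorized add that a parallel machine runs concurrently."""
    out = np.asarray(a, dtype=np.int64).copy()
    shift = 1
    while shift < len(out):
        # every position i >= shift adds the value `shift` places to its left;
        # the right-hand side is evaluated entirely from the previous step
        out[shift:] = out[shift:] + out[:-shift]
        shift *= 2
    return out

print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
```

Each step doubles the span already summed into every position, which is why only logarithmically many steps are needed.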

  13. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  14. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this study is to compare radioxenon beta–gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  15. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this paper is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used. (author)

  16. An efficient background modeling approach based on vehicle detection

    Science.gov (United States)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The existing Gaussian Mixture Model (GMM), which is widely used in vehicle detection, is inefficient at detecting the foreground during the modeling phase, because it needs quite a long time to blend the shadows into the background. In order to overcome this problem, an improved method is proposed in this paper. First of all, each frame is divided into several areas (A, B, C and D), where the areas are determined by the frequency and the scale of vehicle access. For each area, a different new learning rate for the weight, mean and variance is applied to accelerate the elimination of shadows. At the same time, the Gaussian distributions are changed adaptively to decrease the total number of distributions and save memory space effectively. With this method, a different threshold value and a different number of Gaussian distributions are adopted for each area. The results show that the learning speed and the accuracy of the model using our proposed algorithm surpass the traditional GMM: by about the 50th frame, interference from vehicle shadows has largely been eliminated, the number of model distributions is only 35% to 43% of the standard, and the per-frame processing speed is approximately 20% higher than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection; it can promote the development of intelligent transportation and is also meaningful to other background modeling methods.
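
The paper's full per-area mixture model is not reproduced here, but the core idea, a per-pixel background statistic updated with a region-dependent learning rate, can be sketched with a single running Gaussian per pixel. The area layout, learning rates and the threshold factor k below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def update_background(frame, mean, var, alpha, k=2.5):
    """One running-Gaussian update per pixel; alpha may vary by region."""
    foreground = np.abs(frame - mean) > k * np.sqrt(var)   # classify first
    mean = (1 - alpha) * mean + alpha * frame              # then adapt the model
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return mean, np.maximum(var, 1e-6), foreground

rng = np.random.default_rng(0)
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)
# region-dependent learning rates: rows 0-1 (a busy "area A") adapt 5x faster
alpha = np.where(np.arange(4)[:, None] < 2, 0.05, 0.01)
for _ in range(50):                     # static background around 100, plus noise
    frame = 100.0 + rng.normal(0, 3, (4, 4))
    mean, var, fg = update_background(frame, mean, var, alpha)
frame = np.full((4, 4), 200.0)          # a bright vehicle fills the view
mean, var, fg = update_background(frame, mean, var, alpha)
print(fg.all())
```

A higher alpha absorbs shadows into the background faster, at the cost of also absorbing slow-moving foreground, which is why the paper varies it by how frequently vehicles pass through each area.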

  17. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to the infinite tree data structure, and a higher efficiency can be expected. The paper focuses on the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
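
The OLU algorithm itself is not given in the abstract; for context, a minimal Robinson-style syntactic unification (the baseline that OLU-type algorithms improve on) can be sketched as follows. The term encoding (uppercase strings as variables, tuples as compound terms) is an assumption of this sketch:

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to the current representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t)

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None          # no finite unifier exists
    s = dict(subst)
    s[v] = t
    return s

def unify(x, y, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return bind(x, y, subst)
    if is_var(y):
        return bind(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None              # constant or functor clash

# unify f(X, g(a)) with f(b, g(Y))
print(unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y"))))
```

The OLU paper's point is precisely that, with a suitable ordering and directed cyclic graphs, the binding-order and occurs checks above can be avoided, allowing infinite (rational) trees.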

  18. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  19. Gamma-Ray Background Variability in Mobile Detectors

    Science.gov (United States)

    Aucott, Timothy John

    This is accomplished by making many hours of background measurements with a truck-mounted system, which utilizes high-purity germanium detectors for spectroscopy and sodium iodide detectors for coded aperture imaging. This system also utilizes various peripheral sensors, such as panoramic cameras, laser ranging systems, global positioning systems, and a weather station, to provide context for the gamma-ray data. About three hundred hours of data were taken in the San Francisco Bay Area, covering a wide variety of environments that might be encountered in operational scenarios. These measurements were used in a source injection study to evaluate the sensitivity of different algorithms (imaging and spectroscopy) and hardware (sodium iodide and high-purity germanium detectors). These measurements confirm that background distributions in large, mobile detector systems are dominated by systematic, not statistical, variations, and both spectroscopy and imaging were found to substantially reduce this variability. Spectroscopy performed better than the coded aperture for the given scintillator array (one square meter of sodium iodide) for a variety of sources and geometries. By modeling the statistical and systematic uncertainties of the background, the data can be sampled to simulate the performance of a detector array of arbitrary size and resolution. With a larger array or lower resolution detectors, however, imaging was better able to compensate for background variability.

  20. A propositional CONEstrip algorithm

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)

    2014-01-01

    We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations

  1. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  2. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Shortest path problems: given a road network on cities, we want to navigate between cities. ... The rest of the talk: computing connectivities between all pairs of vertices, with an algorithm that is good with respect to both space and time for computing the exact solution. ...

  3. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  4. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
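
One de Casteljau step replaces n+1 control points by the n pairwise linear interpolations at parameter t; repeating until a single point remains evaluates the Bezier curve. A minimal sketch of the classical algorithm:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at t by repeated linear interpolation
    between consecutive control points (one step = one 'de Casteljau operator')."""
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 1:
        # replace each consecutive pair (p, q) by (1 - t) * p + t * q
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# quadratic Bezier with control points (0,0), (1,2), (2,0)
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))
```

The paper's observation is that viewing the single interpolation step as an operator already yields the basic Bezier properties (endpoint interpolation, convex hull, subdivision) by purely geometric arguments.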

  5. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  6. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively: use the Tightness Property to remove points of P1,..,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the Random Sampling Procedure to approximate ci+1 using the ...

  7. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  8. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
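
Of the three methods, the replica-exchange idea is the simplest to sketch: run independent Metropolis chains at several temperatures and periodically attempt to swap neighbouring replicas, accepting with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The double-well potential, temperature ladder and sweep count below are illustrative assumptions, not a biomolecular system:

```python
import math
import random

def metropolis_step(x, beta, U, rng, step=0.5):
    """One Metropolis move at inverse temperature beta."""
    y = x + rng.uniform(-step, step)
    dU = U(y) - U(x)
    if dU <= 0 or rng.random() < math.exp(-beta * dU):
        return y
    return x

def replica_exchange(n_sweeps=2000, betas=(0.2, 1.0, 5.0), seed=1):
    """Replica-exchange MC on the double well U(x) = (x^2 - 1)^2.
    Returns True if the coldest replica sampled both wells."""
    rng = random.Random(seed)
    U = lambda z: (z * z - 1.0) ** 2
    xs = [1.0] * len(betas)                 # all replicas start in the x > 0 well
    saw_neg = saw_pos = False
    for _ in range(n_sweeps):
        xs = [metropolis_step(x, b, U, rng) for x, b in zip(xs, betas)]
        for i in range(len(betas) - 1):
            # swap acceptance: min(1, exp[(beta_i - beta_j)(U_i - U_j)])
            delta = (betas[i] - betas[i + 1]) * (U(xs[i]) - U(xs[i + 1]))
            if delta >= 0 or rng.random() < math.exp(delta):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        saw_neg = saw_neg or xs[-1] < -0.5
        saw_pos = saw_pos or xs[-1] > 0.5
    return saw_neg and saw_pos

print(replica_exchange())
```

A single low-temperature chain would stay trapped in one energy minimum; the hot replica crosses the barrier freely, and the swaps carry those crossings down the temperature ladder, which is exactly the trapping problem the abstract describes.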

  9. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  10. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  11. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  12. Comparison of spatial models for foreground-background segmentation in underwater videos

    OpenAIRE

    Radolko, Martin

    2015-01-01

    The low-level task of foreground-background segregation is an important foundation for many high-level computer vision tasks and has been intensively researched in the past. Nonetheless, unregulated environments usually impose challenging problems, especially the difficult and often neglected underwater environment. There, among others, the edges are blurred, the contrast is impaired and the colors attenuated. Our approach to this problem uses an efficient Background Subtraction algorithm and...

  13. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  14. Test of TEDA, Tsunami Early Detection Algorithm

    Science.gov (United States)

    Bressan, Lidia; Tinti, Stefano

    2010-05-01

    Tsunami detection in real-time, both offshore and at the coastline, plays a key role in Tsunami Warning Systems since it provides so far the only reliable and timely proof of tsunami generation, and is used to confirm or cancel tsunami warnings previously issued on the basis of seismic data alone. Moreover, in case of submarine or coastal landslide generated tsunamis, which are not announced by clear seismic signals and are typically local, real-time detection at the coastline might be the fastest way to release a warning, even if the useful time for emergency operations might be limited. TEDA is an algorithm for real-time detection of tsunami signal on sea-level records, developed by the Tsunami Research Team of the University of Bologna. The development and testing of the algorithm has been accomplished within the framework of the Italian national project DPC-INGV S3 and the European project TRANSFER. The algorithm is to be implemented at station level, and it is based therefore only on sea-level data of a single station, either a coastal tide-gauge or an offshore buoy. TEDA's principle is to discriminate the first tsunami wave from the previous background signal, which implies the assumption that the tsunami waves introduce a difference in the previous sea-level signal. Therefore, in TEDA the instantaneous (most recent) and the previous background sea-level elevation gradients are characterized and compared by proper functions (IS and BS) that are updated at every new data acquisition. Detection is triggered when the instantaneous signal function passes a set threshold and at the same time it is significantly bigger compared to the previous background signal. 
The functions IS and BS depend on temporal parameters that allow the algorithm to be adapted to different situations: in general, coastal tide-gauges have a typical background spectrum depending on the location where the instrument is installed, due to local topography and bathymetry, while offshore buoys are
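
The published forms of IS and BS are not given in the abstract. The sketch below only illustrates the principle: compare a short-window ("instantaneous") mean absolute sea-level gradient against a long-window background value, and trigger when the former exceeds both a floor and a multiple of the latter. The window lengths, ratio threshold k and floor min_is are all assumed values, not TEDA's parameters:

```python
import math

def detect_tsunami(samples, short_n=5, long_n=120, k=3.0, min_is=0.01):
    """Alert at time t when the short-window mean absolute gradient (an
    IS stand-in) exceeds both a floor and k times the long-window
    background gradient (a BS stand-in)."""
    grads = [b - a for a, b in zip(samples, samples[1:])]
    alerts = []
    for t in range(long_n, len(grads)):
        IS = sum(abs(g) for g in grads[t - short_n:t]) / short_n
        BS = sum(abs(g) for g in grads[t - long_n:t - short_n]) / (long_n - short_n)
        if IS > min_is and IS > k * BS:
            alerts.append(t)
    return alerts

# synthetic record: low-amplitude tidal background, then a steep wave front
background = [0.01 * math.sin(0.1 * t) for t in range(300)]
wave = [0.5 * (t / 20.0) for t in range(20)]
record = background + [background[-1] + w for w in wave]
print(detect_tsunami(record))
```

Because both functions update with every new sample, such a detector runs at station level on a single sea-level record, which is the deployment mode the abstract describes.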

  15. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The spot segmentation algorithm proposed uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, Possibility fuzzy local information c means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c means (FLICM) algorithm by incorporating the typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images added with different levels of AWGN noise. The strength of the algorithm is tested by computing parameters such as the Segmentation matching factor (SMF), Probability of error (pe), Discrepancy distance (D) and Normal mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9 % and 0.7 % for high noise and low noise microarray images respectively, compared to the FLICM algorithm. The PFLICM algorithm is also applied on real microarray images and gene expression values are computed.
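
PFLICM's typicality and local-spatial terms are beyond a short sketch, but the FLICM/PFLICM family builds on standard fuzzy c-means, whose alternating update of centers and memberships is shown below on a toy set of pixel intensities. The data and parameters are illustrative, and this is the plain baseline, not the paper's algorithm:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    """Standard fuzzy c-means on 1-D data: alternate center and
    membership updates until the memberships stop changing."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(data)))
    U = U / U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ data) / Um.sum(axis=1)  # fuzzy-weighted means
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        W = d ** (-2.0 / (m - 1.0))
        U_new = W / W.sum(axis=0)               # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# toy "spot": dark background pixels near 30, bright foreground pixels near 200
pixels = np.array([28.0, 30.0, 32.0, 29.0, 31.0, 198.0, 200.0, 202.0, 199.0])
centers, U = fcm(pixels)
labels = U.argmax(axis=0)                       # crisp FG/BG assignment
print(sorted(centers), labels)
```

FLICM adds a spatial penalty so a pixel's membership also depends on its neighbours, and PFLICM further weights pixels by typicality, which makes the segmentation more robust to the AWGN noise discussed in the abstract.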

  16. [A new peak detection algorithm of Raman spectra].

    Science.gov (United States)

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors propose a new Raman peak recognition method, named the bi-scale correlation algorithm. The algorithm combines the correlation coefficient and the local signal-to-noise ratio at two scales to identify Raman peaks. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB, and then tested the algorithm on real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the proposed algorithm, versus 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy of the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and standard deviation of the peak-position identification error are both smaller for the proposed algorithm than for the continuous wavelet transform method. Simulation analysis and experimental verification show that the new algorithm has the following advantages: no need for human intervention, no need for de-noising or background-removal operations, higher recognition speed, and higher recognition accuracy, making it well suited to Raman peak identification.
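The bi-scale idea, requiring agreement between a correlation measure at two scales plus a local signal-to-noise test, can be illustrated with the hedged sketch below. The Gaussian template shape, the two scales, the thresholds, and the noise estimator are illustrative choices of ours; the paper's exact definitions are not reproduced here.

```python
import numpy as np

def correlate_template(y, width):
    """Sliding Pearson correlation between the signal and a Gaussian peak template."""
    x = np.arange(-3 * width, 3 * width + 1)
    tpl = np.exp(-x ** 2 / (2.0 * width ** 2))
    tpl = (tpl - tpl.mean()) / tpl.std()
    n = len(tpl)
    corr = np.zeros(len(y))
    for i in range(n // 2, len(y) - n // 2):
        seg = y[i - n // 2:i - n // 2 + n]
        s = seg.std()
        if s > 0:
            corr[i] = np.dot((seg - seg.mean()) / s, tpl) / n
    return corr

def biscale_peaks(y, w1=3, w2=9, corr_thresh=0.6, snr_thresh=6.0):
    """Keep positions where the template correlation is high at BOTH scales
    and the local signal-to-noise ratio passes a threshold."""
    c1, c2 = correlate_template(y, w1), correlate_template(y, w2)
    noise = np.std(y - np.convolve(y, np.ones(5) / 5, mode="same")) or 1e-12
    snr = (y - np.median(y)) / noise
    return [i for i in range(len(y))
            if c1[i] > corr_thresh and c2[i] > corr_thresh and snr[i] > snr_thresh]

x = np.arange(200.0)
spectrum = 10 * np.exp(-(x - 100) ** 2 / (2 * 4.0 ** 2))
print(biscale_peaks(spectrum))
```

Requiring both scales to agree is what suppresses narrow noise spikes (which correlate only at the small scale) and slow baseline drifts (which correlate only at the large scale).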

  17. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  18. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  19. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is invoked onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example of such a situation is a spacecraft in the cluster approaching another one, but entering safe mode and beginning to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang, as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values, which allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices into the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
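The bang-off-bang structure can be made concrete with a one-dimensional rest-to-rest toy example. This is for intuition only and is not the paper's ETP solution (which is a 3-D problem indexed by collision geometry into a precomputed table); the function below simply solves for the burn time of a full-acceleration / coast / full-braking profile that covers a given distance.

```python
import math

def bang_coast_bang(d, a_max, t_coast=0.0):
    """Rest-to-rest 1-D bang-off-bang maneuver covering distance d:
    accelerate at a_max for t_burn, coast for t_coast, brake at a_max for t_burn.
    Total distance is a_max*t_burn**2 + a_max*t_burn*t_coast, so t_burn solves
    a quadratic. Returns (t_burn, total_time)."""
    t_burn = (-t_coast + math.sqrt(t_coast ** 2 + 4.0 * d / a_max)) / 2.0
    return t_burn, 2.0 * t_burn + t_coast

t_b, total = bang_coast_bang(d=100.0, a_max=1.0)
print(t_b, total)  # 10 s per burn, 20 s total with no coast
```

Tabulating such closed-form (or near-closed-form) solutions over the free parameters is what makes the real-time table lookup described above possible.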

  20. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new moving-object detection algorithm aimed at fast detection and localization of moving objects. It represents each pixel by an image vector formed from the pixel and its neighbors, and models each YUV chrominance component with its own mixture of Gaussians. In order to make full use of spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect moving objects intact even when the foreground has low contrast with the background
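As a rough illustration of the per-pixel background-modelling idea, the sketch below keeps a single Gaussian per pixel (a simplification of the paper's per-channel mixture of Gaussians, and without the color-segmentation step): pixels far from the running mean are labelled foreground, and the background statistics are updated with an exponential moving average.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Single-Gaussian-per-pixel background model. Pixels more than k standard
    deviations from the running mean are foreground; mean and variance are then
    updated toward the current frame with learning rate alpha."""
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return foreground, mean, var

# Static noisy background with one bright moving blob
rng = np.random.default_rng(1)
mean = np.full((32, 32), 100.0)
var = np.full((32, 32), 4.0)
for t in range(20):
    frame = 100 + rng.normal(0, 1, (32, 32))
    frame[10:14, t:t + 4] += 80  # the moving object
    fg, mean, var = update_background(frame, mean, var)
print(int(fg.sum()))
```

A full mixture model extends this by keeping several (mean, variance, weight) triples per pixel, so that multimodal backgrounds (e.g. swaying foliage) are absorbed instead of flagged.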

  1. Algorithm for lamotrigine dose adjustment before, during, and after pregnancy

    DEFF Research Database (Denmark)

    Sabers, A

    2012-01-01

    Sabers A. Algorithm for lamotrigine dose adjustment before, during, and after pregnancy. Acta Neurol Scand: DOI: 10.1111/j.1600-0404.2011.01627.x. © 2011 John Wiley & Sons A/S. Background - Treatment with lamotrigine (LTG) during pregnancy is associated with a pronounced risk of seizure deterioration...

  2. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, implementing the SURF algorithm with a KINECT sensor for static sign language recognition can produce mismatched pairs in the palm area. This paper proposes a feature point selection algorithm that filters the SURF feature points step by step based on the number of feature points within an adaptive radius r and the distance between pairs of points. It not only greatly improves the recognition rate, but also ensures robustness to environmental factors such as skin color, illumination intensity, complex backgrounds, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  3. Simulation of Experimental Background using FLUKA

    Energy Technology Data Exchange (ETDEWEB)

    Rokni, Sayed

    1999-05-11

    In November 1997, Experiment T423 began acquiring data with the intentions of understanding the energy spectra of high-energy neutrons generated in the interaction of electrons with lead. The following describes a series of FLUKA simulations studying (1) particle yields in the absence of all background; (2) the background caused from scattering in the room; (3) the effects of the thick lead shielding which surrounded the detector; (4) the sources of neutron background created in this lead shielding; and (5) the ratio of the total background to the ideal yield. In each case, particular attention is paid to the neutron yield.

  4. Modeling the Thermal Signature of Natural Backgrounds

    National Research Council Canada - National Science Library

    Gamborg, Marius

    2002-01-01

    Two measuring stations have been established the purpose being to collect comprehensive databases of thermal signatures of background elements in addition to the prevailing meteorological conditions...

  5. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  6. Treatment Algorithm for Ameloblastoma

    Directory of Open Access Journals (Sweden)

    Madhumati Singh

    2014-01-01

    Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection with fibula-graft reconstruction, and radical resection with rib-graft reconstruction, and their recurrence rates are reviewed through a study of five cases.

  7. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) are concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.

  8. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows its information to be accessed by C++, Java and Python clients in a distributed environment. Some of the information has quite a complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...

  9. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  10. BALL - biochemical algorithms library 1.3

    Directory of Open Access Journals (Sweden)

    Stöckel Daniel

    2010-10-01

    Full Text Available Abstract Background The Biochemical Algorithms Library (BALL) is a comprehensive rapid application development framework for structural bioinformatics. It provides an extensive C++ class library of data structures and algorithms for molecular modeling and structural bioinformatics. Using BALL as a programming toolbox not only greatly reduces application development times but also helps ensure stability and correctness by avoiding the error-prone reimplementation of complex algorithms, replacing them with calls into a library that has been well tested by a large number of developers. In the ten years since its original publication, BALL has seen a substantial increase in functionality and numerous other improvements. Results Here, we discuss BALL's current functionality and highlight the key additions and improvements: support for additional file formats, molecular edit functionality, new molecular mechanics force fields, novel energy minimization techniques, docking algorithms, and support for cheminformatics. Conclusions BALL is available for all major operating systems, including Linux, Windows, and MacOS X. It is available free of charge under the GNU Lesser General Public License (LGPL). Parts of the code are distributed under the GNU General Public License (GPL). BALL is available as source code and binary packages from the project web site at http://www.ball-project.org. Recently, it has been accepted into the Debian project; integration into further distributions is currently being pursued.

  11. Training nuclei detection algorithms with simple annotations

    Directory of Open Access Journals (Sweden)

    Henning Kost

    2017-01-01

    Full Text Available Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.

  12. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  13. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  14. Quantum gate decomposition algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", applied to quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.

  15. KAM Tori Construction Algorithms

    Science.gov (United States)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.

  16. Irregular Applications: Architectures & Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  17. A statistical background noise correction sensitive to the steadiness of background noise.

    Science.gov (United States)

    Oppenheimer, Charles H

    2016-10-01

    A statistical background noise correction is developed for removing background noise contributions from measured source levels, producing a background noise-corrected source level. Like the standard background noise corrections of ISO 3741, ISO 3744, ISO 3745, and ISO 11201, the statistical background correction increases as the background level approaches the measured source level, decreasing the background noise-corrected source level. Unlike the standard corrections, the statistical background correction increases with steadiness of the background and is excluded from use when background fluctuation could be responsible for measured differences between the source and background noise levels. The statistical background noise correction has several advantages over the standard correction: (1) enveloping the true source with known confidence, (2) assuring physical source descriptions when measuring sources in fluctuating backgrounds, (3) reducing background corrected source descriptions by 1 to 8 dB for sources in steady backgrounds, and (4) providing a means to replace standardized background correction caps that incentivize against high precision grade methods.
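The standard corrections the abstract compares against (ISO 3741/3744/3745-style) are energetic subtractions: the background's mean-square pressure is subtracted from the combined measurement in the energy domain. The statistical correction proposed additionally accounts for background steadiness, which is not reproduced here; the sketch below shows only the usual energetic form for reference.

```python
import math

def background_corrected_level(l_source_plus_bg, l_bg):
    """Energetic background subtraction in dB: remove the background's
    mean-square pressure from the combined source+background measurement.
    Equivalent to L + 10*log10(1 - 10**(-delta/10)) with delta = L - L_bg."""
    delta = l_source_plus_bg - l_bg
    if delta <= 0:
        raise ValueError("combined level must exceed the background level")
    return l_source_plus_bg + 10.0 * math.log10(1.0 - 10.0 ** (-delta / 10.0))

# A source measured at 60 dB over a 54 dB background: correction is about -1.3 dB
print(round(background_corrected_level(60.0, 54.0), 2))  # → 58.74
```

Note how the correction grows as the background approaches the measured level (small `delta`), which is exactly the regime where the abstract argues background fluctuation must also be taken into account.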

  18. Hanford Site background: Part 1, Soil background for nonradioactive analytes. Revision 1, Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1993-04-01

    Volume two contains the following appendices: Description of soil sampling sites; sampling narrative; raw data soil background; background data analysis; sitewide background soil sampling plan; and use of soil background data for the detection of contamination at waste management unit on the Hanford Site.

  19. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  20. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  1. OccuPeak: ChIP-Seq peak calling based on internal background modelling

    NARCIS (Netherlands)

    de Boer, Bouke A.; van Duijvenboden, Karel; van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the

  2. Backtracking algorithm for lepton reconstruction with HADES

    International Nuclear Information System (INIS)

    Sellheim, P

    2015-01-01

    The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April-May 2012 HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u, with the highest multiplicities measured so far. Track reconstruction and particle identification in this high-track-density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector. Its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. In order to improve lepton identification, a backtracking algorithm was developed. In this contribution we show the results of the algorithm compared to the currently applied method for e± identification. The efficiency and purity of a reconstructed e± sample are discussed as well. (paper)

  3. Clinical algorithms to aid osteoarthritis guideline dissemination

    DEFF Research Database (Denmark)

    Meneses, S. R. F.; Goode, A. P.; Nelson, A. E

    2016-01-01

    Background: Numerous scientific organisations have developed evidence-based recommendations aiming to optimise the management of osteoarthritis (OA). Uptake, however, has been suboptimal. The purpose of this exercise was to harmonize the recent recommendations and develop a user-friendly treatment algorithm to facilitate translation of evidence into practice. Methods: We updated a previous systematic review on clinical practice guidelines (CPGs) for OA management. The guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation for quality and the standards for developing ... to facilitate the implementation of guidelines in clinical practice are necessary. The algorithms proposed are examples of how to apply recommendations in the clinical context, helping the clinician to visualise the patient flow and the timing of different treatment modalities. (C) 2016 Osteoarthritis Research...

  4. 41 CFR 128-1.8001 - Background.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Background. 128-1.8001 Section 128-1.8001 Public Contracts and Property Management Federal Property Management Regulations System (Continued) DEPARTMENT OF JUSTICE 1-INTRODUCTION 1.80-Seismic Safety Program § 128-1.8001 Background. The...

  5. Introduction to the background field method

    International Nuclear Information System (INIS)

    Abbott, L.F.; Brandeis Univ., Waltham, MA

    1982-01-01

    The background field approach to calculations in gauge field theories is presented. Conventional functional techniques are reviewed and the background field method is introduced. Feynman rules and renormalization are discussed and, as an example, the Yang-Mills β function is computed. (author)

  6. 12 CFR 408.1 - Background.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Background. 408.1 Section 408.1 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES PROCEDURES FOR COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT General § 408.1 Background. (a) The National Environmental Policy Act (NEPA) of 1969 (42 U.S.C...

  7. Observing a Gravitational Wave Background With Lisa

    National Research Council Canada - National Science Library

    Tinto, M; Armstrong, J; Estabrook, F

    2000-01-01

    .... Comparison of the conventional Michelson interferometer observable with the fully-symmetric Sagnac data-type allows unambiguous discrimination between a gravitational wave background and instrumental noise. The method presented here can be used to detect a confusion-limited gravitational wave background.

  8. 28 CFR 23.2 - Background.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Background. 23.2 Section 23.2 Judicial Administration DEPARTMENT OF JUSTICE CRIMINAL INTELLIGENCE SYSTEMS OPERATING POLICIES § 23.2 Background. It is... potential threats to the privacy of individuals to whom such data relates, policy guidelines for Federally...

  9. 16 CFR 1404.2 - Background.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Background. 1404.2 Section 1404.2 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS CELLULOSE INSULATION § 1404.2 Background. Based on available fire incident information, engineering analysis of the probable...

  10. Beam-gas Background Observations at LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214737; The ATLAS collaboration; Alici, Andrea; Lazic, Dragoslav-Laza; Alemany Fernandez, Reyes; Alessio, Federico; Bregliozzi, Giuseppe; Burkhardt, Helmut; Corti, Gloria; Guthoff, Moritz; Manousos, Athanasios; Sjoebaek, Kyrre; D'Auria, Saverio

    2017-01-01

    Observations of beam-induced background at LHC during 2015 and 2016 are presented in this paper. The four LHC experiments use the non-colliding bunches present in the physics-filling pattern of the accelerator to trigger on beam-gas interactions. During luminosity production the LHC experiments record the beam-gas interactions using dedicated background monitors. These data are sent to the LHC control system and are used to monitor the background levels at the experiments during accelerator operation. This is a very important measurement, since poor beam-induced background conditions can seriously affect the performance of the detectors. A summary of the evolution of the background levels during 2015 and 2016 is given in these proceedings.

  11. PENGARUH BACKGROUND MAHASISWA TERHADAP KINERJA AKADEMIK

    Directory of Open Access Journals (Sweden)

    Trianasari Angkawijaya

    2014-09-01

Full Text Available Abstract: The Effect of Students’ Background on Academic Performance. This study examines the effect of background variables on the academic performance of accounting students at a private university in Surabaya. The background variables under study included previous academic performance, prior knowledge of accounting, sex, motivation, preparedness, and expectations. Hypotheses were tested using OLS multiple linear regression with robust standard errors. The results show that previous academic performance, motivation, and expectations have positive and significant effects on the students’ overall academic performance in accounting, while preparedness affects only the students’ performance in management accounting. In contrast, prior knowledge of accounting and sex do not have significant impacts on the students’ overall academic performance. These findings indicate the importance of previous academic performance, as well as motivation and expectations, as background variables in current academic performance. Keywords: students’ background, academic performance, accounting

  12. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  13. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  14. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
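The closed-form solution of the trilateration equations that this record mentions can be illustrated with a small 2-D sketch. This is illustrative only, not the algorithm from the thesis: subtracting the first range equation from the others cancels the quadratic terms, leaving a linear system that is solved by least squares.

```python
import math

def trilaterate(beacons, ranges):
    """Closed-form 2-D trilateration sketch: subtracting the first range
    equation from the others yields a linear system, solved here by
    least squares (2x2 normal equations via Cramer's rule)."""
    (x1, y1), r1 = beacons[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
        rhs.append(xi**2 - x1**2 + yi**2 - y1**2 - ri**2 + r1**2)
    # Normal equations: (A^T A) p = A^T b
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(b, truth) for b in beacons]
print(trilaterate(beacons, ranges))  # recovers approximately (3.0, 4.0)
```

With noisy ranges the same normal equations return the least-squares position rather than the exact one.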

  15. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  16. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

Evolutionary algorithms are algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of evolutionary algorithms (EAs). This paper considers the Selfish Gene Algorithm (SFGA), one of the latest EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  17. Efficient motif finding algorithms for large-alphabet inputs

    Directory of Open Access Journals (Sweden)

    Pavlovic Vladimir

    2010-10-01

Full Text Available Abstract Background We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA) we observed reductions in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in Lipocalin and Zinc metallopeptidase, and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running-time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.
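As a point of comparison for what such algorithms compute, a brute-force exact baseline (no mismatch tolerance, far simpler than the deterministic algorithm in this record) just intersects the k-mer sets of the input strings:

```python
def common_kmers(strings, k):
    """Brute-force exact motif baseline: length-k substrings present in
    every input string (exact matches only, no mismatches)."""
    kmer_sets = [{s[i:i + k] for i in range(len(s) - k + 1)} for s in strings]
    shared = set.intersection(*kmer_sets)
    return sorted(shared)

print(common_kmers(["GATTACA", "TTACAGG", "ATTACAA"], 4))  # ['TACA', 'TTAC']
```

Allowing inexact instances blows up the search space (every k-mer has a mismatch neighborhood), which is precisely where specialized motif finders earn their efficiency.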

  18. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    Energy Technology Data Exchange (ETDEWEB)

    Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
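The MLEM update used for this kind of Poisson-noise inverse problem has a compact general form: x ← x · Aᵀ(y / Ax) / Aᵀ1. The sketch below is a toy illustration on a tiny system, not the authors' implementation, and omits the terrain segmentation:

```python
def mlem(A, y, x, n_iter=200):
    """Maximum-Likelihood Expectation-Maximization for y ~ Poisson(Ax):
    the multiplicative update x <- x * A^T(y / Ax) / A^T(1), which keeps
    estimates non-negative and suits under-determined Poisson problems."""
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivity A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # forward model Ax
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # backprojection
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

# Two unknown activities, three measurements (noiseless, consistent data).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 5.0, 7.0]
est = mlem(A, y, [1.0, 1.0])
print(est)  # converges toward [2.0, 5.0]
```

In the paper's setting, segmentation enters by constraining groups of background pixels to share one activity value, shrinking the number of unknowns per region.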

  19. String pair production in non homogeneous backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Bolognesi, S. [Department of Physics “E. Fermi” University of Pisa, and INFN - Sezione di Pisa,Largo Pontecorvo, 3, Ed. C, 56127 Pisa (Italy); Rabinovici, E. [Racah Institute of Physics, The Hebrew University of Jerusalem,91904 Jerusalem (Israel); Tallarita, G. [Departamento de Ciencias, Facultad de Artes Liberales,Universidad Adolfo Ibáñez, Santiago 7941169 (Chile)

    2016-04-28

We consider string pair production in non-homogeneous electric backgrounds. We study several particular configurations which can be addressed with the Euclidean world-sheet instanton technique, the analogue of the world-line instanton for particles. In the first case the string is suspended between two D-branes in flat space-time; in the second case the string lives in AdS and terminates on one D-brane (this realizes the holographic Schwinger effect). In some regions of parameter space the result is well approximated by the known analytical formulas, either for particle pair production in a non-homogeneous background or for string pair production in a homogeneous background. In other cases we see effects which are intrinsically stringy and related to the non-homogeneity of the background. Pair production is already enhanced for particles in time-dependent electric field backgrounds; the string nature enhances this even further. For spatially varying electric background fields, string pair production is less suppressed than the rate of particle pair production. We discuss in some detail how the critical field is affected by the non-homogeneity, for both time- and space-dependent electric field backgrounds. We also comment on what could be an interesting new prediction for the small-field limit. The third case we consider is pair production in holographic confining backgrounds with homogeneous and non-homogeneous fields.

  20. String pair production in non homogeneous backgrounds

    International Nuclear Information System (INIS)

    Bolognesi, S.; Rabinovici, E.; Tallarita, G.

    2016-01-01

We consider string pair production in non-homogeneous electric backgrounds. We study several particular configurations which can be addressed with the Euclidean world-sheet instanton technique, the analogue of the world-line instanton for particles. In the first case the string is suspended between two D-branes in flat space-time; in the second case the string lives in AdS and terminates on one D-brane (this realizes the holographic Schwinger effect). In some regions of parameter space the result is well approximated by the known analytical formulas, either for particle pair production in a non-homogeneous background or for string pair production in a homogeneous background. In other cases we see effects which are intrinsically stringy and related to the non-homogeneity of the background. Pair production is already enhanced for particles in time-dependent electric field backgrounds; the string nature enhances this even further. For spatially varying electric background fields, string pair production is less suppressed than the rate of particle pair production. We discuss in some detail how the critical field is affected by the non-homogeneity, for both time- and space-dependent electric field backgrounds. We also comment on what could be an interesting new prediction for the small-field limit. The third case we consider is pair production in holographic confining backgrounds with homogeneous and non-homogeneous fields.

  1. Slavnov-Taylor constraints for nontrivial backgrounds

    International Nuclear Information System (INIS)

    Binosi, D.; Quadri, A.

    2011-01-01

    We devise an algebraic procedure for the evaluation of Green's functions in SU(N) Yang-Mills theory in the presence of a nontrivial background field. In the ghost-free sector the dependence of the vertex functional on the background is shown to be uniquely determined by the Slavnov-Taylor identities in terms of a certain 1-PI correlator of the covariant derivatives of the ghost and the antighost fields. At nonvanishing background this amplitude is shown to encode the quantum deformations to the tree-level background-quantum splitting. The approach only relies on the functional identities of the model (Slavnov-Taylor identities, b-equation, antighost equation) and thus it is valid beyond perturbation theory, and, in particular, in a lattice implementation of the background field method. As an example of the formalism we analyze the ghost two-point function and the Kugo-Ojima function in an instanton background in SU(2) Yang-Mills theory, quantized in the background Landau gauge.

  2. Methods in algorithmic analysis

    CERN Document Server

    Dobrushkin, Vladimir A

    2009-01-01

    …helpful to any mathematics student who wishes to acquire a background in classical probability and analysis … This is a remarkably beautiful book that would be a pleasure for a student to read, or for a teacher to make into a year's course.-Harvey Cohn, Computing Reviews, May 2010

  3. Testing mapping algorithms of the cancer-specific EORTC QLQ-C30 onto EQ-5D in malignant mesothelioma

    NARCIS (Netherlands)

    D.T. Arnold (David); D. Rowen (Donna); M.M. Versteegh (Matthijs); A. Morley (Anna); C.E. Hooper (Clare); N.A. Maskell (Nicholas)

    2015-01-01

Background: In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D using existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy in

  4. Sources of the Radio Background Considered

    Energy Technology Data Exchange (ETDEWEB)

    Singal, J.; /KIPAC, Menlo Park /Stanford U.; Stawarz, L.; /KIPAC, Menlo Park /Stanford U. /Jagiellonian U., Astron. Observ.; Lawrence, A.; /Edinburgh U., Inst. Astron. /KIPAC, Menlo Park /Stanford U.; Petrosian, V.; /KIPAC, Menlo Park /Stanford U., Phys. Dept. /Stanford U., Appl. Phys. Dept.

    2011-08-22

We investigate possible origins of the extragalactic radio background reported by the ARCADE 2 collaboration. The surface brightness of the background is several times higher than that which would result from currently observed radio sources. We consider contributions to the background from diffuse synchrotron emission from clusters and the intergalactic medium, previously unrecognized flux from low surface brightness regions of radio sources, and faint point sources below the flux limit of existing surveys. By examining radio source counts available in the literature, we conclude that most of the radio background is produced by radio point sources that dominate at sub-μJy fluxes. We show that a truly diffuse background produced by electrons far from galaxies is ruled out because such energetic electrons would overproduce the observed X-ray/γ-ray background through inverse Compton scattering of the other photon fields. Unrecognized flux from low surface brightness regions of extended radio sources, or moderate flux sources missed entirely by radio source count surveys, cannot explain the bulk of the observed background, but may contribute as much as 10%. We consider both radio supernovae and radio quiet quasars as candidate sources for the background, and show that both fail to produce it at the observed level because of an insufficient number of objects and total flux, although radio quiet quasars contribute at the level of at least a few percent. We conclude that the most important population for production of the background is likely ordinary star-forming galaxies above redshift 1 characterized by an evolving radio/far-infrared correlation, which increases toward the radio loud with redshift.

  5. Extragalactic background light measurements and applications.

    Science.gov (United States)

    Cooray, Asantha

    2016-03-01

    This review covers the measurements related to the extragalactic background light intensity from γ-rays to radio in the electromagnetic spectrum over 20 decades in wavelength. The cosmic microwave background (CMB) remains the best measured spectrum with an accuracy better than 1%. The measurements related to the cosmic optical background (COB), centred at 1 μm, are impacted by the large zodiacal light associated with interplanetary dust in the inner Solar System. The best measurements of COB come from an indirect technique involving γ-ray spectra of bright blazars with an absorption feature resulting from pair-production off of COB photons. The cosmic infrared background (CIB) peaking at around 100 μm established an energetically important background with an intensity comparable to the optical background. This discovery paved the way for large aperture far-infrared and sub-millimetre observations resulting in the discovery of dusty, starbursting galaxies. Their role in galaxy formation and evolution remains an active area of research in modern-day astrophysics. The extreme UV (EUV) background remains mostly unexplored and will be a challenge to measure due to the high Galactic background and absorption of extragalactic photons by the intergalactic medium at these EUV/soft X-ray energies. We also summarize our understanding of the spatial anisotropies and angular power spectra of intensity fluctuations. We motivate a precise direct measurement of the COB between 0.1 and 5 μm using a small aperture telescope observing either from the outer Solar System, at distances of 5 AU or more, or out of the ecliptic plane. Other future applications include improving our understanding of the background at TeV energies and spectral distortions of CMB and CIB.

  6. Honing process optimization algorithms

    Science.gov (United States)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.

  7. Do instantons like a colorful background?

    Energy Technology Data Exchange (ETDEWEB)

    Gies, H.; Pawlowski, J.M.; Wetterich, C. [Heidelberg Univ. (Germany). Inst. fuer Theoretische Physik; Jaeckel, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2006-08-15

We investigate chiral symmetry breaking and color symmetry breaking in QCD. The effective potential of the corresponding scalar condensates is discussed in the presence of non-perturbative contributions from the semiclassical one-instanton sector. We concentrate on a color singlet scalar background which can describe chiral condensation, as well as a color octet scalar background which can generate mass for the gluons. Whereas a non-vanishing singlet chiral field is favored by the instantons, we have found no indication for a preference of color octet backgrounds. (orig.)

  8. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. It is based mainly on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.

  9. Reconciling taxonomy and phylogenetic inference: formalism and algorithms for describing discord and inferring taxonomic roots

    Directory of Open Access Journals (Sweden)

    Matsen Frederick A

    2012-05-01

    Full Text Available Abstract Background Although taxonomy is often used informally to evaluate the results of phylogenetic inference and the root of phylogenetic trees, algorithmic methods to do so are lacking. Results In this paper we formalize these procedures and develop algorithms to solve the relevant problems. In particular, we introduce a new algorithm that solves a "subcoloring" problem to express the difference between a taxonomy and a phylogeny at a given rank. This algorithm improves upon the current best algorithm in terms of asymptotic complexity for the parameter regime of interest; we also describe a branch-and-bound algorithm that saves orders of magnitude in computation on real data sets. We also develop a formalism and an algorithm for rooting phylogenetic trees according to a taxonomy. Conclusions The algorithms in this paper, and the associated freely-available software, will help biologists better use and understand taxonomically labeled phylogenetic trees.

  10. Timing of Pulsed Prompt Gamma Rays for Background Discrimination

    International Nuclear Information System (INIS)

    Hueso-Gonzalez, F.; Golnik, C.; Berthel, M.; Dreyer, A.; Kormoll, T.; Rohling, H.; Pausch, G.; Enghardt, W.; Fiedler, F.; Heidel, K.; Schoene, S.; Schwengner, R.; Wagner, A.

    2013-06-01

In the context of particle therapy, particle range verification is a major challenge for the quality assurance of the treatment. One approach is the measurement of the prompt gamma rays resulting from the tissue irradiation. A Compton camera based on several planes of position-sensitive gamma ray detectors, together with an imaging algorithm, is expected to reconstruct the prompt gamma ray emission density profile, which is correlated with the dose distribution. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and OncoRay, a camera prototype has been developed consisting of two scatter planes (CdZnTe cross strip detectors) and an absorber plane (Lu2SiO5 block detector). The data acquisition is based on VME electronics and handled by software developed on the ROOT platform. The prototype was tested at the linear electron accelerator ELBE at HZDR, which was set up to produce bunched bremsstrahlung photons. Their spectrum has similarities with the one expected from prompt gamma rays in the clinical case, and these are also bunched with the accelerator frequency. The time correlation between the pulsed prompt photons and the measured signals was used for background discrimination, achieving a time resolution of 3 ns (2 ns) FWHM for the CZT (LSO) detector. A time-walk correction was applied for the LSO detector and improved its resolution to 1 ns. In conclusion, the detectors are suitable for time-resolved background discrimination in pulsed clinical particle accelerators. Ongoing tasks are the test of the imaging algorithms and the quantitative comparison with simulations. Further experiments will be performed at proton accelerators. (authors)
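The pulsed-beam background discrimination described in this record amounts to a phase cut: fold each event's arrival time onto the accelerator period and keep only events near the prompt-photon phase. A minimal sketch, in which all names and numbers are illustrative rather than the experiment's actual settings:

```python
def in_beam_window(t_event_ns, period_ns, t0_ns, width_ns):
    """Accept an event if its arrival time, folded onto the accelerator
    period, lies within +/- width/2 of the prompt-photon phase t0.
    The min() handles wrap-around at the period boundary."""
    phase = (t_event_ns - t0_ns) % period_ns
    return min(phase, period_ns - phase) <= width_ns / 2.0

# Illustrative numbers only: 100 ns bunch spacing, prompt phase at 10 ns,
# 6 ns acceptance window (roughly twice the detector time resolution).
print(in_beam_window(111.0, 100.0, 10.0, 6.0))  # True: 1 ns from the prompt phase
print(in_beam_window(60.0, 100.0, 10.0, 6.0))   # False: mid-period background
```

Widening the window trades background rejection against prompt-photon efficiency, which is why the detector time resolution (and the time-walk correction) matters.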

  11. Uniform background assumption produces misleading lung EIT images.

    Science.gov (United States)

    Grychtol, Bartłomiej; Adler, Andy

    2013-06-01

Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of whether and when EIT can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes.

  12. Uniform background assumption produces misleading lung EIT images

    International Nuclear Information System (INIS)

    Grychtol, Bartłomiej; Adler, Andy

    2013-01-01

Electrical impedance tomography (EIT) estimates an image of conductivity change within a body from stimulation and measurement at body surface electrodes. There is significant interest in EIT for imaging the thorax, as a monitoring tool for lung ventilation. To be useful in this application, we require an understanding of whether and when EIT can produce inaccurate images. In this paper, we study the consequences of the homogeneous background assumption, frequently made in linear image reconstruction, which introduces a mismatch between the reference measurement and the linearization point. We show in simulation and experimental data that the resulting images may contain large and clinically significant errors. A 3D finite element model of thorax conductivity is used to simulate EIT measurements for different heart and lung conductivity, size and position, as well as different amounts of gravitational collapse and ventilation-associated conductivity change. Three common linear EIT reconstruction algorithms are studied. We find that the asymmetric position of the heart can cause EIT images of ventilation to show up to 60% undue bias towards the left lung and that the effect is particularly strong for a ventilation distribution typical of mechanically ventilated patients. The conductivity gradient associated with gravitational lung collapse causes conductivity changes in non-dependent lung to be overestimated by up to 100% with respect to the dependent lung. Eliminating the mismatch by using a realistic conductivity distribution in the forward model of the reconstruction algorithm strongly reduces these undesirable effects. We conclude that subject-specific anatomically accurate forward models should be used in lung EIT and extra care is required when analysing EIT images of subjects whose background conductivity distribution in the lungs is known to be heterogeneous or exhibiting large changes. (paper)

  13. Making maps of the cosmic microwave background: The MAXIMA example

    Science.gov (United States)

    Stompor, Radek; Balbi, Amedeo; Borrill, Julian D.; Ferreira, Pedro G.; Hanany, Shaul; Jaffe, Andrew H.; Lee, Adrian T.; Oh, Sang; Rabii, Bahman; Richards, Paul L.; Smoot, George F.; Winant, Celeste D.; Wu, Jiun-Huei Proty

    2002-01-01

    This work describes cosmic microwave background (CMB) data analysis algorithms and their implementations, developed to produce a pixelized map of the sky and a corresponding pixel-pixel noise correlation matrix from time ordered data for a CMB mapping experiment. We discuss in turn algorithms for estimating noise properties from the time ordered data, techniques for manipulating the time ordered data, and a number of variants of the maximum likelihood map-making procedure. We pay particular attention to issues pertinent to real CMB data, and present ways of incorporating them within the framework of maximum likelihood map making. Making a map of the sky is shown to be not only an intermediate step rendering an image of the sky, but also an important diagnostic stage, when tests for and/or removal of systematic effects can efficiently be performed. The case under study is the MAXIMA-I data set. However, the methods discussed are expected to be applicable to the analysis of other current and forthcoming CMB experiments.
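The maximum-likelihood map-making step discussed in this record solves m = (PᵀN⁻¹P)⁻¹PᵀN⁻¹d, where P is the pointing matrix and N the time-domain noise covariance. In the simplest case of white noise (diagonal N) and one pixel per time sample, this reduces to inverse-variance binning; the sketch below shows only that special case, not the MAXIMA pipeline:

```python
def ml_map(pixels, data, sigma):
    """Maximum-likelihood map for uncorrelated (white) noise: with one pixel
    observed per time sample, (P^T N^-1 P)^-1 P^T N^-1 d reduces to
    inverse-variance binning of the time-ordered data."""
    npix = max(pixels) + 1
    num = [0.0] * npix  # accumulates sum of weight * datum per pixel
    den = [0.0] * npix  # accumulates sum of weights per pixel
    for p, d, s in zip(pixels, data, sigma):
        w = 1.0 / (s * s)
        num[p] += w * d
        den[p] += w
    return [a / b if b > 0 else 0.0 for a, b in zip(num, den)]

# Three time samples hitting two pixels, with per-sample noise levels.
print(ml_map([0, 0, 1], [1.0, 3.0, 5.0], [1.0, 1.0, 2.0]))  # [2.0, 5.0]
```

With correlated (1/f) noise, N is dense in the time domain and the full matrix solve, or an iterative solver, is required, which is what real map-making codes implement.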

  14. Hyperspectral imaging simulation of object under sea-sky background

    Science.gov (United States)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

Remote sensing image simulation plays an important role in spaceborne/airborne load demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and in search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating spectral images of objects in sea scenes is proposed. By developing an imaging simulation model that accounts for the object, background, atmosphere conditions, and sensor, it is possible to examine how changes in wind speed, atmosphere conditions, and other environmental factors affect spectral image quality in a complex sea scene. First, the sea scattering model is established based on the Phillips sea spectrum model, rough surface scattering theory, and the volume scattering characteristics of water. The measured bidirectional reflectance distribution function (BRDF) data of objects are fit to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor, and the atmospheric backscattered radiance, and a Monte Carlo ray tracing method is used to calculate the composite scattering of the object on the sea surface and the spectral image. Finally, the object spectrum is acquired by spatial transformation, radiometric degradation, and the addition of noise. The model connects the spectral image with the environmental, object, and sensor parameters, providing a tool for load demonstration and algorithm development.

  15. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape: the hull obtained by rolling the alpha ball is theoretically equivalent to morphological opening/closing. The algorithm relies on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate its validity and its superior efficiency over the naive algorithm.
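The dilation/erosion composition that the fast alpha-shape method avoids can be illustrated with a naive O(n·k) profile filter. This sketch uses a flat (line-segment) structuring element rather than the disk of the E-system, so it only shows the opening/closing structure, not the paper's method:

```python
def dilate(profile, k):
    # flat structuring element of half-width k: sliding maximum
    n = len(profile)
    return [max(profile[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def erode(profile, k):
    # sliding minimum over the same window
    n = len(profile)
    return [min(profile[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def closing(profile, k):
    # closing = dilation followed by erosion: fills narrow valleys
    return erode(dilate(profile, k), k)

def opening(profile, k):
    # opening = erosion followed by dilation: removes narrow spikes
    return dilate(erode(profile, k), k)
```

For a profile with a one-sample pit, `closing` fills it; for a one-sample spike, `opening` removes it, which is the spike behaviour the abstract's interpolation step guards against.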

  16. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  17. On the microwave background spectrum and noise

    International Nuclear Information System (INIS)

    De Bernardis, P.; Masi, S.

    1982-01-01

    We show that the combined measurement of the cosmic background radiation (CBR) intensity and noise can provide direct information on the temperature and the emissivity of the source responsible for the CBR. (orig.)

  18. Quantum background independence in string theory

    International Nuclear Information System (INIS)

    Witten, E.

    1994-01-01

    Not only in physical string theories, but also in some highly simplified situations, background independence has been difficult to understand. It is argued that the ''holomorphic anomaly'' of Bershadsky, Cecotti, Ooguri and Vafa gives a fundamental explanation of some of the problems. Moreover, their anomaly equation can be interpreted in terms of a rather peculiar quantum version of background independence: in systems afflicted by the anomaly, background independence does not hold order by order in perturbation theory, but the exact partition function as a function of the coupling constants has a background independent interpretation as a state in an auxiliary quantum Hilbert space. The significance of this auxiliary space is otherwise unknown. (author). 23 refs

  19. Background-cross-section-dependent subgroup parameters

    International Nuclear Information System (INIS)

    Yamamoto, Toshihisa

    2003-01-01

    A new set of subgroup parameters was derived that reproduces the self-shielded cross section over a wide range of background cross sections. The subgroup parameters are expressed as a rational equation whose numerator and denominator are expansion series in the background cross section, so that the background-cross-section dependence is taken into account exactly. The advantage of the new subgroup parameters is that they reproduce the self-shielding effect not only on a group basis but also on a subgroup basis. An adaptive method is also proposed that uses a fitting procedure to evaluate the background-cross-section dependence of the parameters. One of the simple fitting formulas reproduced the self-shielded subgroup cross section to within 1% of the precise evaluation. (author)
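A minimal sketch of the rational form described above (the symbols a_i, b_i and the expansion order N are illustrative, not the paper's notation): with background cross section σ₀, the self-shielded cross section is fitted as

```latex
\sigma_{\mathrm{eff}}(\sigma_0)
= \frac{\sum_{i=0}^{N} a_i\,\sigma_0^{\,i}}{\sum_{i=0}^{N} b_i\,\sigma_0^{\,i}}
```

so that the σ₀-dependence enters both numerator and denominator explicitly, which is what allows a single parameter set to track the self-shielding over a wide range of σ₀.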

  20. REQUEST FOR EXPRESSIONS OF INTEREST For Background ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    sbickram

    2012-07-10

    Jul 10, 2012 ... the applicant's qualifications and ability to undertake the hot spot background study. ... any interviews, presentations and subsequent proposals, are the sole ... The program will focus on increasing the resilience of the most ...

  1. Background music: effects on attention performance.

    Science.gov (United States)

    Shih, Yi-Nuo; Huang, Rong-Hwa; Chiang, Hsin-Yu

    2012-01-01

    Previous studies indicate that noise may affect worker attention. However, some background music in the work environment can increase worker satisfaction and productivity. This study compared how music with, and without, lyrics affects human attention. One hundred and two participants (56 males and 46 females) aged 20-24 years were recruited. Background music with and without lyrics was tested for its effect on listener concentration in attention testing using a randomized controlled trial (RCT) design. The comparison revealed that background music with lyrics had significant negative effects on concentration and attention. The findings suggest that, if background music is played in the work environment, music without lyrics is preferable, because songs with lyrics are likely to reduce worker attention and performance.

  2. History and background of the project

    Digital Repository Service at National Institute of Oceanography (India)

    Jauhari, P.; Nair, R.R.

    The history of oceanography, the discovery of manganese nodules, and the background of the developments in nodule research and mining are given. The first nodules were collected in 1981 on board the research vessel R V Gaveshani. Following the success...

  3. 47 CFR 215.1 - Background.

    Science.gov (United States)

    2010-10-01

    ... POINT FOR ELECTROMAGNETIC PULSE (EMP) INFORMATION § 215.1 Background. (a) The nuclear electromagnetic pulse (EMP) is part of the complex environment produced by nuclear explosions. It consists of transient...

  4. 32 CFR 770.49 - Background.

    Science.gov (United States)

    2010-07-01

    ..., Washington § 770.49 Background. (a) Puget Sound Naval Shipyard is a major naval ship repair facility, with... interruption. Additionally, most of Puget Sound Naval Shipyard is dedicated to heavy industrial activity where...

  5. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography. Cryptography secures a file by encoding it, so that anyone not holding the key cannot decode the hidden text to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem: a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the file is encrypted with the TEA algorithm, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
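The symmetric half of such a hybrid scheme can be sketched with the standard TEA block cipher (64-bit block, 128-bit key, 32 rounds, the usual constant 0x9E3779B9). This is the textbook TEA reference cipher, not the paper's code, and the LUC key-wrapping step is not shown:

```python
DELTA = 0x9E3779B9  # key schedule constant
MASK = 0xFFFFFFFF   # keep arithmetic in 32 bits

def tea_encrypt(v0, v1, key):
    """Encrypt one 64-bit block (two 32-bit halves) with a 128-bit key
    given as four 32-bit words."""
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    """Invert tea_encrypt by running the rounds backwards."""
    k0, k1, k2, k3 = key
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1
```

In the hybrid arrangement described above, the four key words would themselves be encrypted with the recipient's LUC public key before transmission.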

  6. Texture orientation-based algorithm for detecting infrared maritime targets.

    Science.gov (United States)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter, such as ocean waves, clouds or sea fog, usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to handle. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are then combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  7. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. Thereby enabling shorter response times and greater autonomy for the system under control.
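The strict-priority selection described above can be sketched as a greedy pass over the goal set. This is a hypothetical, much-simplified model (single shared resource, function and field names are ours, not the AVA/VML software's):

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection: walk goals from highest priority
    down, admitting each goal whose resource usage still fits.

    goals: list of (priority, usage) pairs; higher priority wins.
    Returns the admitted goals in selection order.
    """
    chosen, used = [], 0
    for priority, usage in sorted(goals, key=lambda g: -g[0]):
        if used + usage <= capacity:
            chosen.append((priority, usage))
            used += usage
    return chosen
```

Because selection is a single cheap pass, it can be re-run "just-in-time" whenever a goal is added, removed, or updated, which mirrors the incremental re-planning behaviour the abstract describes; a lower-priority goal can never displace an already-admitted higher-priority one.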

  8. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
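The compression-based approximation of these incomputable quantities can be sketched with a general-purpose compressor (zlib here; the paper's exact estimator and normalization may differ):

```python
import zlib

def c(s: bytes) -> int:
    # compressed size as a computable stand-in for Kolmogorov complexity
    return len(zlib.compress(s, 9))

def cross_complexity(x: bytes, y: bytes) -> int:
    # cost of describing x when y is available, approximated by how much
    # appending x to y costs beyond compressing y alone
    return c(y + x) - c(y)

def relative_complexity(x: bytes, y: bytes) -> int:
    # compression power lost by describing x via y instead of by itself
    return cross_complexity(x, y) - c(x)
```

When y closely resembles x, the compressor exploits the shared structure and the relative complexity drops sharply; for unrelated y it stays near zero, which is what makes the quantity usable as a divergence between arbitrary strings.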

  9. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics, based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take load sequence effects into account. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  10. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

    Science.gov (United States)

    Petry, Frederick E.; And Others

    1993-01-01

    Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…

  11. Mathematical Background of Public Key Cryptography

    DEFF Research Database (Denmark)

    Frey, Gerhard; Lange, Tanja

    2005-01-01

    The two main systems used for public key cryptography are RSA and protocols based on the discrete logarithm problem in some cyclic group. We focus on the latter problem and state cryptographic protocols and mathematical background material.

  12. A Practical Theorem on Gravitational Wave Backgrounds

    OpenAIRE

    Phinney, E. S.

    2001-01-01

    There is an extremely simple relationship between the spectrum of the gravitational wave background produced by a cosmological distribution of discrete gravitational wave sources, the total time-integrated energy spectrum of an individual source, and the present-day comoving number density of remnants. Stated in this way, the background is entirely independent of the cosmology, and only weakly dependent on the evolutionary history of the sources. This relationship allows one easily to compute...
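Schematically, the relationship described above takes the following form (this is the standard statement of the result; the notation here is illustrative): with ρ_c the critical density, N(z) the comoving number density of remnants formed at redshift z, and dE_gw/df_r the time-integrated energy spectrum of a single source in its rest frame,

```latex
\Omega_{\mathrm{gw}}(f)\,\rho_c c^{2}
= \int_{0}^{\infty} \frac{N(z)}{1+z}
\left( f_r \frac{dE_{\mathrm{gw}}}{df_r} \right)\bigg|_{f_r = f(1+z)} \, dz
```

The factor 1/(1+z) accounts for redshifting of the emitted energy, and cosmological parameters enter only through N(z), which is why the background is insensitive to the cosmology itself.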

  13. Isotherms clustering in cosmic microwave background

    International Nuclear Information System (INIS)

    Bershadskii, A.

    2006-01-01

    Isotherms clustering in the cosmic microwave background (CMB) has been studied using the 3-year WMAP data on cosmic microwave background radiation. It is shown that the isotherms clustering could be produced by the baryon-photon fluid turbulence in the last scattering surface. The Taylor-microscale Reynolds number of the turbulence is estimated directly from the CMB data as Re_λ ∼ 10².

  14. Classification of supersymmetric backgrounds of string theory

    International Nuclear Information System (INIS)

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2007-01-01

    We review the recent progress made towards the classification of supersymmetric solutions in ten and eleven dimensions with emphasis on those of IIB supergravity. In particular, the spinorial geometry method is outlined and adapted to nearly maximally supersymmetric backgrounds. We then demonstrate its effectiveness by classifying the maximally supersymmetric IIB G-backgrounds and by showing that N=31 IIB solutions do not exist. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  15. Superstring gravitational wave backgrounds with spacetime supersymmetry

    CERN Document Server

    Kiritsis, Elias B; Lüst, Dieter; Kiritsis, E; Kounnas, C; Lüst, D

    1994-01-01

    We analyse the stringy gravitational wave background based on the current algebra E^c_2. We determine its exact spectrum and construct the modular invariant vacuum energy. The corresponding N=1 extension is also constructed. The algebra is again mapped to free bosons and fermions and we show that this background has N=4 (N=2) unbroken spacetime supersymmetry in the type II (heterotic) case.

  16. Background dose subtraction in personnel dosimetry

    International Nuclear Information System (INIS)

    Picazo, T.; Llorca, N.; Alabau, J.

    1997-01-01

    In this paper it is proposed to consider the mode of the frequency distribution of the low dose dosemeters from each clinic that uses X rays as the background environmental dose that should be subtracted from the personnel dosimetry to evaluate the doses due to practice. The problems and advantages of this indirect method to estimate the environmental background dose are discussed. The results for 60 towns are presented. (author)

  17. Moving object detection using background subtraction

    CERN Document Server

    Shaikh, Soharab Hossain; Chaki, Nabendu

    2014-01-01

    This Springer Brief presents a comprehensive survey of the existing methodologies of background subtraction methods. It presents a framework for quantitative performance evaluation of different approaches and summarizes the public databases available for research purposes. This well-known methodology has applications in moving object detection from video captured with a stationary camera, separating foreground and background objects, and object classification and recognition. The authors identify common challenges faced by researchers, including gradual or sudden illumination change and dynamic backgrounds.

  18. Aircraft and background noise annoyance effects

    Science.gov (United States)

    Willshire, K. F.

    1984-01-01

    To investigate annoyance of multiple noise sources, two experiments were conducted. The first experiment, which used 48 subjects, was designed to establish annoyance-noise level functions for three community noise sources presented individually: jet aircraft flyovers, air conditioner, and traffic. The second experiment, which used 216 subjects, investigated the effects of background noise on aircraft annoyance as a function of noise level and spectrum shape; and the differences between overall, aircraft, and background noise annoyance. In both experiments, rated annoyance was the dependent measure. Results indicate that the slope of the linear relationship between annoyance and noise level for traffic is significantly different from that of flyover and air conditioner noise and that further research was justified to determine the influence of the two background noises on overall, aircraft, and background noise annoyance (e.g., experiment two). In experiment two, total noise exposure, signal-to-noise ratio, and background source type were found to have effects on all three types of annoyance. Thus, both signal-to-noise ratio, and the background source must be considered when trying to determine community response to combined noise sources.

  19. Low energy background radiation in India

    International Nuclear Information System (INIS)

    Gopinath, D.V.

    1980-01-01

    Spectral distribution of background radiation at 9 locations spread all over India has been measured. Specifications of the counting set-up standardised for measurement are given. At one of the places, the background spectrum was measured with four different types of detectors. A broad peak in 60-100 keV with differing intensity and standard deviation is observed in all the spectra. In the Kalpakkam area, the peak near the seashore is observed to be more intense than away from the shore. This could be due to the presence of monazite sands on the seashore. The natural background radiation is observed to have a steep rise below 20 keV. Peak intensity is found to be independent of both the location (i.e. the source of energy) and the type of detector used for measurement. The calculated spectra due to multiple scattered radiation (with a nominal source energy of 1 MeV) through paraffin wax and the measured background spectrum with the detector shielded with 20 cm wax show good agreement above 40 keV. This shows that 80 keV hump in the natural background radiation is a property of air. The peak, therefore, in the spectra of natural background radiation is essentially a property of medium and it is independent of location or detector. (M.G.B.)

  20. Background of SAM atom-fraction profiles

    International Nuclear Information System (INIS)

    Ernst, Frank

    2017-01-01

    Atom-fraction profiles acquired by SAM (scanning Auger microprobe) have important applications, e.g. in the context of alloy surface engineering by infusion of carbon or nitrogen through the alloy surface. However, such profiles often exhibit an artifact in form of a background with a level that anti-correlates with the local atom fraction. This article presents a theory explaining this phenomenon as a consequence of the way in which random noise in the spectrum propagates into the discretized differentiated spectrum that is used for quantification. The resulting model of “energy channel statistics” leads to a useful semi-quantitative background reduction procedure, which is validated by applying it to simulated data. Subsequently, the procedure is applied to an example of experimental SAM data. The analysis leads to conclusions regarding optimum experimental acquisition conditions. The proposed method of background reduction is based on general principles and should be useful for a broad variety of applications. - Highlights: • Atom-fraction–depth profiles of carbon measured by scanning Auger microprobe • Strong background, varies with local carbon concentration. • Needs correction e.g. for quantitative comparison with simulations • Quantitative theory explains background. • Provides background removal strategy and practical advice for acquisition

  2. Non-collision backgrounds in ATLAS

    CERN Document Server

    Gibson, S M; The ATLAS collaboration

    2012-01-01

    The proton-proton collision events recorded by the ATLAS experiment sit on top of a background that is due to both collision debris and non-collision components. The latter comprises three types: beam-induced backgrounds, cosmic particles, and detector noise. We present studies that focus on the first two of these. We give a detailed description of beam-related and cosmic backgrounds based on the full 2011 ATLAS data set, and present their rates throughout the whole data-taking period. Studies of correlations between tertiary proton halo and muon backgrounds, as well as residual pressure and the resulting beam-gas events seen in beam-condition monitors, will be presented. Results of simulations based on the LHC geometry and its parameters will be presented; they help to better understand the features of beam-induced backgrounds in each ATLAS sub-detector. The studies of beam-induced backgrounds in ATLAS reveal their characteristics and serve as a basis for designing rejection tools that can be applied in physic...

  3. Background enhancement in breast MR: Correlation with breast density in mammography and background echotexture in ultrasound

    International Nuclear Information System (INIS)

    Ko, Eun Sook; Lee, Byung Hee; Choi, Hye Young; Kim, Rock Bum; Noh, Woo-Chul

    2011-01-01

    Objective: This study aimed to determine whether background enhancement on MR was related to mammographic breast density or ultrasonographic background echotexture in premenopausal and postmenopausal women. Materials and methods: We studied 142 patients (79 premenopausal, 63 postmenopausal) who underwent mammography, ultrasonography, and breast MR. We reviewed the mammograms for overall breast density of the contralateral normal breast according to the four-point scale of the BI-RADS classification. Ultrasound findings were classified as homogeneous or heterogeneous background echotexture according to the BI-RADS lexicon. We rated background enhancement on contralateral-breast MR into four categories based on subtraction images: absent, mild, moderate, and marked. All imaging findings were interpreted independently by two readers without knowledge of menstrual status or the imaging findings of the other modalities. Results: There were significant differences between the premenopausal and postmenopausal groups in the distribution of mammographic breast density, ultrasonographic background echotexture, and degree of background enhancement. There was no significant correlation between mammographic density and background enhancement. There was a significant relationship between ultrasonographic background echotexture and background enhancement in both the premenopausal and postmenopausal groups. Conclusion: There is a significant correlation between ultrasonographic background echotexture and background enhancement on MR regardless of menopausal status. More caution is needed when interpreting, or scheduling, breast MR for women showing heterogeneous background echotexture.

  4. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by reducing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed using a phantom with an unrealistic model and with heterogeneous background and noise that is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (a homogeneous background phantom, a heterogeneous background phantom, and an anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded a much lower CNR due to its high background-noise fluctuations. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
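The CNR figure of merit used for the in-plane comparison can be computed as follows (one common definition, mean signal/background difference over background noise; the paper's exact ROI choices and normalization are not specified here):

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio from two boolean ROI masks:
    |mean(signal) - mean(background)| / std(background)."""
    signal_mean = image[signal_mask].mean()
    background = image[background_mask]
    return abs(signal_mean - background.mean()) / background.std()
```

Reporting CNR per reconstruction algorithm on the same phantom slice is what supports conclusions like "EM slightly above BP, FBP much lower" in the abstract.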

  5. The binary collision approximation: Background and introduction

    International Nuclear Information System (INIS)

    Robinson, M.T.

    1992-08-01

    The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented

  6. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives in which one, the other, or both sub-algorithms are turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered into graph and tree search strategies.
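The uninformed-versus-informed gap that the study measures can be illustrated on a toy maze. The sketch below (an illustration of the baselines, not the fungal algorithm itself) counts cells expanded by DFS versus A* with a Manhattan-distance heuristic:

```python
import heapq

# A tiny maze: '#' walls, other cells free; search from (0,0) to (3,3).
MAZE = ["S.#.",
        ".#..",
        "...#",
        "#..G"]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0]) and MAZE[nr][nc] != "#":
            yield nr, nc

def dfs_cells(start, goal):
    """Uninformed depth-first search; returns the number of cells expanded."""
    stack, seen, expanded = [start], {start}, 0
    while stack:
        cell = stack.pop()
        expanded += 1
        if cell == goal:
            return expanded
        for nxt in neighbors(*cell):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)

def astar_cells(start, goal):
    """A* with a Manhattan-distance heuristic; returns cells expanded."""
    h = lambda rc: abs(rc[0] - goal[0]) + abs(rc[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best, expanded = {start: 0}, 0
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        expanded += 1
        if cell == goal:
            return expanded
        for nxt in neighbors(*cell):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

d = dfs_cells((0, 0), (3, 3))
a = astar_cells((0, 0), (3, 3))
print(d, a)
```

On this maze the informed search expands fewer cells than DFS, mirroring the paper's ordering of the artificial baselines.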

  7. The background in the experiment Gerda

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Andreotti, E.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Barnabé Heider, M.; Barros, N.; Baudis, L.; Bauer, C.; Becerici-Schmidt, N.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Brudanin, V.; Brugnera, R.; Budjáš, D.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; Cossavella, F.; Demidova, E. V.; Domula, A.; Egorov, V.; Falkenstein, R.; Ferella, A.; Freund, K.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gotti, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Guthikonda, K. K.; Hampel, W.; Hegai, A.; Heisel, M.; Hemmer, S.; Heusser, G.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Ioannucci, L.; Csáthy, J. Janicskó; Jochum, J.; Junker, M.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Klimenko, A.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Liu, X.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Machado, A. A.; Majorovits, B.; Maneschg, W.; Nemchenok, I.; Nisi, S.; O'Shaughnessy, C.; Palioselitis, D.; Pandola, L.; Pelczar, K.; Pessina, G.; Pullia, A.; Riboldi, S.; Sada, C.; Salathe, M.; Schmitt, C.; Schreiner, J.; Schulz, O.; Schwingenheuer, B.; Schönert, S.; Shevchik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Strecker, H.; Tarka, M.; Ur, C. A.; Vasenko, A. A.; Volynets, O.; von Sturm, K.; Wagner, V.; Walter, M.; Wegmann, A.; Wester, T.; Wojcik, M.; Yanovich, E.; Zavarise, P.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2014-04-01

    The GERmanium Detector Array (Gerda) experiment at the Gran Sasso underground laboratory (LNGS) of INFN is searching for neutrinoless double beta (0νββ) decay of ⁷⁶Ge. The signature of the signal is a monoenergetic peak at 2039 keV, the Qββ value of the decay. To avoid bias in the signal search, the present analysis does not consider events that fall in a 40 keV wide region centered around Qββ. The main parameters needed for the analysis are described. A background model was developed to describe the observed energy spectrum. The model contains several contributions that are expected on the basis of material screening or that are established by the observation of characteristic structures in the energy spectrum. The model predicts a flat energy spectrum for the blinding window around Qββ with a background index ranging from 17.6 to 23.8 · 10⁻³ cts/(keV kg yr). A part of the data not considered before has been used to test the predictions of the background model. The observed number of events in this energy region is consistent with the background model. The background at Qββ is dominated by close sources, mainly due to ⁴²K, ²¹⁴Bi, ²²⁸Th, ⁶⁰Co and α emitting isotopes from the ²²⁶Ra decay chain. The individual fractions depend on the assumed locations of the contaminants. It is shown that, after removal of the known peaks, the energy spectrum can be fitted in an energy range of 200 keV around Qββ with a constant background. This gives a background index consistent with the full model, with uncertainties of the same size.
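The background index quoted above is simply a count rate normalized by energy window, detector mass, and live time. A minimal illustration with hypothetical exposure numbers (not Gerda's actual data):

```python
def background_index(counts, window_kev, mass_kg, livetime_yr):
    """Background index in cts/(keV * kg * yr)."""
    return counts / (window_kev * mass_kg * livetime_yr)

# Hypothetical exposure: 8 events in a 200 keV window,
# 15 kg of detectors, 1.5 yr of live time.
bi = background_index(8, 200.0, 15.0, 1.5)
print(f"{bi:.2e} cts/(keV kg yr)")
```

Dividing out the window width, mass, and time is what makes background levels comparable across detectors and data-taking periods.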

  8. The spinorial geometry of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Gillard, J; Gran, U; Papadopoulos, G

    2005-01-01

    We propose a new method to solve the Killing spinor equations of 11-dimensional supergravity based on a description of spinors in terms of forms and on the Spin(1, 10) gauge symmetry of the supercovariant derivative. We give the canonical form of Killing spinors for backgrounds preserving two supersymmetries, N = 2, provided that one of the spinors represents the orbit of Spin(1, 10) with stability subgroup SU(5). We directly solve the Killing spinor equations of N = 1 and some N = 2, N = 3 and N = 4 backgrounds. In the N = 2 case, we investigate backgrounds with SU(5) and SU(4) invariant Killing spinors and compute the associated spacetime forms. We find that N = 2 backgrounds with SU(5) invariant Killing spinors admit a timelike Killing vector and that the space transverse to the orbits of this vector field is a Hermitian manifold with an SU(5)-structure. Furthermore, N = 2 backgrounds with SU(4) invariant Killing spinors admit two Killing vectors, one timelike and one spacelike. The space transverse to the orbits of the former is an almost Hermitian manifold with an SU(4)-structure. The spacelike Killing vector field leaves the almost complex structure invariant. We explore the canonical form of Killing spinors for backgrounds preserving more than two supersymmetries, N > 2. We investigate a class of N = 3 and N = 4 backgrounds with SU(4) invariant spinors. We find that in both cases the space transverse to a timelike vector field is a Hermitian manifold equipped with an SU(4)-structure and admits two holomorphic Killing vector fields. We also present an application to M-theory Calabi-Yau compactifications with fluxes to one dimension

  9. The spinorial geometry of supersymmetric backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Gillard, J; Gran, U; Papadopoulos, G [Department of Mathematics, King' s College London, Strand, London WC2R 2LS (United Kingdom)

    2005-03-21

    We propose a new method to solve the Killing spinor equations of 11-dimensional supergravity based on a description of spinors in terms of forms and on the Spin(1, 10) gauge symmetry of the supercovariant derivative. We give the canonical form of Killing spinors for backgrounds preserving two supersymmetries, N = 2, provided that one of the spinors represents the orbit of Spin(1, 10) with stability subgroup SU(5). We directly solve the Killing spinor equations of N = 1 and some N = 2, N = 3 and N = 4 backgrounds. In the N = 2 case, we investigate backgrounds with SU(5) and SU(4) invariant Killing spinors and compute the associated spacetime forms. We find that N = 2 backgrounds with SU(5) invariant Killing spinors admit a timelike Killing vector and that the space transverse to the orbits of this vector field is a Hermitian manifold with an SU(5)-structure. Furthermore, N = 2 backgrounds with SU(4) invariant Killing spinors admit two Killing vectors, one timelike and one spacelike. The space transverse to the orbits of the former is an almost Hermitian manifold with an SU(4)-structure. The spacelike Killing vector field leaves the almost complex structure invariant. We explore the canonical form of Killing spinors for backgrounds preserving more than two supersymmetries, N > 2. We investigate a class of N = 3 and N = 4 backgrounds with SU(4) invariant spinors. We find that in both cases the space transverse to a timelike vector field is a Hermitian manifold equipped with an SU(4)-structure and admits two holomorphic Killing vector fields. We also present an application to M-theory Calabi-Yau compactifications with fluxes to one dimension.

  10. ADAPTIVE BACKGROUND DENGAN METODE GAUSSIAN MIXTURE MODELS UNTUK REAL-TIME TRACKING

    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih

    2008-01-01

    Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. Motion tracking requires a method to separate the background from the tracked object. Developing such an application is not hard when tracking is performed against a static background, but it is difficult when the tracked object is at a place with a non-static background, because the changing parts of the background can be mistaken for the tracking area. To handle this problem, an application can be made that separates the background and adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data based on pixel color values. After the clusters are formed, the dominant distributions are chosen as background distributions. The application was implemented in Microsoft Visual C++ 6.0. The results of this research show that the GMM algorithm can model an adaptive background satisfactorily, as demonstrated by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive background modeling.
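The per-pixel mixture update behind this kind of adaptive background can be sketched for a single pixel. This is a simplified Stauffer-Grimson-style update with assumed parameter values, not the paper's implementation:

```python
class PixelGMM:
    """One pixel's mixture of Gaussians (Stauffer-Grimson flavour)."""

    def __init__(self, k=3, alpha=0.05, var0=400.0, match_sigmas=2.5):
        self.alpha, self.match = alpha, match_sigmas
        # [weight, mean, variance] per mode, spread over the gray range
        self.modes = [[1.0 / k, 85.0 * (i + 1), var0] for i in range(k)]

    def update(self, x):
        """Feed one pixel value; return True if x looks like background."""
        self.modes.sort(key=lambda m: -m[0])            # dominant mode first
        is_bg = abs(x - self.modes[0][1]) <= self.match * self.modes[0][2] ** 0.5
        for m in self.modes:                            # find a matching mode
            if abs(x - m[1]) <= self.match * m[2] ** 0.5:
                m[0] += self.alpha * (1 - m[0])         # grow its weight
                m[1] += self.alpha * (x - m[1])         # pull mean toward x
                m[2] += self.alpha * ((x - m[1]) ** 2 - m[2])
                break
        else:                                           # no match: new mode
            self.modes[-1] = [self.alpha, float(x), 400.0]
        total = sum(m[0] for m in self.modes)
        for m in self.modes:                            # renormalise weights
            m[0] /= total
        return is_bg

pix = PixelGMM()
for _ in range(200):                 # adapt to a stable background value
    pix.update(100)
bg, fg = pix.update(100), pix.update(250)
print(bg, fg)
```

After adapting, the dominant mode tracks the stable background value, so the familiar value is classified as background and a sudden bright object is flagged as foreground.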

  11. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these are widely used. However, existing machine learning frameworks are limited by the machine learning algorithms themselves, for example by the sensitivity to parameter choices, the interference of noise, and the high barrier to use. This paper introduces the research background of machine learning frameworks and, building on the random forest algorithm commonly used for classification in machine learning, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.
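The core random-forest idea the paper builds on, bootstrap aggregation of randomized weak trees with majority voting, can be sketched with depth-1 "stumps" on 1-D data. This is an illustrative toy, not the proposed ARF algorithm:

```python
import random
import statistics

def train_stump(xs, ys):
    """Best single-threshold classifier (a depth-1 'tree') on 1-D data."""
    best = (0, None)  # (accuracy, (threshold, left_label, right_label))
    for t in xs:
        for left, right in ((0, 1), (1, 0)):
            preds = [left if x <= t else right for x in xs]
            acc = sum(p == y for p, y in zip(preds, ys))
            if acc > best[0]:
                best = (acc, (t, left, right))
    return best[1]

def forest(xs, ys, n_trees=25, seed=1):
    """Bag stumps trained on bootstrap resamples (random-forest flavour)."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]   # bootstrap sample
        stumps.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return stumps

def predict(stumps, x):
    """Majority vote over the ensemble."""
    votes = [l if x <= t else r for t, l, r in stumps]
    return statistics.mode(votes)

xs = [1.0, 1.5, 2.0, 2.5, 8.0, 8.5, 9.0, 9.5]   # two well-separated groups
ys = [0, 0, 0, 0, 1, 1, 1, 1]
model = forest(xs, ys)
p_low, p_high = predict(model, 1.2), predict(model, 9.2)
print(p_low, p_high)
```

Averaging many weak learners trained on resampled data is what gives the forest its robustness to noise, one of the limitations the paper's ARF variant targets.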

  12. A new algorithmic approach for fingers detection and identification

    Science.gov (United States)

    Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad

    2013-03-01

    Gesture recognition is concerned with interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important constraints, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of a human hand. The proposed algorithm does not depend on prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The algorithm takes a new approach to finger identification in a real-time environment, keeping memory and time requirements as low as possible.
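The two preprocessing ingredients named above, thresholding and connected component labeling, can be sketched as follows (a fixed rather than dynamic threshold for brevity, and a hypothetical toy image; not the paper's implementation):

```python
from collections import deque

def threshold(img, t):
    """Binarise a grayscale image (dynamic threshold selection omitted)."""
    return [[1 if v > t else 0 for v in row] for row in img]

def label_components(binary):
    """4-connected component labelling via BFS flood fill."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not labels[r][c]:
                n += 1                      # start a new component
                q = deque([(r, c)])
                labels[r][c] = n
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return labels, n

# Two separated bright blobs (think: two fingers) on a dark background.
img = [[0, 200, 0, 0, 210],
       [0, 190, 0, 0, 220],
       [0,   0, 0, 0,   0]]
labels, n = label_components(threshold(img, 128))
print(n)
```

Each labeled component can then be measured (size, position) to decide whether it is a finger or noise.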

  13. An AK-LDMeans algorithm based on image clustering

    Science.gov (United States)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for handling unlabeled data in value mining; its ultimate goal is to label unclassified data quickly and correctly. We use road maps from current image processing work as the experimental background. In this paper, we propose an AK-LDMeans algorithm that locks the value of K automatically by designing the K-cost fold line, and then uses a long-distance, high-density method to select the clustering centers, replacing the traditional selection of initial clustering centers and further improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared against current clustering algorithms. The algorithm can provide a useful reference in the fields of image processing, machine vision, and data mining.
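The "K-cost fold line" used to lock K resembles the classic elbow criterion. Assuming that reading, a minimal sketch that picks the K at the sharpest bend of a cost curve (the cost values below are hypothetical):

```python
def elbow_k(costs):
    """Given within-cluster costs for K = 1..len(costs), pick the K at the
    sharpest bend (largest discrete second difference) of the cost curve."""
    best_k, best_bend = None, float("-inf")
    for i in range(1, len(costs) - 1):
        bend = (costs[i - 1] - costs[i]) - (costs[i] - costs[i + 1])
        if bend > best_bend:
            best_k, best_bend = i + 1, bend
    return best_k

# Hypothetical K-means costs for K = 1..6: large drops until K = 3, then flat.
costs = [100.0, 55.0, 20.0, 18.0, 17.0, 16.5]
k = elbow_k(costs)
print(k)
```

Automating this choice removes the need to hand-pick K before running K-Means, which is the bottleneck the AK-LDMeans approach addresses.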

  14. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Monitoring the behavior and activities of people in video surveillance has gained more applications in computer vision. This paper proposes a new approach to model the human body in 2D for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow, and knee points, are extracted. This research work represents the body model in three different ways: a stick figure model, a patch model, and a rectangle body model. The activities of humans are analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
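The frame-differencing step used here for background subtraction can be sketched directly (toy 2x3 frames; the threshold value is an assumption):

```python
def frame_difference(prev, curr, thresh=25):
    """Foreground mask from the absolute per-pixel difference of two frames."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],      # a bright object entered the middle column
        [10, 205, 10]]
mask = frame_difference(prev, curr)
print(mask)
```

The resulting binary mask is what a thinning (skeletonization) pass would then reduce to a one-pixel-wide skeleton for feature-point extraction.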

  15. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  16. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  17. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  18. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  19. Optimal Background Attenuation for Fielded Spectroscopic Detection Systems

    International Nuclear Information System (INIS)

    Robinson, Sean M.; Ashbaker, Eric D.; Schweppe, John E.; Siciliano, Edward R.

    2007-01-01

    Radiation detectors are often placed in positions difficult to shield from the effects of terrestrial background gamma radiation. This is particularly true in the case of Radiation Portal Monitor (RPM) systems, as their wide viewing angle and outdoor installations make them susceptible to radiation from the surrounding area. Reducing this source of background can improve gross-count detection capabilities in the current generation of non-spectroscopic RPMs as well as source identification capabilities in the next generation of spectroscopic RPMs. To provide guidance for designing such systems, the problem of shielding a general spectroscopic-capable RPM system from terrestrial gamma radiation is considered. This analysis is carried out with template matching algorithms, to determine and isolate a set of non-threat isotopes typically present in the commerce stream. Various model detector and shielding scenarios are calculated using the Monte Carlo N-Particle (MCNP) computer code. Amounts of nominal-density shielding needed to increase the probability of detection for an ensemble of illicit sources are given. Common shielding solutions such as steel plating are evaluated based on the probability of detection for three particular illicit sources of interest, and the benefits are weighed against the incremental cost of shielding. Previous work has provided optimal shielding scenarios for RPMs based on gross-counting measurements, and those same solutions (shielding the internal detector cavity, direct shielding of the ground between the detectors, and the addition of collimators) are examined with respect to their utility in improving spectroscopic detection

  20. Evaluating ortholog prediction algorithms in a yeast model clade.

    Directory of Open Access Journals (Sweden)

    Leonidas Salichos

    BACKGROUND: Accurate identification of orthologs is crucial for evolutionary studies and for functional annotation. Several algorithms have been developed for ortholog delineation, but so far, manually curated genome-scale biological databases of orthologous genes for algorithm evaluation have been lacking. We evaluated four popular ortholog prediction algorithms (MultiParanoid; OrthoMCL; RBH: Reciprocal Best Hit; and RSD: Reciprocal Smallest Distance; the last two extended into clustering algorithms cRBH and cRSD, respectively, so that they can predict orthologs across multiple taxa) against a set of 2,723 groups of high-quality curated orthologs from 6 Saccharomycete yeasts in the Yeast Gene Order Browser. RESULTS: Examination of sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], and accuracy [(TP+TN)/(TP+TN+FP+FN)] across a broad parameter range showed that cRBH was the most accurate and specific algorithm, whereas OrthoMCL was the most sensitive. Evaluation of the algorithms across a varying number of species showed that cRBH had the highest accuracy and lowest false discovery rate [FP/(FP+TP)], followed by cRSD. Of the six species in our set, three descended from an ancestor that underwent whole genome duplication. Subsequent differential duplicate loss events in the three descendants resulted in distinct classes of gene loss patterns, including cases where the genes retained in the three descendants are paralogs, constituting 'traps' for ortholog prediction algorithms. We found that the false discovery rate of all algorithms dramatically increased in these traps.
CONCLUSIONS: These results suggest that simple algorithms, like cRBH, may be better ortholog predictors than more complex ones (e.g., OrthoMCL and MultiParanoid) for evolutionary and functional genomics studies where the objective is the accurate inference of single-copy orthologs (e.g., molecular phylogenetics), but that all algorithms fail to accurately predict orthologs when paralogy
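The four metrics defined in brackets above follow directly from a confusion-matrix tally; the counts below are hypothetical, not the paper's results:

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and FDR from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),          # TP/(TP+FN)
        "specificity": tn / (tn + fp),          # TN/(TN+FP)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "fdr": fp / (fp + tp),                  # false discovery rate
    }

# Hypothetical tally for an ortholog predictor on a curated benchmark.
m = classification_metrics(tp=900, tn=800, fp=100, fn=200)
print({k: round(v, 3) for k, v in m.items()})
```

Note that FDR is computed only over the positive calls, which is why it spikes in the paralogy "traps" even when overall accuracy looks acceptable.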

  1. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    International Nuclear Information System (INIS)

    Carter, Joshua A.; Agol, Eric

    2013-01-01

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ("smearing") as would be incurred with traditional algorithms; however, this comes at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  2. Ablation plume dynamics in a background gas

    DEFF Research Database (Denmark)

    Amoruso, Salvatore; Schou, Jørgen; Lunney, James G.

    2010-01-01

    The expansion of a plume in a background gas of pressure comparable to that used in pulsed laser deposition (PLD) has been analyzed in terms of the model of Predtechensky and Mayorov (PM). This approach gives a relatively clear and simple description of the essential hydrodynamics during the expansion. The model also leads to an insightful treatment of the stopping behavior in dimensionless units for plumes and background gases of different atomic/molecular masses. The energetics of the plume dynamics can also be treated with this model. Experimental time-of-flight data of silver ions in a neon background gas show fair agreement with predictions from the PM model. Finally, we discuss the validity of the model if the work done by the pressure of the background gas is neglected.

  3. Measurements of the cosmic background radiation

    International Nuclear Information System (INIS)

    Weiss, R.

    1980-01-01

    Measurements of the attributes of the 2.7-K microwave background radiation (CBR) are reviewed, with emphasis on the analytic phase of CBR studies. Methods for the direct measurement of the CBR spectrum are discussed. Attention is given to receivers, antennas, absolute receiver calibration, atmospheric emission and absorption, the galactic background contribution, the analysis of LF measurements, and recent HF observations of the CBR spectrum. Measurements of the large-angular-scale intensity distribution of the CBR (the most convincing evidence that the radiation is of cosmological origin) are examined, along with limits on the linear polarization of the CBR. A description is given of the NASA-sponsored Cosmic Background Explorer (COBE) satellite mission. The results of the COBE mission will be a set of sky maps showing, in the wave number range from 1 to 10,000 kaysers, the galactic background radiation due to synchrotron emission from galactic cosmic rays, to diffuse thermal emission from H II regions, and to diffuse thermal emission from interstellar and interplanetary dust, as well as a residue consisting of the CBR and whatever other cosmological background might exist

  4. BLINCK - A diagnostic algorithm for skin cancer diagnosis combining clinical features with dermatoscopy findings

    OpenAIRE

    Bourne, Peter; Rosendahl, Cliff; Keir, Jeff; Cameron, Alan

    2012-01-01

    Background: Deciding whether a skin lesion requires biopsy to exclude skin cancer is often challenging for primary care clinicians in Australia. There are several published algorithms designed to assist with the diagnosis of skin cancer but apart from the clinical ABCD rule, these algorithms only evaluate the dermatoscopic features of a lesion. Objectives: The BLINCK algorithm explores the effect of combining clinical history and examination with fundamental dermatoscopic assessment in primar...

  5. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
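A minimal version of such an EM step can be sketched for the simplest case: a Poisson sample right-censored at a known saturation level c, using the identity E[X | X >= c] = λ·P(X >= c-1)/P(X >= c). This is an illustrative reduction; the paper's signal model additionally includes a known pulse shape and background counts:

```python
import math

def pois_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    if k <= 0:
        return 1.0
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

def em_censored_poisson(ys, c, iters=100):
    """EM for the Poisson mean when observations are right-censored at c
    (a recorded value of c means the true count was >= c)."""
    lam = sum(ys) / len(ys)                       # init: censored sample mean
    for _ in range(iters):
        completed = []
        for y in ys:
            if y < c:                             # fully observed count
                completed.append(y)
            else:                                 # E-step for a censored count:
                # E[X | X >= c] = lam * P(X >= c-1) / P(X >= c)
                completed.append(lam * pois_sf(c - 1, lam) / pois_sf(c, lam))
        lam = sum(completed) / len(completed)     # M-step: mean of completions
    return lam

# Hypothetical photon counts from a detector that saturates (clips) at 5.
ys = [2, 3, 1, 5, 5, 4, 2, 5]
lam_hat = em_censored_poisson(ys, 5)
print(round(lam_hat, 3))
```

Because the E-step replaces each clipped value with its conditional expectation above the saturation level, the EM estimate sits above the naive mean of the censored data, correcting the bias a censoring-unaware estimator would incur.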

  6. Kepler Planet Detection Metrics: Automatic Detection of Background Objects Using the Centroid Robovetter

    Science.gov (United States)

    Mullally, Fergal

    2017-01-01

    We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find that our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.

  7. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, it requires more accessible explanations for students from different backgrounds. By supposing sample sequences and using a simple storage system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
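The algorithm itself fits in a short dynamic-programming routine, which may help with the kind of step-by-step explanation the paper calls for (standard textbook scoring with match +1, mismatch -1, gap -1; traceback omitted for brevity):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic dynamic-programming table."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):                 # aligning a prefix against gaps
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                      # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,     # gap in b
                              score[i][j - 1] + gap)     # gap in a
    return score[-1][-1]

s_same = needleman_wunsch("GATTACA", "GATTACA")
s_diff = needleman_wunsch("GATTACA", "GCATGCU")
print(s_same, s_diff)
```

Each cell holds the best score over all alignments of the corresponding prefixes, which is exactly the pruning of the exhaustive search that the teaching approach builds toward.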

  8. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  9. Adaptation of Rejection Algorithms for a Radar Clutter

    Directory of Open Access Journals (Sweden)

    D. Popov

    2017-09-01

    In this paper, algorithms for adaptive rejection of radar clutter are synthesized for the case of a priori unknown spectral-correlation characteristics under wobbulation of the repetition period of the radar signal. The synthesis of algorithms for the non-recursive adaptive rejection filter (ARF) of a given order is reduced to determination of the vector of weighting coefficients that realizes the best effectiveness index for extracting the signals of moving targets against the background of the received clutter. As the effectiveness criterion, we consider the improvement coefficient for the signal-to-clutter ratio (SCR), averaged over the Doppler signal phase shift. On the basis of the extremal properties of the characteristic numbers (eigenvalues) of matrices, the optimal vector (in the sense of the maximum of this criterion) is defined as the eigenvector of the clutter correlation matrix corresponding to its minimal eigenvalue. The general form of the vector of optimal ARF weighting coefficients is determined, and specific adaptive algorithms depending on the ARF order are obtained, which in special cases reduce to known algorithms, confirming their validity. A comparative analysis of the synthesized and known algorithms is performed. Significant benefits in clutter-rejection effectiveness are established for the proposed processing algorithms compared with the known ones.
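The optimal-weight prescription above, the eigenvector of the clutter correlation matrix for its minimal eigenvalue, can be illustrated numerically. The sketch below uses shifted power iteration on a toy 2x2 correlation matrix; it is not the authors' implementation:

```python
def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def min_eigenvector(R, iters=200):
    """Eigenvector of symmetric R for its smallest eigenvalue, via power
    iteration on (s*I - R) with s an upper bound on the spectrum."""
    n = len(R)
    s = max(sum(abs(x) for x in row) for row in R)   # Gershgorin bound
    B = [[(s if i == j else 0.0) - R[i][j] for j in range(n)] for i in range(n)]
    v = [1.0] + [0.0] * (n - 1)                      # generic start vector
    for _ in range(iters):
        w = matvec(B, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy clutter correlation matrix with strongly correlated samples;
# the minimal-eigenvalue direction is the (1, -1) "difference" filter,
# which is exactly what cancels the correlated clutter component.
R = [[1.0, 0.9],
     [0.9, 1.0]]
w = min_eigenvector(R)
print([round(x, 3) for x in w])
```

Intuitively, the minimal-eigenvalue direction is the linear combination of samples along which the clutter has the least power, so weighting the filter taps by it suppresses the clutter most strongly.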

  10. Monte Carlo simulations of low background detectors

    International Nuclear Information System (INIS)

    Miley, H.S.; Brodzinski, R.L.; Hensley, W.K.; Reeves, J.H.

    1995-01-01

    An implementation of the Electron Gamma Shower 4 code (EGS4) has been developed to allow convenient simulation of typical gamma ray measurement systems. Coincidence gamma rays, beta spectra, and angular correlations have been added to adequately simulate a complete nuclear decay and provide corrections to experimentally determined detector efficiencies. This code has been used to strip certain low-background spectra for the purpose of extremely low-level assay. Monte Carlo calculations of this sort can be extremely successful since low background detectors are usually free of significant contributions from poorly localized radiation sources, such as cosmic muons, secondary cosmic neutrons, and radioactive construction or shielding materials. Previously, validation of this code has been obtained from a series of comparisons between measurements and blind calculations. An example of the application of this code to an exceedingly low background spectrum stripping will be presented. (author) 5 refs.; 3 figs.; 1 tab

  11. Background harmonic superfields in N=2 supergravity

    International Nuclear Information System (INIS)

    Zupnik, B.M.

    1998-01-01

    A modification of the harmonic superfield formalism in D=4, N=2 supergravity using a subsidiary condition of covariance under the background supersymmetry with a central charge (B-covariance) is considered. Conservation of analyticity together with the B-covariance leads to the appearance of linear gravitational superfields. Analytic prepotentials arise in a decomposition of the background linear superfields in terms of spinor coordinates and transform in a nonstandard way under the background supersymmetry. The linear gravitational superfields can be written via spinor derivatives of nonanalytic spinor prepotentials. The perturbative expansion of supergravity action in terms of the B-covariant superfields and the corresponding version of the differential-geometric formalism are considered. We discuss the dual harmonic representation of the linearized extended supergravity, which corresponds to the dynamical condition of Grassmann analyticity

  12. Background modeling for the GERDA experiment

    Science.gov (United States)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  13. Background modeling for the GERDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Becerici-Schmidt, N. [Max-Planck-Institut für Physik, München (Germany); Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  14. In-beam background suppression shield

    DEFF Research Database (Denmark)

    Santoro, V.; Cai, Xiao Xiao; DiJulio, D. D.

    2015-01-01

    The long (3 ms) proton pulse of the European Spallation Source (ESS) gives rise to unique and potentially high backgrounds for the instrument suite. In such a source an instrument's capabilities will be limited by its Signal to Noise (S/N) ratio. The instruments with a direct view of the moderator, which do not use a bender to help mitigate the fast neutron background, are the most challenging. For these beam lines we propose the innovative shielding of placing blocks of material directly into the guide system, which allow a minimum attenuation of the cold and thermal fluxes relative to the background suppression. This shielding configuration has been worked into a beam line model using Geant4. We study particularly the advantages of single crystal sapphire and silicon blocks.

  15. Background Model for the Majorana Demonstrator

    Science.gov (United States)

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials and analytical methods for background rejection, for example powerful pulse-shape analysis techniques profiting from p-type point-contact HPGe detector technology. The effectiveness of these methods is assessed using simulations of the different background components, whose purity levels are constrained from radioassay measurements.

  16. Background compensation for a radiation level monitor

    Science.gov (United States)

    Keefe, D.J.

    1975-12-01

    Background compensation in a device such as a hand and foot monitor is provided by digital means using a scaler. With no radiation level test initiated, the scaler is down-counted from zero according to the measured background. With a radiation level test initiated, the scaler is up-counted from the previous down-count position according to the radiation emitted from the monitored object, and an alarm is generated if, with the scaler having crossed zero in the positive-going direction, a particular number is exceeded in a specific time period after initiation of the test. If the test is initiated while the scaler is down-counting, the background count from the previous down-count, stored in a memory, is used as the initial starting point for the up-count.
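
The up/down scaler logic described in this record can be mimicked in a few lines of software. The class below is a hypothetical model for illustration only (the threshold and count values are invented), not the patented circuit.

```python
class CompensatedMonitor:
    """Digital background compensation with a single up/down scaler."""

    def __init__(self, alarm_threshold):
        self.count = 0
        self.alarm_threshold = alarm_threshold

    def idle_background(self, background_counts):
        # No test in progress: down-count from the current value
        # according to the measured background.
        self.count -= background_counts

    def run_test(self, measured_counts):
        # Test initiated: up-count from the (negative) down-count
        # position with counts from the monitored object. Alarm only
        # if the scaler crossed zero going positive and then exceeded
        # the threshold within the test period.
        self.count += measured_counts
        return self.count > self.alarm_threshold

m1 = CompensatedMonitor(alarm_threshold=10)
m1.idle_background(50)          # background stored as a deficit of 50
clean = m1.run_test(55)         # net 5 above background: no alarm

m2 = CompensatedMonitor(alarm_threshold=10)
m2.idle_background(50)
hot = m2.run_test(90)           # net 40 above background: alarm
```

The subtraction happens implicitly: the object's counts must first "pay back" the background deficit before the scaler can reach the alarm threshold.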

  17. Non-perturbative background field calculations

    International Nuclear Information System (INIS)

    Stephens, C.R.; Department of Physics, University of Utah, Salt Lake City, Utah 84112)

    1988-01-01

    New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation: perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation. copyright 1988 Academic Press, Inc

  18. Origin of the diffuse background gamma radiation

    International Nuclear Information System (INIS)

    Stecker, F.W.; Puget, J.L.

    1974-05-01

    Recent observations have now provided evidence for diffuse background gamma radiation extending to energies beyond 100 MeV. There is some evidence of isotropy and an implied cosmological origin. Significant features in the spectrum of this background radiation were observed which provide evidence for its origin in nuclear processes in the early stages of big-bang cosmology and tie these processes in with galaxy formation theory. A crucial test of the theory may lie in future observations of the background radiation in the 100 MeV to 100 GeV energy range, which may be made with large orbiting spark-chamber satellite detectors. A discussion of the theoretical interpretations of the present data, their connection with baryon-symmetric cosmology and galaxy formation theory, and the need for future observations is given. (U.S.)

  19. Cosmic microwave background distortions at high frequencies

    International Nuclear Information System (INIS)

    Peter, W.; Peratt, A.L.

    1988-01-01

    The authors analyze the deviation of the cosmic background radiation spectrum from the 2.76 ± 0.02 K blackbody curve. If the cosmic background radiation is due to absorption and re-emission of synchrotron radiation from galactic-width current filaments, higher-order synchrotron modes are less thermalized than lower-order modes, causing a distortion of the blackbody curve at higher frequencies. New observations of the microwave background spectrum at short wavelengths should provide an indication of the number of synchrotron modes thermalized in this process. The deviation of the spectrum from that of a perfect blackbody can thus be correlated with astronomical observations such as filament temperatures and electron energies. The results are discussed and compared with the theoretical predictions of other models which assume the presence of intergalactic superconducting cosmic strings

  20. Non-perturbative background field calculations

    Science.gov (United States)

    Stephens, C. R.

    1988-01-01

    New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation—perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation.

  1. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  2. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  3. One improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the plain LSB algorithm is easily detected with high accuracy by χ² and RS steganalysis. We improved the LSB algorithm by selecting the embedding locations and modifying the embedding method, combined with a sub-affine transformation and matrix coding, and a new LSB algorithm was proposed. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
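
For reference, the baseline that these steganalysis attacks target is plain sequential LSB replacement, sketched below. The location selection, sub-affine transform, and matrix coding of the improved algorithm are intentionally omitted; this is only the naive scheme the paper improves on.

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    out = pixels.copy()
    out[: len(bits)] = (out[: len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return out

def lsb_extract(pixels, n):
    """Read back the first n least significant bits."""
    return (pixels[:n] & 1).tolist()

img = np.array([120, 37, 255, 0, 64, 201], dtype=np.uint8)
msg = [1, 0, 1, 1]
stego = lsb_embed(img, msg)
```

Each embedded bit changes a pixel value by at most 1, which is visually invisible but statistically detectable, hence the need for the improvements described above.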

  4. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  5. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...

  6. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
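
The enumeration of maximal chains mentioned above can be sketched as a walk over covering relations, from the minimal elements upward. This Python version is an illustrative reconstruction, not the Algol procedures given in the paper.

```python
def maximal_chains(elements, le):
    """Enumerate all maximal chains of a finite poset.

    `le(a, b)` is the partial order relation a <= b. A maximal chain is a
    path in the covering relation from a minimal to a maximal element.
    """
    def lt(a, b):
        return le(a, b) and a != b

    # Covering relation: b covers a if a < b with nothing strictly between.
    covers = {a: [b for b in elements if lt(a, b)
                  and not any(lt(a, c) and lt(c, b) for c in elements)]
              for a in elements}
    minimal = [a for a in elements if not any(lt(b, a) for b in elements)]

    chains = []

    def walk(chain):
        last = chain[-1]
        if not covers[last]:          # reached a maximal element
            chains.append(chain)
        else:
            for nxt in covers[last]:
                walk(chain + [nxt])

    for m in minimal:
        walk([m])
    return chains
```

On the divisors of 6 ordered by divisibility, this yields the two maximal chains 1 | 2 | 6 and 1 | 3 | 6.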

  7. Background characterization for the GERDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Becerici-Schmidt, Neslihan [Max-Planck-Institut fuer Physik, Muenchen (Germany); Collaboration: GERDA-Collaboration

    2013-07-01

    The GERmanium Detector Array (Gerda) experiment at the LNGS laboratory of INFN searches for the neutrinoless double beta (0νββ) decay of 76Ge. A discovery of this decay can greatly advance our knowledge on the nature and properties of neutrinos. The current best limit on the half-life of 76Ge 0νββ decay is 1.9 × 10^25 years (90% C.L.). In order to increase the sensitivity on the half-life with respect to past experiments, the background rate in the energy region of interest (ROI) around Qββ = 2039 keV has been reduced by a factor 10. Gerda started data-taking with the full set of Phase I detectors in November 2011. Identification of the background in the first phase of the experiment is of major importance to further mitigate the background for Gerda Phase II. An analysis of the Phase I data resulted in a good understanding of the individual components in the Gerda background spectrum. The background components in the ROI have been identified to be mainly due to β- and γ-induced events originating from 214Bi (238U series), 208Tl (232Th series), 42K (progeny of 42Ar) and α-induced events coming from isotopes in the 226Ra decay chain. A background decomposition in the ROI will be presented, with a special emphasis on the contribution from α-induced events.

  8. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms in the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of routing algorithms are analyzed, clustering routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages, and applicability of each typical algorithm are discussed.

  9. Background simulation for the GENIUS project

    International Nuclear Information System (INIS)

    Ponkratenko, O.A.; Tretyak, V.I.; Zdesenko, Yu.G.

    1999-01-01

    The background simulations for the GENIUS experiment were performed with the help of the GEANT 3.21 package and the event generator DECAY4. Contributions from the cosmogenic activity produced in the Ge detectors and from their radioactive impurities, as well as from contamination of the liquid nitrogen and other materials, were calculated. External gamma and neutron backgrounds were also taken into consideration. The results of the calculations clearly show the feasibility of the GENIUS project, which can substantially promote the development of modern astroparticle physics

  10. Electromagnetic wave collapse in a radiation background

    International Nuclear Information System (INIS)

    Marklund, Mattias; Brodin, Gert; Stenflo, Lennart

    2003-01-01

    The nonlinear interaction, due to quantum electrodynamical (QED) effects, between an electromagnetic pulse and a radiation background is investigated by combining the methods of radiation hydrodynamics with the QED theory for photon-photon scattering. For the case of a single coherent electromagnetic pulse, we obtain a Zakharov-like system, where the radiation pressure of the pulse acts as a driver of acoustic waves in the photon gas. For a sufficiently intense pulse and/or background energy density, there is focusing and subsequent collapse of the pulse. The relevance of our results for various astrophysical applications is discussed

  11. Optimization of the ECT background coil

    International Nuclear Information System (INIS)

    Ballou, J.K.; Luton, J.N.

    1975-01-01

    This study was begun to optimize the Eccentric Coil Test (ECT) background coil. In the course of this work a general optimization code was obtained, tested, and applied to the ECT problem. So far this code has proven to be very satisfactory. The results obtained with this code and earlier codes have illustrated the parametric behavior of such a coil system and shown that the optimum for this type of system is broad. This study also shows that a background coil with a winding current density of less than 3000 A/cm² is not feasible for the ECT models presented in this paper

  12. Elastic lattice in an incommensurate background

    International Nuclear Information System (INIS)

    Dickman, R.; Chudnovsky, E.M.

    1995-01-01

    We study a harmonic triangular lattice, which relaxes in the presence of an incommensurate short-wavelength potential. Monte Carlo simulations reveal that the elastic lattice exhibits only short-ranged translational correlations, despite the absence of defects in either lattice. Extended orientational order, however, persists in the presence of the background. Translational correlation lengths exhibit approximate power-law dependence upon cooling rate and background strength. Our results may be relevant to Wigner crystals, atomic monolayers on crystal surfaces, and flux-line and magnetic-bubble lattices

  13. Cognitive psychology and depth psychology backgrounds

    International Nuclear Information System (INIS)

    Fritzsche, A.F.

    1986-01-01

    The sixth chapter gives an insight into the risk perception process which is highly determined by emotions, and, thus, deals with the psychological backgrounds of both the conscious cognitive and the subconscious intuitive realms of the human psyche. The chapter deals with the formation of opinion and the origination of an attitude towards an issue; cognitive-psychological patterns of thinking from the field of risk perception; the question of man's rationality; pertinent aspects of group behaviour; depth psychological backgrounds of the fear of technology; the collective subconscious; nuclear energy as a preferred object of projection for various psychological problems of modern man. (HSCH) [de

  14. Conserved quantities in background independent theories

    Energy Technology Data Exchange (ETDEWEB)

    Markopoulou, Fotini [Perimeter Institute for Theoretical Physics, 35 King Street North, Waterloo, Ontario N2J 2W9 (Canada); Department of Physics, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)

    2007-05-15

    We discuss the difficulties that background independent theories based on quantum geometry encounter in deriving general relativity as the low energy limit. We follow a geometrogenesis scenario of a phase transition from a pre-geometric theory to a geometric phase which suggests that a first step towards the low energy limit is searching for the effective collective excitations that will characterize it. Using the correspondence between the pre-geometric background independent theory and a quantum information processor, we are able to use the method of noiseless subsystems to extract such coherent collective excitations. We illustrate this in the case of locally evolving graphs.

  15. Fermionic bound states in distinct kinklike backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Bazeia, D. [Universidade Federal da Paraiba, Departamento de Fisica, Joao Pessoa, Paraiba (Brazil); Mohammadi, A. [Universidade Federal de Campina Grande, Departamento de Fisica, Caixa Postal 10071, Campina Grande, Paraiba (Brazil)

    2017-04-15

    This work deals with fermions in the background of distinct localized structures in the two-dimensional spacetime. Although the structures have a similar topological character, which is responsible for the appearance of fractionally charged excitations, we want to investigate how the geometric deformations that appear in the localized structures contribute to the change in the physical properties of the fermionic bound states. We investigate the two-kink and compact kinklike backgrounds, and we consider two distinct boson-fermion interactions, one motivated by supersymmetry and the other described by the standard Yukawa coupling. (orig.)

  16. A low background pulsed neutron polyenergetic beam

    International Nuclear Information System (INIS)

    Adib, M.; Abdelkawy, A.; Habib, N.; abuelela, M.; Wahba, M.; kilany, M.; Kalebebin, S.M.

    1992-01-01

    A low-background pulsed polyenergetic thermal neutron beam at the ET-RR-1 reactor is produced by a rotor and a rotating collimator suspended in magnetic fields. Each of them is mounted on its own mobile platform, with centres 66 cm apart, rotating synchronously at speeds up to 16000 rpm. It was found that the neutron burst produced by the rotor passes through the collimator with almost 100% transmission when the rotation phase between them is 28.8°. Moreover, the background level achieved at the detector position is low, constant, and free from peaks due to gamma rays and fast neutrons accompanying the reactor thermal beam. 3 figs.

  17. Noise correlations in cosmic microwave background experiments

    Science.gov (United States)

    Dodelson, Scott; Kosowsky, Arthur; Myers, Steven T.

    1995-01-01

    Many analyses of microwave background experiments neglect the correlation of noise in different frequency or polarization channels. We show that these correlations, should they be present, can lead to severe misinterpretation of an experiment. In particular, correlated noise arising from either the electronics or the atmosphere may mimic a cosmic signal. We quantify how the likelihood function for a given experiment varies with noise correlation, using both simple analytic models and actual data. For a typical microwave background anisotropy experiment, noise correlations at the level of 1% of the overall noise can seriously reduce the significance of a given detection.

  18. Cosmic microwave background probes models of inflation

    Science.gov (United States)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large angular scales, then the value of ΔT/T predicted on 1° scales is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  19. Cosmic microwave background at its twentieth anniversary

    International Nuclear Information System (INIS)

    Partridge, R.B.

    1986-01-01

    The role of cosmic microwave background radiation in cosmology is examined. The thermal spectrum, the large entropy in the universe, the large-scale isotropy of the radiation, and the small-scale isotropy or homogeneity of the radiation are analyzed in order to describe the properties of the universe. It is observed that the microwave background spectrum is thermal over a wide range, that there is a significant detectable dipole anisotropy in the radiation but no quadrupole anisotropy, and that there is a high degree of radiation isotropy on angular scales between 1 and 5 degrees. 62 references

  20. Natural background radiation and oncologic disease incidence

    International Nuclear Information System (INIS)

    Burenin, P.I.

    1982-01-01

    Cause-and-effect relationships between oncologic disease incidence in human populations and environmental factors are examined using the investigations of Soviet and foreign authors. Data concerning the US white population are adduced. The role and contribution of natural background radiation to oncologic disease prevalence have been determined with the help of system information analysis. The probable damage from oncologic disease is shown to decrease as the background radiation level diminishes. The linear nature of the dose-response relationship has been established. The necessity of including the life history of the studied population, along with environmental factors, in epidemiological studies under conditions of multiple carcinogenesis causes is emphasized

  1. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that the quantum capacity based on the semi-computability concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity of a given semi-computable channel is limit computable.

  2. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
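
The natural baseline that any bit-level DNA compressor must beat is fixed 2-bit-per-base packing, sketched below. The repeat handling that lets DNABIT Compress get below 2 bits/base is omitted; the encoding table and function names here are invented for the illustration.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string at exactly 2 bits per base (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))      # left-pad a final short group
        out.append(b)
    return bytes(out), len(seq)

def unpack(data, n):
    """Recover the first n bases from packed bytes."""
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(b >> shift) & 0b11])
    return "".join(seq[:n])
```

Round-tripping any sequence through pack/unpack is lossless, and the ratio is exactly 2 bits/base, which makes the 1.58 bits/base figure above a meaningful gain.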

  3. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
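
The distance-to-average-point measure that drives the DGEA's phase switching is easy to state in code. The threshold values below are placeholders of the kind such an algorithm would tune, not the paper's published settings.

```python
import numpy as np

def distance_to_average_point(pop, diag_len):
    """Mean distance of individuals (rows of pop) to the population
    centroid, normalised by the diagonal length of the search space."""
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1)) / diag_len

def next_phase(pop, diag_len, d_low=0.005, d_high=0.25, current="exploit"):
    """Switch to exploration (mutation) when diversity collapses and back
    to exploitation (selection + recombination) once it recovers."""
    d = distance_to_average_point(pop, diag_len)
    if d < d_low:
        return "explore"
    if d > d_high:
        return "exploit"
    return current
```

A population collapsed onto one point triggers exploration, while a population spread across the search space stays in exploitation, which is exactly the alternation the abstract describes.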

  4. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain, because the landscapes/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Selecting an appropriate segmentation algorithm is thus a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.
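
As a minimal illustration of a ‘discrepancy method’, a candidate segmentation can be scored against a reference labelling point by point. The best-overlap matching below is a simplified stand-in for the richer metrics (homogeneity, heterogeneity, fragmentation) discussed in the paper:

```python
from collections import Counter

def discrepancy_score(reference, candidate):
    """Fraction of points whose candidate segment's best-matching
    reference segment differs from the point's actual reference label.
    0.0 = perfect agreement up to a relabelling of segment ids.
    A deliberately simplified discrepancy metric for illustration."""
    # map each candidate segment id to the reference label it overlaps most
    overlap = {}
    for ref, cand in zip(reference, candidate):
        overlap.setdefault(cand, Counter())[ref] += 1
    best = {cand: counts.most_common(1)[0][0]
            for cand, counts in overlap.items()}
    errors = sum(1 for ref, cand in zip(reference, candidate)
                 if best[cand] != ref)
    return errors / len(reference)

ref  = [0, 0, 0, 1, 1, 1]   # two reference segments
good = [5, 5, 5, 7, 7, 7]   # same partition, different segment ids
bad  = [5, 5, 7, 7, 7, 7]   # one point leaked across the boundary
print(discrepancy_score(ref, good))  # 0.0
print(discrepancy_score(ref, bad))   # ~0.167
```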

  5. Does Social Background Influence Political Science Grades?

    Science.gov (United States)

    Tiruneh, Gizachew

    2013-01-01

    This paper tests a hypothesized linear relationship between social background and final grades in several political science courses that I taught at the University of Central Arkansas. I employ a cross-sectional research design and ordinary least square (OLS) estimators to test the foregoing hypothesis. Relying on a sample of up to 204…

  6. Physiologic correlates to background noise acceptance

    Science.gov (United States)

    Tampas, Joanna; Harkrider, Ashley; Nabelek, Anna

    2004-05-01

    Acceptance of background noise can be evaluated by having listeners indicate the highest background noise level (BNL) they are willing to accept while following the words of a story presented at their most comfortable listening level (MCL). The difference between the selected MCL and BNL is termed the acceptable noise level (ANL). One of the consistent findings in previous studies of ANL is large intersubject variability in acceptance of background noise. This variability is not related to age, gender, hearing sensitivity, personality, type of background noise, or speech perception in noise performance. The purpose of the current experiment was to determine if individual differences in physiological activity measured from the peripheral and central auditory systems of young female adults with normal hearing can account for the variability observed in ANL. Correlations between ANL and various physiological responses, including spontaneous, click-evoked, and distortion-product otoacoustic emissions, auditory brainstem and middle latency evoked potentials, and electroencephalography will be presented. Results may increase understanding of the regions of the auditory system that contribute to individual noise acceptance.
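
The ANL measure described above is a simple difference of two listener-selected levels; a sketch with invented listener data (chosen only to mimic the large intersubject variability the abstract mentions):

```python
from statistics import mean, stdev

def acceptable_noise_level(mcl_db, bnl_db):
    """ANL = most comfortable listening level (MCL) minus the highest
    background noise level (BNL) the listener will accept, both in dB."""
    return mcl_db - bnl_db

# Hypothetical (MCL, BNL) pairs in dB, one per listener.
listeners = [(50, 45), (52, 38), (48, 46), (55, 40), (50, 49)]
anls = [acceptable_noise_level(mcl, bnl) for mcl, bnl in listeners]
print(anls)                                       # [5, 14, 2, 15, 1]
print(f"mean ANL {mean(anls):.1f} dB, sd {stdev(anls):.1f} dB")
```

A small ANL means the listener tolerates noise close to their comfortable listening level; the spread across listeners is the variability the physiological measures aim to explain.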

  7. Egypt: Background and U.S. Relations

    Science.gov (United States)

    2013-06-27

    right to govern; the more the Brotherhood charges ahead, the more it confirms the others’ belief in its monopolistic designs over power. Even if...appropriate market-reform and economic growth activities.” ...according to the State

  8. Natural background approach to setting radiation standards

    International Nuclear Information System (INIS)

    Adler, H.I.; Federow, H.; Weinberg, A.M.

    1979-01-01

    The suggestion has often been made that an additional radiation exposure imposed on humanity as a result of some important activity, such as electricity generation, would be acceptable if the exposure were small compared to the natural background. In order to make this concept quantitative and objective, we propose that "small compared with the natural background" be interpreted as the standard deviation (weighted by the exposed population) of the natural background. This use of the variation in natural background radiation is less arbitrary and requires fewer unfounded assumptions than some current approaches to standard-setting. The standard deviation is an easily calculated statistic that is small compared with the mean value for natural exposures of populations. It is an objectively determined quantity and its significance is generally understood. Its determination does not omit any of the pertinent data. When this method is applied to the population of the United States, it suggests that a dose of 20 mrem/year would be an acceptable standard. This is comparable to the 25 mrem/year suggested as the maximum allowable exposure to an individual from the complete uranium fuel cycle.
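
The population-weighted standard deviation the authors propose can be computed as below. The regional dose figures and populations are invented for illustration, not the actual U.S. survey data behind the 20 mrem/year figure:

```python
import math

def weighted_mean_sd(doses, populations):
    """Population-weighted mean and standard deviation of natural
    background dose (mrem/year)."""
    total = sum(populations)
    mean = sum(d * p for d, p in zip(doses, populations)) / total
    var = sum(p * (d - mean) ** 2
              for d, p in zip(doses, populations)) / total
    return mean, math.sqrt(var)

# Hypothetical regional doses (mrem/year) and populations (millions)
doses = [80, 100, 130, 90]
populations = [120, 60, 20, 40]
m, sd = weighted_mean_sd(doses, populations)
print(f"weighted mean {m:.1f} mrem/yr, sd {sd:.1f} mrem/yr")
# weighted mean 90.8 mrem/yr, sd 14.4 mrem/yr
```

Under the paper's proposal, the resulting standard deviation (here 14.4 mrem/yr for the toy data) would itself serve as the "acceptably small" additional exposure.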

  9. 20 CFR 410.700 - Background.

    Science.gov (United States)

    2010-04-01

    ... LUNG BENEFITS (1969- ) Rules for the Review of Denied and Pending Claims Under the Black Lung Benefits Reform Act (BLBRA) of 1977 § 410.700 Background. (a) The Black Lung Benefits Reform Act of 1977 broadens... establish entitlement to black lung benefits. Section 435 of the Black Lung Benefits Reform Act of 1977...

  10. The projected background for the CUORE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Alduino, C.; Avignone, F.T.; Chott, N.; Creswick, R.J.; Rosenfeld, C.; Wilson, J. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); Alfonso, K.; Hickerson, K.P.; Huang, H.Z.; Sakai, M.; Schmidt, J.; Trentalange, S.; Zhu, B.X. [University of California, Department of Physics and Astronomy, Los Angeles, CA (United States); Artusa, D.R.; Rusconi, C. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Azzolini, O.; Camacho, A.; Keppel, G.; Palmieri, V.; Pira, C. [INFN-Laboratori Nazionali di Legnaro, Padua (Italy); Banks, T.I.; Drobizhev, A.; Freedman, S.J.; Hennings-Yeomans, R.; Kolomensky, Yu.G.; Wagaarachchi, S.L. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Bari, G.; Deninno, M.M. [INFN-Sezione di Bologna, Bologna (Italy); Beeman, J.W. [Lawrence Berkeley National Laboratory, Materials Science Division, Berkeley, CA (United States); Bellini, F.; Cosmelli, C.; Ferroni, F.; Piperno, G. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN-Sezione di Roma, Rome (Italy); Benato, G.; Singh, V. [University of California, Department of Physics, Berkeley, CA (United States); Bersani, A.; Caminata, A. [INFN-Sezione di Genova, Genoa (Italy); Biassoni, M.; Brofferio, C.; Capelli, S.; Carniti, P.; Cassina, L.; Chiesa, D.; Clemenza, M.; Faverzani, M.; Fiorini, E.; Gironi, L.; Gotti, C.; Maino, M.; Nastasi, M.; Nucciotti, A.; Pavan, M.; Pozzi, S.; Sisti, M.; Terranova, F.; Zanotti, L. [Universita di Milano-Bicocca, Dipartimento di Fisica, Milan (Italy); INFN-Sezione di Milano Bicocca, Milan (Italy); Branca, A.; Taffarello, L. [INFN-Sezione di Padova, Padua (Italy); Bucci, C.; Cappelli, L.; D' Addabbo, A.; Gorla, P.; Pattavina, L.; Pirro, S.; Laubenstein, M. 
[INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Canonica, L. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Massachusetts Institute of Technology, Cambridge, MA (United States); Cao, X.G.; Fang, D.Q.; Ma, Y.G.; Wang, H.W.; Zhang, G.Q. [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai (China); Carbone, L.; Cremonesi, O.; Ferri, E.; Giachero, A.; Pessina, G.; Previtali, E. [INFN-Sezione di Milano Bicocca, Milan (Italy); Cardani, L.; Casali, N.; Dafinei, I.; Morganti, S.; Mosteiro, P.J.; Pettinacci, V.; Tomei, C.; Vignati, M. [INFN-Sezione di Roma, Rome (Italy); Copello, S.; Di Domizio, S.; Fernandes, G.; Marini, L.; Pallavicini, M. [INFN-Sezione di Genova, Genoa (Italy); Universita di Genova, Dipartimento di Fisica, Genoa (Italy); Cushman, J.S.; Davis, C.J.; Heeger, K.M.; Lim, K.E.; Maruyama, R.H. [Yale University, Department of Physics, New Haven, CT (United States); Dell' Oro, S. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); INFN-Gran Sasso Science Institute, L' Aquila (Italy); Di Vacri, M.L.; Santone, D. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Universita dell' Aquila, Dipartimento di Scienze Fisiche e Chimiche, L' Aquila (Italy); Franceschi, M.A.; Ligi, C.; Napolitano, T. [INFN-Laboratori Nazionali di Frascati, Rome (Italy); Fujikawa, B.K.; Mei, Y.; Schmidt, B.; Smith, A.R.; Welliver, B. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Giuliani, A.; Novati, V.; Tenconi, M. [Universit Paris-Saclay, CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Orsay (France); Gladstone, L.; Leder, A.; Ouellet, J.L.; Winslow, L.A. [Massachusetts Institute of Technology, Cambridge, MA (United States); Gutierrez, T.D. [California Polytechnic State University, Physics Department, San Luis Obispo, CA (United States); Haller, E.E. 
[Lawrence Berkeley National Laboratory, Materials Science Division, Berkeley, CA (United States); University of California, Department of Materials Science and Engineering, Berkeley, CA (United States); Han, K. [Shanghai Jiao Tong University, Department of Physics and Astronomy, Shanghai (China); Hansen, E. [University of California, Department of Physics and Astronomy, Los Angeles, CA (United States); Massachusetts Institute of Technology, Cambridge, MA (United States); Kadel, R. [Lawrence Berkeley National Laboratory, Physics Division, Berkeley, CA (United States); Martinez, M. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN-Sezione di Roma, Rome (Italy); Universidad de Zaragoza, Laboratorio de Fisica Nuclear y Astroparticulas, Zaragoza (Spain); Moggi, N. [INFN-Sezione di Bologna, Bologna (Italy); Alma Mater Studiorum-Universita di Bologna, Dipartimento di Scienze per la Qualita della Vita, Bologna (Italy); Nones, C. [CEA/Saclay, Service de Physique des Particules, Gif-sur-Yvette (France); Norman, E.B.; Wang, B.S. [Lawrence Livermore National Laboratory, Livermore, CA (United States); University of California, Department of Nuclear Engineering, Berkeley, CA (United States); O' Donnell, T. [Virginia Polytechnic Institute and State University, Center for Neutrino Physics, Blacksburg, VA (United States); Pagliarone, C.E. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Universita degli Studi di Cassino e del Lazio Meridionale, Dipartimento di Ingegneria Civile e Meccanica, Cassino (Italy); Sangiorgio, S.; Scielzo, N.D. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Wise, T. [Yale University, Department of Physics, New Haven, CT (United States); University of Wisconsin, Department of Physics, Madison, WI (United States); Woodcraft, A. [University of Edinburgh, SUPA, Institute for Astronomy, Edinburgh (United Kingdom); Zimmermann, S. 
[Lawrence Berkeley National Laboratory, Engineering Division, Berkeley, CA (United States); Zucchelli, S. [INFN-Sezione di Bologna, Bologna (Italy); Alma Mater Studiorum-Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (Italy)

    2017-08-15

    The Cryogenic Underground Observatory for Rare Events (CUORE) is designed to search for neutrinoless double beta decay of {sup 130}Te with an array of 988 TeO{sub 2} bolometers operating at temperatures around 10 mK. The experiment is currently being commissioned in Hall A of Laboratori Nazionali del Gran Sasso, Italy. The goal of CUORE is to reach a 90% C.L. exclusion sensitivity on the {sup 130}Te decay half-life of 9 x 10{sup 25} years after 5 years of data taking. The main issue to be addressed to accomplish this aim is the rate of background events in the region of interest, which must not be higher than 10{sup -2} counts/keV/kg/year. We developed a detailed Monte Carlo simulation, based on results from a campaign of material screening, radioassays, and bolometric measurements, to evaluate the expected background. This was used over the years to guide the construction strategies of the experiment and we use it here to project a background model for CUORE. In this paper we report the results of our study and our expectations for the background rate in the energy region where the peak signature of neutrinoless double beta decay of {sup 130}Te is expected. (orig.)
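
To see what the 10{sup -2} counts/keV/kg/year target implies in absolute terms, one can fold it with the detector mass and an analysis window. The 742 kg of TeO{sub 2} is the nominal CUORE detector mass (988 crystals of roughly 750 g each); the 100 keV window and 5-year live time are illustrative assumptions, not the collaboration's analysis choices:

```python
def expected_background_counts(rate, mass_kg, window_kev, years):
    """Fold a background index (counts/keV/kg/yr) with the exposure."""
    return rate * mass_kg * window_kev * years

rate = 1e-2       # counts/keV/kg/yr: the CUORE target in the ROI
mass = 742.0      # kg of TeO2 (nominal detector mass)
window = 100.0    # keV analysis window around the Q-value (assumed)
livetime = 5.0    # years of data taking

counts = expected_background_counts(rate, mass, window, livetime)
print(f"~{counts:.0f} expected background counts in the window")  # ~3710
```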

  11. Bloemfontein's Greek community: historical background, emigration ...

    African Journals Online (AJOL)

    Bloemfontein's Greek community: historical background, emigration and settlement, ca 1885 - ca 1985. ... South African Journal of Cultural History ... In this study a review is provided of the reasons why Greeks settled in Bloemfontein since about 1885, where these Greek immigrants came from, and how they travelled to ...

  12. Racial background and possible relationships between physical ...

    African Journals Online (AJOL)

    The aim of this research was to investigate possible relationships between physical activity and physical fitness of girls between the ages of 13 and 15 years and the role of different racial backgrounds in this relationship. A cross-sectional research design was used to obtain information from 290 girls between the ages of 13 ...

  13. Controllable forms of natural background radiation

    International Nuclear Information System (INIS)

    1988-03-01

    RENA is a research programme into the controllable forms of natural background radiation, i.e. the activity originating from naturally occurring radionuclides enhanced by human intervention. The RENA programme places emphasis on policy aspects of an environmental-hygienic, economic and governmental character. (H.W.). 15 refs.; 2 tabs

  14. 44 CFR 334.3 - Background.

    Science.gov (United States)

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Background. 334.3 Section 334.3 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND... take into account the need to mobilize the Nation's resources in response to a wide range of crisis or...

  15. ttH multilepton: background estimation

    CERN Document Server

    Angelidakis, Stylianos; The ATLAS collaboration

    2018-01-01

    The slides present the background encountered in the ttH->Multilepton search and describe the data-driven techniques used for the determination of the dominant non-prompt-lepton contamination as well as the contribution of electron charge mis-identification.

  16. Climate change: Scientific background and process

    OpenAIRE

    Alfsen, Knut H.; Fuglestvedt, Jan S.; Seip, Hans Martin; Skodvin, Tora

    2000-01-01

    The paper gives a brief description of the natural and man-made forces behind climate change and outlines climate variations in the past, together with a brief synopsis of likely future impacts of anthropogenic emissions of greenhouse gases. The paper also gives a briefing on the background, organisation and functioning of the Intergovernmental Panel on Climate Change (IPCC).

  17. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 x S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 x S^7, and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  18. Background reduction in a young interferometer biosensor

    NARCIS (Netherlands)

    Mulder, H. K P; Subramaniam, V.; Kanger, J. S.

    2014-01-01

    Integrated optical Young interferometer (IOYI) biosensors are among the most sensitive label-free biosensors. Detection limits are in the range of 20 fg/mm2. The applicability of these sensors is however strongly hampered by the large background that originates from both bulk refractive index

  19. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data; the data is stored for fast access in a hierarchical...
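
The generic pixel-wise background subtraction that such model-based approaches improve upon can be sketched as a running average. This is the baseline technique, not the hierarchical model of the paper; frame sizes and thresholds are illustrative:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average of the scene background."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Flag pixels deviating from the background model by more than
    `threshold` grey levels as foreground."""
    return np.abs(frame.astype(float) - background) > threshold

# Toy 4x4 greyscale frames: a static scene plus one bright foreground pixel
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
background = scene.copy()
frame = scene + rng.normal(0, 2, scene.shape)   # sensor noise only
frame[1, 1] = 200.0                             # foreground object

mask = foreground_mask(background, frame)
background = update_background(background, frame)
print(mask.astype(int))   # only the (1, 1) pixel is flagged
```

Because each pixel is treated independently here, correlated noise or moving backgrounds produce speckled masks; exploiting inter-pixel correlation, as the paper does, is what cleans this up.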

  20. The projected background for the CUORE experiment

    Science.gov (United States)

    Alduino, C.; Alfonso, K.; Artusa, D. R.; Avignone, F. T.; Azzolini, O.; Banks, T. I.; Bari, G.; Beeman, J. W.; Bellini, F.; Benato, G.; Bersani, A.; Biassoni, M.; Branca, A.; Brofferio, C.; Bucci, C.; Camacho, A.; Caminata, A.; Canonica, L.; Cao, X. G.; Capelli, S.; Cappelli, L.; Carbone, L.; Cardani, L.; Carniti, P.; Casali, N.; Cassina, L.; Chiesa, D.; Chott, N.; Clemenza, M.; Copello, S.; Cosmelli, C.; Cremonesi, O.; Creswick, R. J.; Cushman, J. S.; D'Addabbo, A.; Dafinei, I.; Davis, C. J.; Dell'Oro, S.; Deninno, M. M.; Di Domizio, S.; Di Vacri, M. L.; Drobizhev, A.; Fang, D. Q.; Faverzani, M.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Gladstone, L.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Hansen, E.; Heeger, K. M.; Hennings-Yeomans, R.; Hickerson, K. P.; Huang, H. Z.; Kadel, R.; Keppel, G.; Kolomensky, Yu. G.; Leder, A.; Ligi, C.; Lim, K. E.; Ma, Y. G.; Maino, M.; Marini, L.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Mosteiro, P. J.; Napolitano, T.; Nastasi, M.; Nones, C.; Norman, E. B.; Novati, V.; Nucciotti, A.; O'Donnell, T.; Ouellet, J. L.; Pagliarone, C. E.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pessina, G.; Pettinacci, V.; Piperno, G.; Pira, C.; Pirro, S.; Pozzi, S.; Previtali, E.; Rosenfeld, C.; Rusconi, C.; Sakai, M.; Sangiorgio, S.; Santone, D.; Schmidt, B.; Schmidt, J.; Scielzo, N. D.; Singh, V.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tomei, C.; Trentalange, S.; Vignati, M.; Wagaarachchi, S. L.; Wang, B. S.; Wang, H. W.; Welliver, B.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zhang, G. Q.; Zhu, B. X.; Zimmermann, S.; Zucchelli, S.; Laubenstein, M.

    2017-08-01

    The Cryogenic Underground Observatory for Rare Events (CUORE) is designed to search for neutrinoless double beta decay of ^{130}Te with an array of 988 TeO_2 bolometers operating at temperatures around 10 mK. The experiment is currently being commissioned in Hall A of Laboratori Nazionali del Gran Sasso, Italy. The goal of CUORE is to reach a 90% C.L. exclusion sensitivity on the ^{130}Te decay half-life of 9 × 10^{25} years after 5 years of data taking. The main issue to be addressed to accomplish this aim is the rate of background events in the region of interest, which must not be higher than 10^{-2} counts/keV/kg/year. We developed a detailed Monte Carlo simulation, based on results from a campaign of material screening, radioassays, and bolometric measurements, to evaluate the expected background. This was used over the years to guide the construction strategies of the experiment and we use it here to project a background model for CUORE. In this paper we report the results of our study and our expectations for the background rate in the energy region where the peak signature of neutrinoless double beta decay of ^{130}Te is expected.