International Nuclear Information System (INIS)
Wolfangel, R.G.
1976-01-01
The invention concerns dispersions that may be used for preparing radio-isotopic tracers, technetium-labelled dispersions, processes for preparing these dispersions and their use as tracers. Technetium-99m sulphur colloids are utilized as scintillation tracers to image the reticulo-endothelial system, particularly the liver and spleen. A dispersion is provided which only requires the addition of a radioactive nuclide to form a radioactively labelled dispersion that can be injected as a tracer. It is formed of a tin-sulphur colloid dispersed in an aqueous buffer solution. Such a reagent has the advantage of being safe, reliable and easier to use. The colloid can be prepared more quickly since additions of several different reagents are avoided. There is no need for heating, and no hydrogen sulphide, a toxic gas, is used.
Radio-isotope production using laser Wakefield accelerators
International Nuclear Information System (INIS)
Leemans, W.P.; Rodgers, D.; Catravas, P.E.; Geddes, C.G.R.; Fubiani, G.; Toth, C.; Esarey, E.; Shadwick, B.A.; Donahue, R.; Smith, A.; Reitsma, A.
2001-01-01
A 10 Hz, 10 TW solid state laser system has been used to produce electron beams suitable for radio-isotope production. The laser beam was focused using a 30 cm focal length f/6 off-axis parabola on a gas plume produced by a high pressure pulsed gas jet. Electrons were trapped and accelerated by high gradient wakefields excited in the ionized gas through the self-modulated laser wakefield instability. The electron beam was measured to contain in excess of 5 nC per bunch. A composite Pb/Cu target was used to convert the electron beam into gamma rays which subsequently produced radio-isotopes through (gamma, n) reactions. Isotope identification through gamma-ray spectroscopy and half-life measurements demonstrated that Cu-61 was produced, which indicates that 20-25 MeV gamma rays were produced, and hence electrons with energies greater than 25-30 MeV. The production of high energy electrons was independently confirmed using a bending magnet spectrometer. The measured spectra had an exponential distribution with a 3 MeV width. The amount of activation was on the order of 2.5 uCi after 3 hours of operation at 1 Hz. Future experiments will aim at increasing this yield by post-accelerating the electron beam using a channel-guided laser wakefield accelerator.
Tau reconstruction and identification algorithm
Indian Academy of Sciences (India)
CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. The production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector, and has been used to measure the performance of the tau identification algorithms by ...
Improved autonomous star identification algorithm
International Nuclear Information System (INIS)
Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong
2015-01-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of a given navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbour stars are adopted to construct the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)
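The rotation-invariant feature construction described above (logarithms of the plane distances from a navigation star to its neighbour stars) can be sketched as follows. This is an illustrative assumption-laden sketch: the function names and the sorting step are not the paper's exact LPT-based construction.

```python
import math

def star_feature(nav_star, neighbors):
    """Rotation-invariant feature vector for a navigation star:
    sorted logarithms of the plane distances to its neighbor stars.
    Distances are unchanged by image rotation, so the feature is too.
    (Hypothetical sketch, not the paper's exact construction.)"""
    x0, y0 = nav_star
    dists = [math.hypot(x - x0, y - y0) for x, y in neighbors]
    return sorted(math.log(d) for d in dists if d > 0)

def rotate(points, angle, center=(0.0, 0.0)):
    """Rotate 2D points about a center (helper for the demo)."""
    cx, cy = center
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]
```

Because only inter-star distances enter the feature, rotating the whole stellar image leaves the vector unchanged, which is what removes the need for a circular shift during matching.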
High radio-isotope uptakes in patients with hypothyroidism
Energy Technology Data Exchange (ETDEWEB)
Wing, J.; Kalk, W.J.; Ganda, C. (University of the Witwatersrand, Johannesburg (South Africa). Dept. of Medicine)
1982-12-04
Hypothyroidism is usually associated with a low radio-isotope uptake by the thyroid gland. We report 8 cases of Hashimoto's thyroiditis with clinical and biochemical hypothyroidism and with borderline high or overtly increased technetium-99m pertechnetate and/or iodine-131 uptakes.
Library correlation nuclide identification algorithm
International Nuclear Information System (INIS)
Russ, William R.
2007-01-01
A novel nuclide identification algorithm, Library Correlation Nuclide Identification (LibCorNID), is proposed. In addition to the spectrum, LibCorNID requires the standard energy, peak shape and peak efficiency calibrations. Input parameters include tolerances for some expected variations in the calibrations, a minimum relative nuclide peak area threshold, and a correlation threshold. Initially, the measured peak spectrum is obtained as the residual after baseline estimation via peak erosion, removing the continuum. Library nuclides are filtered by examining the possible nuclide peak areas in terms of the measured peak spectrum and applying the specified relative area threshold. Remaining candidates are used to create a set of theoretical peak spectra based on the calibrations and library entries. These candidate spectra are then simultaneously fit to the measured peak spectrum while also optimizing the calibrations within the bounds of the specified tolerances. Each candidate with optimized area still exceeding the area threshold undergoes a correlation test. The normalized Pearson's correlation value is calculated as a comparison of the optimized nuclide peak spectrum to the measured peak spectrum with the other optimized peak spectra subtracted. Those candidates with correlation values that exceed the specified threshold are identified and their optimized activities are output. An evaluation of LibCorNID was conducted to verify identification performance in terms of detection probability and false alarm rate. LibCorNID has been shown to perform well compared to standard peak-based analyses.
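The correlation test at the heart of LibCorNID can be sketched as below. The function name and the simple residual construction are assumptions for illustration; the baseline-erosion and simultaneous-fitting stages are omitted.

```python
import numpy as np

def correlation_score(candidate, measured, others):
    """Normalized Pearson correlation of one candidate nuclide's peak
    spectrum against the measured peak spectrum with all *other* optimized
    candidate spectra subtracted (illustrative sketch of the LibCorNID
    correlation test; thresholds and the fitting step are omitted)."""
    residual = measured - sum(others, np.zeros_like(measured))
    a = candidate - candidate.mean()
    b = residual - residual.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

A candidate whose peaks explain the residual scores near 1 and is identified; one whose peaks fall elsewhere scores low and is rejected.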
Poumellec, M-A; Dejode, M; Figl, A; Darcourt, J; Haudebourg, J; Sabah, Y; Voury, A; Martaens, A; Barranger, E
2016-04-01
To assess the feasibility of sentinel lymph node biopsy (SLNB) using an optonuclear probe after injections of indocyanine green (ICG) and radio-isotope (RI). Twenty-one patients with localized breast cancer and unsuspicious axillary nodes underwent SLNB after injections of both ICG and radio-isotope. One or more SLNs were identified in all 21 patients (identification rate of 100%). The median number of SLNs was 2 (range 1-3). Twenty SLNs were both radioactive and fluorescent (54.1%), 11 were fluorescent only (29.7%) and 6 were radioactive only (16.2%). Seven patients had a metastatic SLN (8 SLNs overall). Among them, one had a micrometastatic SLN, five had a macrometastatic SLN and one patient had two macrometastatic SLNs. Among the 8 metastatic SLNs, 5 were both fluorescent and radioactive, 2 were fluorescent only and 1 was radioactive only. SLN detection using an optonuclear probe after indocyanine green and radio-isotope injections is effective and could be, after validation by a randomized trial, a reliable alternative to blue dye injection for teams who consider combined detection the reference. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Star identification methods, techniques and algorithms
Zhang, Guangjun
2017-01-01
This book summarizes the research advances in star identification that the author's team has made over the past 10 years, systematically introducing the principles of star identification, general methods, key techniques and practicable algorithms. It also offers examples of hardware implementation and performance evaluation for the star identification algorithms. Star identification is the key step for celestial navigation and greatly improves the performance of star sensors, and as such the book includes the fundamentals of star sensors and celestial navigation, the processing of the star catalogue and star images, star identification using modified triangle algorithms, star identification using star patterns and neural networks, rapid star tracking using star matching between adjacent frames, as well as hardware implementation and performance tests for star identification. It is not only valuable as a reference book for star sensor designers and researchers working in pattern recognition and othe...
A new adaptive blind channel identification algorithm
International Nuclear Information System (INIS)
Peng Dezhong; Xiang Yong; Yi Zhang
2009-01-01
This paper addresses the blind identification of single-input multiple-output (SIMO) finite-impulse-response (FIR) systems. We first propose a new adaptive algorithm for the blind identification of SIMO FIR systems. Then, its convergence property is analyzed systematically. It is shown that under some mild conditions, the proposed algorithm is guaranteed to converge in the mean to the true channel impulse responses in both noisy and noiseless cases. Simulations are carried out to demonstrate the theoretical results.
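The abstract does not spell out the adaptive algorithm itself, but the classical batch cross-relation method for the same two-channel SIMO FIR problem gives a feel for how blind identification works: for a common input s, the outputs satisfy x1 * h2 = x2 * h1, so the stacked channel vector lies in a null space that can be recovered up to scale. This is a baseline illustration under those assumptions, not the authors' algorithm.

```python
import numpy as np

def conv_matrix(x, L):
    """Convolution matrix so that M @ h equals np.convolve(x, h)[L-1:len(x)]."""
    return np.array([[x[n - k] for k in range(L)]
                     for n in range(L - 1, len(x))])

def cross_relation_identify(x1, x2, L):
    """Batch cross-relation method (a classical baseline, not the paper's
    adaptive algorithm): since x2*h1 - x1*h2 = 0, the stacked vector
    [h1; h2] is the right singular vector of [M(x2), -M(x1)] associated
    with the smallest singular value. Recovered up to a scale factor."""
    A = np.hstack([conv_matrix(x2, L), -conv_matrix(x1, L)])
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return h[:L], h[L:]
```

Identifiability requires the two channels to share no common zeros and the input to be sufficiently exciting, which is essentially the "mild conditions" such methods assume.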
Experimental formulas and curves for estimating reactivity loss and radio-isotope yields on HWRR
International Nuclear Information System (INIS)
Liu Xi Zhi; Zhu HuanNan
1999-01-01
Based on elementary reactor physics concepts and years of experiments on the HWRR, a set of experimental formulas and curves has been obtained which can be used to estimate reactivity loss and radio-isotope yields. (author)
A Source Identification Algorithm for INTEGRAL
Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.
2008-12-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the Transient Matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples.
Energy Technology Data Exchange (ETDEWEB)
Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
ISINA: INTEGRAL Source Identification Network Algorithm
Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.
2008-11-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain), Czech Republic and Poland, and the participation of Russia and the USA. E-mail: simo@astro.soton.ac.uk
International Nuclear Information System (INIS)
Bunatyan, G.G.; Nikolenko, V.G.; Popov, A.V.
2010-01-01
We treat the production of desirable radio-isotopes by 238U photo-fission induced by the bremsstrahlung generated in a converter by an electron beam from a linear electron accelerator. We also consider radio-isotope production through 238U fission by the neutrons that originate in the 238U sample irradiated by that bremsstrahlung. The yield of the most widely used radio-isotope, 99Mo, is calculated. We compare the findings obtained in the present work with those obtained by treating the nuclear photo-neutron reaction. The risk of plutonium contamination of an irradiated uranium sample due to neutron capture by 238U is considered. We find that photo-neutron production of radio-isotopes proves more practicable than production by uranium photo- and neutron-fission. Both methods can be brought into action using the electron beam provided by modern linear accelerators.
The MAPLE-X concept dedicated to the production of radio-isotopes
International Nuclear Information System (INIS)
Heeds, W.
1985-06-01
MAPLE is a versatile new Canadian multi-purpose research reactor concept that meets the nuclear aspirations of developing countries. It is planned to convert the NRX reactor at Chalk River Nuclear Laboratories into MAPLE-X as a demonstration prototype of this concept and thereafter to dedicate its operation to the production of radio-isotopes. A description of MAPLE-X and details of molybdenum-99 production are given
Simple radio-isotopic method for the detection of bronchial inhalation during intensive care
Energy Technology Data Exchange (ETDEWEB)
Venot, J.; Veyriras, E.; Vandroux, J.C.; Bournaud, E.; Gastinne, H.; Beck, C.
1984-08-01
The high incidence of pneumopathy in intensive care units might be due to the pulmonary aspiration of gastric juice following gastro-oesophageal reflux. The paper describes a radio-isotopic method using material easy to install at the patient's bedside. This technique demonstrated aspiration of gastric juice in the lungs of 8 of 25 intensive care patients investigated. Such a method might be useful later to demonstrate that silent bronchial aspirations of gastric juice are responsible for pulmonary complications.
Glue-sniffing as a cause of a positive radio-isotope brain scan
Energy Technology Data Exchange (ETDEWEB)
Lamont, C M; Adams, F G
1982-08-01
Convulsions are a known complication of the acute intoxicant effects of solvent abuse. A radio-isotope brain scan done 9 months following status epilepticus secondary to toluene inhalation, in a previously normal school-boy, demonstrated several wedge-shaped areas of increased uptake, in both cerebral hemispheres, consistent with infarcts. It is worth remembering that a positive brain scan in a young person, with recent onset of epilepsy, may be due to glue-sniffing.
System parameter identification information criteria and algorithms
Chen, Badong; Hu, Jinchun; Principe, Jose C
2013-01-01
Recently, criterion functions based on information theoretic measures (entropy, mutual information, information divergence) have attracted attention and become an emerging area of study in signal processing and system identification domain. This book presents a systematic framework for system identification and information processing, investigating system identification from an information theory point of view. The book is divided into six chapters, which cover the information needed to understand the theory and application of system parameter identification. The authors' research pr
On flexible CAD of adaptive control and identification algorithms
DEFF Research Database (Denmark)
Christensen, Anders; Ravn, Ole
1988-01-01
SLLAB is a MATLAB-family software package for solving control and identification problems. This paper concerns the planning of a general-purpose subroutine structure for solving identification and adaptive control problems. A general-purpose identification algorithm is suggested, which allows a total redesign of the system within each sample. The necessary design parameters are evaluated and a decision vector is defined, from which the identification algorithm can be generated by the program. Using the decision vector, a decision-node tree structure is built up, where the nodes define...
Time-Delay System Identification Using Genetic Algorithm
DEFF Research Database (Denmark)
Yang, Zhenyu; Seested, Glen Thane
2013-01-01
Due to the unknown dead-time coefficient, time-delay system identification becomes a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, named First-Order-Plus-Dead-Time (FOPDT), by using the Genetic Algorithm (GA) technique. The qual...
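A minimal real-coded GA for FOPDT identification along these lines might look as follows. The population size, truncation selection, blend crossover and mutation scale are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fopdt_step(t, K, tau, theta):
    """Unit-step response of a First-Order-Plus-Dead-Time model."""
    y = K * (1.0 - np.exp(-(t - theta) / tau))
    return np.where(t >= theta, y, 0.0)

def ga_identify(t, y_meas, bounds, pop=60, gens=120, seed=1):
    """Real-coded GA minimizing the output MSE over (K, tau, theta).
    bounds: (lo, hi) sequences for [K, tau, theta]. Selection, crossover
    and mutation choices are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, float) for b in bounds)
    P = lo + rng.random((pop, 3)) * (hi - lo)

    def cost(p):
        return np.mean((fopdt_step(t, *p) - y_meas) ** 2)

    for _ in range(gens):
        f = np.array([cost(p) for p in P])
        elite = P[np.argsort(f)[: pop // 2]]           # truncation selection
        i = rng.integers(0, len(elite), (pop - len(elite), 2))
        a = rng.random((pop - len(elite), 1))
        kids = a * elite[i[:, 0]] + (1 - a) * elite[i[:, 1]]  # blend crossover
        kids += rng.normal(0, 0.02, kids.shape) * (hi - lo)   # mutation
        P = np.clip(np.vstack([elite, kids]), lo, hi)
    f = np.array([cost(p) for p in P])
    return P[np.argmin(f)]
```

Because the dead time theta makes the cost surface non-convex, a population-based search like this avoids the local minima that gradient methods can fall into.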
A robust firearm identification algorithm of forensic ballistics specimens
Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.
2017-09-01
There are several inherent difficulties in the existing firearm identification algorithms, including the need for physical interpretation and long processing times. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) of simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.
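The first stage of the pipeline, Laplacian sharpening, can be sketched as below; the 3x3 kernel and edge padding are assumed choices for illustration, not necessarily those of the study.

```python
import numpy as np

def laplacian_sharpen(img):
    """3x3 Laplacian sharpening: subtract the Laplacian from the image,
    boosting contrast at edges such as firing-pin impression rims.
    (Kernel and edge-replicate padding are illustrative assumptions.)"""
    k = np.array([[0, -1, 0],
                  [-1, 5, -1],
                  [0, -1, 0]], float)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # correlate by summing shifted, weighted views of the padded image
    return sum(k[i, j] * p[i:i + h, j:j + w]
               for i in range(3) for j in range(3))
```

Flat regions pass through unchanged (the kernel sums to 1), while intensity steps are exaggerated, which helps the subsequent clustering-based threshold separate the impression from noise.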
Parameter identification for structural dynamics based on interval analysis algorithm
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertainty identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimates of the identified parameters in a tight and accurate region. To overcome the lack of a sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, the algorithm can obtain not only the central estimates of the parameters but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm is presented using a recursive formula. Finally, to verify the accuracy of the proposed method, two numerical examples are evaluated against three identification criteria.
Radio-isotope uptake by tumours. Introduction to their clinical use
International Nuclear Information System (INIS)
Blanchon, P.
1975-01-01
Before each group of authors presents its results, current tendencies and progress in research may be briefly recalled. The object is to make available to clinicians a technique which is both reliable and relatively easy to perform. The improvements are due to the ease of recording with the gamma camera and, apart from Hg-197, to the introduction of new radio-isotopes: Ga-67, bleomycin labelled with Co-57, and carrier-free Cu-67. Tumours which escape detection have thus become rare and the specificity of the methods is reinforced. But the difficulties due to possible fixation on advanced inflammatory lesions have not yet been totally overcome.
International Nuclear Information System (INIS)
Ben-Zion, S.
1999-01-01
The radio-isotopes section of the Ministry of Environment is responsible for preventing environmental hazards from radio-isotopes 'from cradle to grave'. The management and supervision of radioactive materials covers about 350 institutes in Israel. We deal with the implementation and enforcement of the environmental regulations and safety standards, and with licensing for each institution and installation. Among our tasks are the following: follow-up of the import, transportation, distribution, usage, storage and disposal of radio-isotopes, as well as legislation, risk assessments, inspection and education. We also participate in committees and working groups discussing specific topics: radioactive stores, low-level radioactive waste disposal, Y2K, GIS, penalty charging, transportation and more.
Genetic Algorithm-Based Identification of Fractional-Order Systems
Directory of Open Access Journals (Sweden)
Shengxi Zhou
2013-05-01
Fractional calculus has become an increasingly popular tool for modeling the complex behaviors of physical systems from diverse domains. One of the key issues in applying fractional calculus to engineering problems is the parameter identification of fractional-order systems. A time-domain identification algorithm based on a genetic algorithm (GA) is proposed in this paper. The multi-variable parameter identification is converted into a parameter optimization problem by applying GA to the identification of fractional-order systems. To evaluate the identification accuracy and stability, the time-domain output error considering the condition variation is designed as the fitness function for parameter optimization. The identification process is established under various noise levels and excitation levels. The effects of external excitation and the noise level on the identification accuracy are analyzed in detail. The simulation results show that the proposed method can identify the parameters of both commensurate and non-commensurate fractional-order systems from data with noise. It is also observed that the excitation signal is an important factor influencing the identification accuracy of fractional-order systems.
Merged Search Algorithms for Radio Frequency Identification Anticollision
Directory of Open Access Journals (Sweden)
Bih-Yaw Shih
2012-01-01
The arbitration algorithm for an RFID system is used to arbitrate among all the tags, avoiding collisions when multiple tags are present in the interrogation field of a transponder. A splitting algorithm called Binary Search Tree (BST) is well known for multi-tag arbitration. In the current study, a splitting-based scheme called Merged Search Tree is proposed to capture identification codes correctly for anticollision. The performance of the proposed algorithm is compared with the original BST in terms of the time and power consumed during the arbitration process. The results show that the proposed model can reduce the searching time and power consumed, achieving better arbitration performance.
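The baseline BST splitting procedure that the proposed Merged Search Tree is compared against can be sketched as a prefix-tree walk; the reader model below is a hypothetical simplification for illustration.

```python
def bst_arbitrate(tags, bits):
    """Binary-tree (splitting) arbitration sketch: the reader queries an
    ID prefix; if more than one tag responds it is a collision, and the
    prefix is split into its two children. Assumes unique `bits`-bit tag
    IDs given as bit-strings (a hypothetical reader model).
    Returns (identified ids, number of reader queries)."""
    identified, queries, stack = [], 0, [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])          # exactly one tag answers
        elif len(responders) > 1 and len(prefix) < bits:
            stack.extend([prefix + "1", prefix + "0"])  # collision: split
    return identified, queries
```

The query count is the cost metric such schemes try to reduce; a merged-tree variant aims to identify the same tag set with fewer queries and hence less time and power.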
Cover song identification by sequence alignment algorithms
Wang, Chih-Li; Zhong, Qian; Wang, Szu-Ying; Roychowdhury, Vwani
2011-10-01
Content-based music analysis has drawn much attention due to the rapidly growing digital music market. This paper describes a method that can be used to effectively identify cover songs. A cover song is a song that preserves only the crucial melody of its reference song but may differ in other acoustic properties. Hence the beat/chroma-synchronous chromagram, which is insensitive to variations in the timbre or rhythm of songs but sensitive to the melody, is chosen. Key transposition is achieved by cyclically shifting the chromatic domain of the chromagram. By using a Hidden Markov Model (HMM) to obtain the time sequences of songs, the system is made even more robust. With the Smith-Waterman alignment algorithm, similar structure or length between a cover song and its reference is not required.
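The final matching step, Smith-Waterman local alignment over the songs' discretized time sequences (e.g. per-beat HMM states), can be sketched as follows; the scoring parameters are illustrative assumptions.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two symbol sequences.
    Local alignment finds the best-matching subsequences, so the songs
    need not share overall structure or length. (Scoring parameters are
    illustrative assumptions.)"""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

A high score indicates a long shared melodic segment even when it appears at different positions in the two songs, which is exactly the cover-song situation.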
Channel Access Algorithm Design for Automatic Identification System
Institute of Scientific and Technical Information of China (English)
Oh Sang-heon; Kim Seung-pum; Hwang Dong-hwan; Park Chan-sik; Lee Sang-jeong
2003-01-01
The Automatic Identification System (AIS) is maritime equipment that allows an efficient exchange of navigational data between ships, and between ships and shore stations. It utilizes a channel access algorithm which can quickly resolve conflicts without any intervention from control stations. In this paper, a design of a channel access algorithm for the AIS is presented. The input/output relationship of each access algorithm module is defined by drawing the state transition diagram, dataflow diagram and flowchart based on the technical standard, ITU-R M.1371. In order to verify the designed channel access algorithm, a simulator was developed using the C/C++ programming language. The results show that the proposed channel access algorithm can properly allocate transmission slots and meet the operational performance requirements specified by the technical standard.
Time-Delay System Identification Using Genetic Algorithm
DEFF Research Database (Denmark)
Yang, Zhenyu; Seested, Glen Thane
2013-01-01
problem through an identification approach using the real-coded Genetic Algorithm (GA). The desired FOPDT/SOPDT model is directly identified based on the measured system's input and output data. In order to evaluate the quality and performance of this GA-based approach, the proposed method is compared...
Proportionate Minimum Error Entropy Algorithm for Sparse System Identification
Directory of Open Access Journals (Sweden)
Zongze Wu
2015-08-01
Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS) algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE) criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE) algorithm for sparse system identification, which may achieve much better performance than MSE-based methods, especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures mean square convergence. Simulation results confirm the excellent performance of the new algorithm.
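The PNLMS baseline that the abstract contrasts with PMEE can be sketched as follows; the step size and the particular proportionality rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pnlms(x, d, L, mu=0.5, rho=0.01, delta=0.01, eps=1e-6):
    """Proportionate NLMS sketch for identifying an L-tap FIR system from
    input x and desired output d. Each tap's step size is scaled roughly
    in proportion to its current magnitude, which speeds convergence on
    sparse systems. (Constants are illustrative assumptions.)"""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]              # regressor, newest first
        e = d[n] - w @ u                          # a-priori error (MSE criterion)
        g = np.maximum(rho * np.max(np.abs(w)) + delta, np.abs(w))
        g /= g.sum()                              # proportionate gains, sum to 1
        w += mu * e * g * u / (u @ (g * u) + eps)
    return w
```

PMEE keeps this proportionate weighting but replaces the squared-error term with a gradient of an error-entropy estimate, which is what makes it robust to heavy-tailed noise.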
Sahy, Diana; Condon, Daniel J.; Hilgen, Frederik J.|info:eu-repo/dai/nl/102639876; Kuiper, Klaudia F.|info:eu-repo/dai/nl/258125772
2017-01-01
A significant discrepancy of up to 0.6 Myr exists between radio-isotopically calibrated and astronomically tuned time scales of the late Eocene-Oligocene. We explore the possible causes of this discrepancy through the acquisition of “high-precision” 206Pb/238U dating of zircons from 11 volcanic ash
International Nuclear Information System (INIS)
Kushita, Kouhei
2001-12-01
This report reviews the recent studies on the stable and radio-isotopes of chlorine from a viewpoint of environmental science, partly including historic references on this element. First, general properties, occurrence, and utilization of chlorine are described. Secondly, current status and research works on chlorine compounds, which have attracted special attention in recent years as environmentally hazardous materials, are reported. Thirdly, research works on the stable chlorine isotopes 35Cl and 37Cl are described, with a focus on the newly developed techniques of isotope ratio mass spectrometry (IRMS) and thermal ionization mass spectrometry (TIMS). Fourthly, recent research works on chlorine radioisotopes, 36Cl etc., are described, focusing on the development of accelerator mass spectrometry (AMS) and its application to geochemistry and other fields. Finally, taking account of the above-mentioned recent works on Cl isotopes, possible future research subjects are discussed. (author)
Tritium in the environment and around the institution for the usage of radio-isotopes
International Nuclear Information System (INIS)
Matsunami, Tadao; Ishiyama, Toshio; Kobashigawa, Akira; Yamada, Osamu.
1986-01-01
The behavior of tritium in the environment and the tritium content of liquid wastes were investigated at a facility for the use of radioisotopes from 1982 to 1985. Rain water, tap water, well water and waste water samples were collected at the facility. River water samples were collected three times from the four main rivers in southern Osaka. Monthly tritium concentrations were found to fall in the range 0 to 219 pCi/l since January 1982. The increases in tritium concentration in July and August 1982 are possibly ascribed to the 26th Chinese nuclear explosion. The tritium concentrations ranked as follows: waste water (an outlet of drainage) < tap water < rain water < river water < well water. (author)
The amount of radio isotope used and control of its use in our hospital
International Nuclear Information System (INIS)
Kawanishi, Yukio; Matsuura, Motoo
1981-01-01
The most difficult point in handling radioisotopes (RI) is that the operator is liable to be exposed before being aware of it. In the case of X-rays from X-ray equipment, the risk can be eliminated entirely just by switching off. With RI, however, there is always the risk of contamination, and the operator often has to work in close contact with the material. Of course, the handling of and protective measures against RI are specified by law, and the maximum permissible dose is never exceeded in our hospital. But it is not sufficient merely to stay below the maximum level; we ought to go further, limiting exposure to the bare minimum and studying the conditions of exposure during work. The present report describes the amount of RI handled and the exposure experienced by the operators in the RI room of our hospital, together with a discussion of various problems. (author)
A new algorithmic approach for fingers detection and identification
Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad
2013-03-01
Gesture recognition is concerned with interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important constraints, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of the human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected component labeling scheme are employed for background elimination and hand detection, respectively. The proposed approach identifies fingers in a real-time environment while keeping memory and time costs as low as possible.
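The background-elimination and hand-detection steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a fixed global threshold stands in for their dynamic thresholding, and the function names are hypothetical.

```python
from collections import deque

def threshold(image, t):
    """Binarize a grayscale image (list of lists); a fixed threshold stands
    in here for the paper's dynamic thresholding."""
    return [[1 if px > t else 0 for px in row] for row in image]

def connected_components(binary):
    """4-connected component labeling via breadth-first flood fill.
    Returns a label image (0 = background) and the number of components."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not labels[i][j]:
                current += 1
                labels[i][j] = current
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

The largest labeled component can then be taken as the hand region before any finger-level analysis.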
Automatic identification of otological drilling faults: an intelligent recognition algorithm.
Cao, Tianyang; Li, Xisheng; Gao, Zhiqiang; Feng, Guodong; Shen, Peng
2010-06-01
This article presents an intelligent recognition algorithm that can recognize milling states of the otological drill by fusing multi-sensor information. An otological drill was modified by the addition of sensors. The algorithm was designed according to features of the milling process and is composed of a characteristic curve, an adaptive filter and a rule base. The characteristic curve can weaken the impact of the unstable normal milling process and preserve the features of drilling faults. The adaptive filter is capable of suppressing interference in the characteristic curve by fusing multi-sensor information. The rule base can identify drilling faults from the filtered data. The experiments were repeated on fresh porcine scapulas, including normal milling and two drilling faults. The algorithm achieved high identification rates. This study shows that the intelligent recognition algorithm can identify drilling faults under interference conditions. (c) 2010 John Wiley & Sons, Ltd.
A robust star identification algorithm with star shortlisting
Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon
2018-05-01
A star tracker provides the most accurate attitude solution, in terms of arc seconds, compared with other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space" (LIS) mode. Star pattern recognition, also known as the star identification algorithm, forms the most crucial part of a star tracker in LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. First, the star IDs are shortlisted based on worst-case patch mismatch; the stars in the image are then identified by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images with magnitude uncertainty, noise stars, positional deviation, and varying field-of-view size, and is benchmarked against state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on 3104 real star images captured by the star tracker SST-20S, currently mounted on a satellite. The proposed technique achieves an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results show that the proposed technique is highly robust and achieves an identification speed suitable for actual space applications.
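The angular-match idea at the core of such algorithms can be illustrated with a toy voting scheme over inter-star angles. This is a sketch under simplifying assumptions, not the SST-20S pipeline; the function names and the tolerance are hypothetical.

```python
import math

def ang(u, v):
    """Angular distance in radians between two unit vectors."""
    c = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(c)

def identify(observed, catalog, tol=1e-3):
    """Toy pair-angle voting: every observed pair whose inter-star angle
    matches a catalog pair angle within tol votes for both candidate IDs;
    each observed star keeps the catalog ID with the most votes."""
    pairs = [(ang(catalog[i][1], catalog[j][1]), catalog[i][0], catalog[j][0])
             for i in range(len(catalog)) for j in range(i + 1, len(catalog))]
    votes = [{} for _ in observed]
    for i in range(len(observed)):
        for j in range(i + 1, len(observed)):
            a = ang(observed[i], observed[j])
            for cat_angle, id1, id2 in pairs:
                if abs(a - cat_angle) < tol:
                    for k in (i, j):
                        for cid in (id1, id2):
                            votes[k][cid] = votes[k].get(cid, 0) + 1
    return [max(v, key=v.get) if v else None for v in votes]
```

Real algorithms add shortlisting and a confirmation pass; the voting step above only shows why distinctive pair angles make the match unambiguous.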
Algorithm of Dynamic Model Structural Identification of the Multivariable Plant
Directory of Open Access Journals (Sweden)
L. M. Blokhin
2004-02-01
A new algorithm for dynamic model structural identification of a multivariable stabilized plant with observable and unobservable disturbances in regular operating modes is offered in this paper. With the help of the offered algorithm it is possible to determine the "perturbed" dynamic models not only of the plant, but also the dynamic characteristics of observable and unobservable random disturbances, taking into account the absence of correlation between the disturbances themselves and between the control inputs and the unobservable perturbations.
Particle identification algorithms for the PANDA Endcap Disc DIRC
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.
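Likelihood-based PID of the kind described reduces, in its simplest form, to comparing hypothesis log-likelihoods of a measured observable. The sketch below is illustrative only; the Gaussian model and every number are assumptions, not PANDA's actual likelihoods.

```python
import math

def loglike(x, mean, sigma):
    """Gaussian log-likelihood of one observable under one mass hypothesis."""
    return -0.5 * ((x - mean) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def pid(x, mean_pi, mean_k, sigma):
    """Classify by the log-likelihood difference lnL(pi) - lnL(K);
    its sign picks the hypothesis and its magnitude measures confidence."""
    dll = loglike(x, mean_pi, sigma) - loglike(x, mean_k, sigma)
    return ("pion" if dll > 0 else "kaon"), dll
```

With hypothetical hypothesis means of 825 and 820 (arbitrary units) and a resolution of 1.7, the separation power |mean_pi - mean_k|/sigma is about 3 standard deviations, matching the design goal quoted above.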
Development of an automatic identification algorithm for antibiogram analysis
Costa, LFR; Eduardo Silva; Noronha, VT; Ivone Vaz-Moreira; Olga C Nunes; de Andrade, MM
2015-01-01
Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed as a solution to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using suscepti...
Malicious Cognitive User Identification Algorithm in Centralized Spectrum Sensing System
Directory of Open Access Journals (Sweden)
Jingbo Zhang
2017-11-01
Collaborative spectrum sensing can fuse the perceived results of multiple cognitive users and thus improve the accuracy of the perceived results. However, the multi-source nature of the perceived results leads to security problems in the system. When the probability of a malicious user attack is high, traditional algorithms can correctly identify the malicious users. When the attack probability is low, however, it is almost impossible for traditional algorithms to distinguish correctly between honest and malicious users, which greatly reduces the perceived performance. To address this problem, based on the β function and a feedback-iteration mathematical method, this paper proposes a malicious user identification algorithm under multi-channel cooperative conditions (β-MIAMC), which comprehensively assesses each cognitive user's performance on multiple sub-channels to identify the malicious users. Simulation results show that under the same attack probability, the β-MIAMC algorithm identifies malicious users more accurately than the traditional algorithm, reducing the false alarm probability of malicious users by more than 20%. When the attack probability is greater than 7%, the proposed algorithm identifies the malicious users with 100% certainty.
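The β-function reputation idea can be sketched as follows. This is a simplified single-channel illustration, not the published β-MIAMC algorithm; the threshold value and the use of the Beta-distribution mean as the score are assumptions.

```python
def beta_reputation(history, threshold=0.35, rounds=3):
    """history[u][t] is user u's binary sensing report in slot t. Iterate:
    (1) fuse reports by reputation-weighted majority vote, (2) re-score each
    user against the fused decision with the Beta-distribution mean
    (s+1)/(s+f+2). Users whose reputation stays low are flagged malicious."""
    n_users, n_slots = len(history), len(history[0])
    rep = [0.5] * n_users
    for _ in range(rounds):
        fused = []
        for t in range(n_slots):
            w1 = sum(rep[u] for u in range(n_users) if history[u][t] == 1)
            w0 = sum(rep[u] for u in range(n_users) if history[u][t] == 0)
            fused.append(1 if w1 > w0 else 0)
        for u in range(n_users):
            s = sum(1 for t in range(n_slots) if history[u][t] == fused[t])
            rep[u] = (s + 1) / (n_slots + 2)  # Beta mean with f = n_slots - s
    return rep, [u for u in range(n_users) if rep[u] < threshold]
```

The feedback iteration matters: once a liar's reputation drops, its reports carry less weight in the fusion, which sharpens the fused decision and, in turn, the reputations.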
LHCb - Novel Muon Identification Algorithms for the LHCb Upgrade
Cogoni, Violetta
2016-01-01
The present LHCb Muon Identification procedure was optimised to guarantee high muon detection efficiency at an instantaneous luminosity $\mathcal{L}$ of $2\cdot10^{32}$~cm$^{-2}$~s$^{-1}$. In the current data taking conditions, the luminosity is higher than foreseen and the low energy background contribution to the visible rate in the muon system is larger than expected. A worse situation is expected for Run III, when LHCb will operate at $\mathcal{L} = 2\cdot10^{33}$~cm$^{-2}$~s$^{-1}$: the high particle fluxes will deteriorate the muon detection efficiency, because of the increased dead time of the electronics, and in particular will worsen the muon identification capabilities, due to the increased background contribution, with deleterious consequences especially for analyses requiring high-purity signals. In this context, possible new algorithms for muon identification will be illustrated. In particular, the performance on combinatorial background rejection will be shown, together with the ...
Comparing Different Fault Identification Algorithms in Distributed Power System
Alkaabi, Salim
A power system is a large, complex system that delivers electrical power from the generation units to the consumers. As the demand for electrical power has increased, distributed generation has been introduced into the power system. Faults may occur in the power system at any time and in different locations. These faults can cause major damage, as they may lead to full failure of the power system, which makes this an important area for research. Using distributed generation also makes it harder to identify the location of faults in the system. The main objective of this work is to test and compare different fault location identification algorithms on a power system with varying amounts of power injected by distributed generators. The algorithms were tested on the IEEE 34-node test feeder using MATLAB, and the results were compared to find when these algorithms might fail and how reliable these methods are.
A simple algorithm for the identification of clinical COPD phenotypes
DEFF Research Database (Denmark)
Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim
2017-01-01
This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis, resulting in the identification of subgroups, for which clinical relevance was determined by comparing 3-year all-cause mortality. Classification and regression trees (CARTs) were used to develop an algorithm for allocating patients to these subgroups. This algorithm was tested in 3651 patients from the COPD Cohorts Collaborative International Assessment (3CIA) initiative. Cluster analysis identified five subgroups of COPD patients with different clinical characteristics (especially regarding severity of respiratory disease and the presence of cardiovascular comorbidities and diabetes). The CART-based algorithm indicated
Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Meenakshi Panda
2014-01-01
A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common Byzantine faults such as stuck at zero, stuck at one, and random data. In the proposed approach, each sensor node gathers observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node finds that a faulty sensor node is present, it compares the observed data with its neighbors' data and predicts the probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensor data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency and message complexity.
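The neighbor-comparison step can be sketched as below. This is a compact stand-in, not the paper's algorithm: the paper compares against the neighbor mean and then diffuses fault information in a second pass, whereas this sketch uses the neighbor median, which resists the faulty neighbors' own readings in a single pass.

```python
import statistics

def detect_faults(readings, neighbors, theta=3.0):
    """Flag a node as faulty when its reading deviates from the median of
    its neighbours' readings by more than theta. The median (rather than
    the paper's mean plus diffusion step) keeps a single faulty neighbour
    from skewing the reference, covering stuck-at and random-data faults."""
    return {node: abs(val - statistics.median([readings[m] for m in neighbors[node]])) > theta
            for node, val in readings.items()}
```

With a mean-based reference, a single stuck-at-zero neighbor shifts every healthy node's reference value, which is exactly why the paper adds the diffusion pass.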
An intelligent identification algorithm for the monoclonal picking instrument
Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun
2017-11-01
Traditional colony selection is mainly a manual operation, which is inefficient and highly subjective. It is therefore important to develop an automatic monoclonal-picking instrument, and the critical stage of automatic monoclonal picking and intelligent optimal selection is the identification algorithm. An auto-screening algorithm based on a Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. From the basic morphological features of a colony, the system computes a series of morphological parameters step by step. Through the establishment of a maximal margin classifier, and based on an analysis of the growth trend of the colony, the monoclonal colonies are selected. The experimental results showed that the auto-screening algorithm could distinguish regular colonies from the others, meeting the requirements for the various parameters.
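A maximal-margin classifier of the kind described can be trained with a simple sub-gradient scheme. The sketch below is a Pegasos-style stand-in on hypothetical morphological features, not the paper's SVM implementation.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Pegasos-style sub-gradient training of a maximal-margin (linear SVM)
    classifier. X holds feature rows (e.g. colony area, circularity);
    y holds labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)                       # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]    # shrink (regularize)
            if margin < 1.0:                            # margin violation
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

The regularization weight lam controls the margin/accuracy trade-off; a kernelized SVM would be needed if the morphological classes were not linearly separable.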
Induction Motor Parameter Identification Using a Gravitational Search Algorithm
Directory of Open Access Journals (Sweden)
Omar Avalos
2016-04-01
The efficient use of electrical energy is a topic that has attracted attention for its environmental consequences. Induction motors, meanwhile, represent the main component in most industries, consuming the highest percentage of energy in industrial facilities. This energy consumption depends on the operating conditions of the induction motor, which are imposed by its internal parameters. Since the internal parameters of an induction motor are not directly measurable, an identification process must be conducted to obtain them. In the identification process, parameter estimation is transformed into a multidimensional optimization problem in which the internal parameters of the induction motor are the decision variables. Under this approach, the complexity of the optimization problem tends to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Several algorithms based on evolutionary computation principles have been successfully applied to identify the optimal parameters of induction motors. However, most of them share an important limitation: they frequently deliver sub-optimal solutions as a result of an improper balance between exploitation and exploration in their search strategies. This paper presents an algorithm for optimal parameter identification of induction motors based on a recent evolutionary method, the gravitational search algorithm (GSA). Unlike most existing evolutionary algorithms, the GSA performs better on multimodal problems, avoiding critical flaws such as premature convergence to sub-optimal solutions. Numerical simulations have been conducted on several models to show the effectiveness of the proposed scheme.
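A minimal GSA sketch is shown below. The quadratic cost stands in for the motor's current-residual objective, and all constants (population size, decay schedule of G) are illustrative assumptions, not the paper's settings.

```python
import math
import random

def gsa(cost, bounds, n_agents=20, iters=100, g0=100.0, seed=1):
    """Minimal gravitational search algorithm: each agent's fitness sets its
    gravitational mass, and agents accelerate toward heavier (better) agents
    under a gravitational constant G that decays over the iterations."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best, best_f = None, float("inf")
    for t in range(iters):
        fit = [cost(x) for x in X]
        for x, f in zip(X, fit):
            if f < best_f:
                best, best_f = list(x), f
        worst, bst = max(fit), min(fit)
        m = [(worst - f) / (worst - bst + 1e-12) for f in fit]
        total = sum(m) + 1e-12
        M = [mi / total for mi in m]
        G = g0 * math.exp(-20.0 * t / iters)  # decaying gravitational constant
        P = [list(x) for x in X]              # position snapshot for forces
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(P[i], P[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (P[j][d] - P[i][d]) / dist
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] = min(max(P[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
    return best, best_f
```

In a motor-identification setting, `cost` would return the squared mismatch between measured and model-predicted currents for a candidate parameter vector.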
WATERSHED ALGORITHM BASED SEGMENTATION FOR HANDWRITTEN TEXT IDENTIFICATION
Directory of Open Access Journals (Sweden)
P. Mathivanan
2014-02-01
In this paper we develop a system for writer identification that involves four processing steps: preprocessing, segmentation, feature extraction and writer identification using a neural network. In the preprocessing phase, the handwritten text is subjected to slant removal in preparation for segmentation and feature extraction. After this step the text image goes through noise removal and gray-level conversion. The preprocessed image is then segmented using a morphological watershed algorithm, where the text lines are segmented into single words and then into single letters. Features are extracted from the segmented image by the Daubechies 5/3 integer wavelet transform to reduce training complexity [1, 6]. This process is lossless and reversible [10], [14]. The extracted features are given as input to a two-layer neural network for the writer identification process, and a target image is selected for each training run. The trained outputs obtained from the different targets help in text identification. It is a multilingual text analysis that provides simple and efficient text segmentation.
On-Site Inspection RadioIsotopic Spectroscopy (Osiris) System Development
Energy Technology Data Exchange (ETDEWEB)
Caffrey, Gus J. [Idaho National Laboratory, Idaho Falls, ID (United States); Egger, Ann E. [Idaho National Laboratory, Idaho Falls, ID (United States); Krebs, Kenneth M. [Idaho National Laboratory, Idaho Falls, ID (United States); Milbrath, B. D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jordan, D. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Warren, G. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wilmer, N. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-09-01
We have designed and tested hardware and software for the acquisition and analysis of high-resolution gamma-ray spectra during on-site inspections under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The On-Site Inspection RadioIsotopic Spectroscopy (Osiris) software filters the spectral data to display only radioisotopic information relevant to CTBT on-site inspections, e.g., 132I. A set of over 100 fission-product spectra was employed for Osiris testing. These spectra were measured, where possible, or generated by modeling. The synthetic test spectral compositions include non-nuclear-explosion scenarios, e.g., a severe nuclear reactor accident, and nuclear-explosion scenarios such as a vented underground nuclear test. Comparing its computer-based analyses to expert visual analyses of the test spectra, Osiris correctly identifies CTBT-relevant fission-product isotopes at the 95% level or better. The Osiris gamma-ray spectrometer is a mechanically-cooled, battery-powered ORTEC Transpec-100, chosen to avoid the need for liquid nitrogen during on-site inspections. The spectrometer was used successfully during the recent 2014 CTBT Integrated Field Exercise in Jordan. The spectrometer is controlled and the spectral data analyzed by a Panasonic Toughbook notebook computer. To date, software development has been the main focus of the Osiris project. In FY2016-17, we plan to modify the Osiris hardware, integrate the Osiris software and hardware, and conduct rigorous field tests to ensure that the Osiris system will function correctly during CTBT on-site inspections. The planned development will raise Osiris to technology readiness level TRL-8, transfer the Osiris technology to a commercial manufacturer, and demonstrate Osiris to potential CTBT on-site inspectors.
Improved chemical identification from sensor arrays using intelligent algorithms
Roppel, Thaddeus A.; Wilson, Denise M.
2001-02-01
Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data, using a fast, easily implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a priori knowledge about the health of the sensors in the array improved the chemical identification rate of an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
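The "best-match" fill-in strategy can be sketched in a few lines. The data layout is hypothetical: readings are fixed-length vectors, and templates are previously recorded responses with all sensors healthy.

```python
def best_match_fill(reading, healthy, templates):
    """Replace faulted channels with values from the stored template whose
    healthy channels best match the current reading (squared distance).
    `healthy` is the set of channel indices flagged as working."""
    best = min(templates,
               key=lambda t: sum((reading[i] - t[i]) ** 2 for i in healthy))
    return [reading[i] if i in healthy else best[i] for i in range(len(reading))]
```

The repaired vector can then be passed unchanged to a classifier trained on fully healthy arrays, which is what makes the approach fast and easy to retrofit.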
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes, so extracting and identifying those defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so methods are needed to extract medical error factors and to reduce the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and found to be closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
Identification tibia and fibula bone fracture location using scanline algorithm
Muchtar, M. A.; Simanjuntak, S. E.; Rahmat, R. F.; Mawengkang, H.; Zarlis, M.; Sitompul, O. S.; Winanto, I. D.; Andayani, U.; Syahputra, M. F.; Siregar, I.; Nasution, T. H.
2018-03-01
A fracture is a break in the continuity of a bone, usually caused by stress, trauma or weak bones. The tibia and fibula are two separate long bones in the lower leg, closely linked at the knee and ankle. Tibia/fibula fractures often happen when more force is applied to the bone than it can withstand. One way to identify the location of a tibia/fibula fracture is to read the X-ray image manually. Visual examination takes more time and allows errors in identification due to noise in the image. In addition, reading an X-ray requires enhancement against the background to make the objects in the image appear more clearly. Therefore, a method is required to help the radiologist identify the location of a tibia/fibula fracture. We propose several image-processing techniques for processing the cruris image and a scanline algorithm for identifying the fracture location. The results show that our proposed method is able to identify the fracture location with up to 87.5% accuracy.
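The scanline idea, locating the row where the bone's width profile breaks, can be sketched as follows. This is a toy version on an already segmented binary image; the drop factor is an assumption, not the paper's tuned parameter.

```python
import statistics

def fracture_row(binary, drop=0.5):
    """Scan a segmented bone image row by row; report the first row whose
    bone width falls below `drop` times the running median width, i.e. a
    sudden break in the continuity of the bone."""
    widths = [sum(row) for row in binary]
    for r in range(1, len(widths)):
        ref = statistics.median(widths[:r])
        if ref > 0 and widths[r] < drop * ref:
            return r
    return None
```

In practice this scan would follow the preprocessing steps the paper describes (noise removal and contrast enhancement), which make the segmentation reliable enough for the width profile to be meaningful.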
TADtool: visual parameter identification for TAD-calling algorithms.
Kruse, Kai; Hug, Clemens B; Hernández-Rodríguez, Benjamín; Vaquerizas, Juan M
2016-10-15
Eukaryotic genomes are hierarchically organized into topologically associating domains (TADs). The computational identification of these domains and their associated properties critically depends on the choice of suitable parameters of TAD-calling algorithms. To reduce the element of trial-and-error in parameter selection, we have developed TADtool: an interactive plot to find robust TAD-calling parameters with immediate visual feedback. TADtool allows the direct export of TADs called with a chosen set of parameters for two of the most common TAD calling algorithms: directionality and insulation index. It can be used as an intuitive, standalone application or as a Python package for maximum flexibility. TADtool is available as a Python package from GitHub (https://github.com/vaquerizaslab/tadtool) or can be installed directly via PyPI, the Python package index (tadtool). Contact: kai.kruse@mpi-muenster.mpg.de, jmv@mpi-muenster.mpg.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Quantifying time in sedimentary successions by radio-isotopic dating of ash beds
Schaltegger, Urs
2014-05-01
Sedimentary rock sequences are an accurate record of geological, chemical and biological processes throughout the history of our planet. If we want to know more about the duration or rates of some of these processes, we can apply methods of absolute age determination, i.e. radio-isotopic dating. Data of the highest precision and accuracy, and therefore of the highest degree of confidence, are obtained by chemical abrasion, isotope-dilution, thermal ionization mass spectrometry (CA-ID-TIMS) 238U-206Pb dating techniques applied to magmatic zircon from ash beds that are interbedded with the sediments. This technique allows high-precision age estimates at 0.1% uncertainty for single analyses, and down to 0.03% uncertainty for groups of statistically equivalent 206Pb/238U dates. Such high precision is needed, since we would like the precision to be approximately equivalent to or better than the (interpolated) duration of ammonoid zones in the Mesozoic (e.g., Ovtcharova et al. 2006), or to match the short feedback rates of biological, climatic, or geochemical cycles after giant volcanic eruptions in large igneous provinces (LIPs), e.g., at the Permian/Triassic or the Triassic/Jurassic boundaries. We also wish to establish as precisely as possible temporal coincidence between the sedimentary record and short-lived volcanic events within the LIPs. Precision and accuracy of the U-Pb data have to be traceable and quantifiable in absolute terms, achieved by direct reference to the international kilogram via an absolute calibration of the standard and isotopic tracer solutions. Only with perfect control of the precision and accuracy of radio-isotopic data can we confidently determine whether two ages of geological events are really different, and avoid mistaking interlaboratory or interchronometer biases for age differences. The development of unprecedented precision of CA-ID-TIMS 238U-206Pb dates led to the recognition of protracted growth of zircon in a magmatic liquid (see
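The age equation underlying the 206Pb/238U chronometer is the textbook decay relation D/P = e^(λt) − 1, i.e. t = ln(1 + 206Pb*/238U)/λ238, and can be computed directly (this is the basic relation only, not the CA-ID-TIMS data-reduction chain):

```python
import math

LAMBDA_238U = 1.55125e-10  # 238U decay constant per year (Jaffey et al. 1971)

def pb206_u238_age(ratio):
    """Age in years from a radiogenic 206Pb*/238U atomic ratio."""
    return math.log(1.0 + ratio) / LAMBDA_238U

def ratio_at(t):
    """Inverse relation: the 206Pb*/238U ratio accumulated after t years
    in a closed system."""
    return math.exp(LAMBDA_238U * t) - 1.0
```

For Phanerozoic ages the ratio is small, so a relative uncertainty on the measured ratio maps almost linearly onto the date, which is why per-mil-level calibration of standards and tracer solutions matters so much.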
Algorithms and tools for system identification using prior knowledge
International Nuclear Information System (INIS)
Lindskog, P.
1994-01-01
One of the hardest problems in system identification is that of model structure selection. In this thesis two different kinds of a priori process knowledge are used to address this fundamental problem. Concentrating on linear model structures, the first prior taken advantage of is knowledge about the system's dominating time constants and resonance frequencies. The idea is to generalize FIR modelling by replacing the usual delay operator with discrete so-called Laguerre or Kautz filters. The generalization is such that the stability, the linear regression structure and the approximation ability of the FIR model structure are retained, whereas the prior is used to reduce the number of parameters needed to arrive at a reasonable model. Tailored and efficient system identification algorithms for these model structures are detailed in this work. The usefulness of the proposed methods is demonstrated through concrete simulation and application studies. The other approach is referred to as semi-physical modelling. The main idea is to use simple physical insight into the application, often in terms of a set of unstructured equations, to come up with suitable nonlinear transformations of the raw measurements, so as to allow for a good model structure. Semi-physical modelling is less "ambitious" than physical modelling in that no complete physical structure is sought, just combinations of inputs and outputs that can be subjected to more or less standard model structures, such as linear regressions. The suggested modelling procedure begins with a step where symbolic computations are employed to determine a suitable model structure, i.e. a set of regressors. We show how constructive methods from commutative and differential algebra can be applied to this. Subsequently, different numerical schemes for finding a subset of "good" regressors and for estimating the corresponding linear-in-the-parameters model are discussed. 107 refs, figs, tabs
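The Laguerre filter bank that generalizes the FIR delay line can be sketched as below. This shows one common discrete-time form (not the thesis's exact algorithms); the pole `a` is where the prior knowledge of the dominant time constant enters.

```python
import math

def laguerre_bank(u, a, order):
    """Outputs of a discrete Laguerre filter bank with pole a (|a| < 1).
    Stage 1 is the normalized low-pass sqrt(1-a^2)/(1 - a z^-1); each later
    stage applies the all-pass (z^-1 - a)/(1 - a z^-1), which preserves the
    orthonormality of the basis. The bank outputs replace the pure delays
    of an FIR model."""
    n = len(u)
    gain = math.sqrt(1.0 - a * a)
    first = [0.0] * n
    for t in range(n):
        first[t] = a * (first[t - 1] if t else 0.0) + gain * u[t]
    bank = [first]
    for _ in range(order - 1):
        prev, out = bank[-1], [0.0] * n
        for t in range(n):
            out[t] = (a * out[t - 1] if t else 0.0) - a * prev[t] + (prev[t - 1] if t else 0.0)
        bank.append(out)
    return bank
```

Stacking the bank outputs as regressors and least-squares fitting their weights then mirrors FIR estimation, but with far fewer parameters when the pole is placed near the known dominant time constant.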
A voting-based star identification algorithm utilizing local and global distribution
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with 2-pixel standard deviations of positional noise and 0.322 Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, the real sky test shows that the proposed algorithm performs well on real star images.
A Low Delay and Fast Converging Improved Proportionate Algorithm for Sparse System Identification
Directory of Open Access Journals (Sweden)
Benesty Jacob
2007-01-01
Full Text Available A sparse system identification algorithm for network echo cancellation is presented. This new approach exploits both the fast convergence of the improved proportionate normalized least mean square (IPNLMS) algorithm and the efficient implementation of the multidelay adaptive filtering (MDF) algorithm, inheriting the beneficial properties of both. The proposed IPMDF algorithm is evaluated using impulse responses with various degrees of sparseness. Simulation results are also presented for both speech and white Gaussian noise input sequences. It is shown that the IPMDF algorithm outperforms the MDF and IPNLMS algorithms for both sparse and dispersive echo path impulse responses. The computational complexity of the proposed algorithm is also discussed.
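The proportionate idea behind IPNLMS can be sketched briefly: each filter coefficient gets a step-size gain that grows with its current magnitude, so the few active taps of a sparse echo path adapt faster than in plain NLMS. The pure-Python sketch below covers the IPNLMS update only (the MDF frequency-domain implementation is omitted); the filter length, step size and the toy echo path are illustrative assumptions.

```python
import random

def ipnlms_identify(x, d, length, mu=0.5, alpha=0.0, eps=1e-6, delta=1e-4):
    """Identify an FIR echo path with the IPNLMS adaptive filter (sketch)."""
    w = [0.0] * length
    buf = [0.0] * length
    for n in range(len(x)):
        buf = [x[n]] + buf[:-1]          # regressor [x[n], x[n-1], ...]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = d[n] - y
        l1 = sum(abs(wi) for wi in w)
        # Proportionate gains: part uniform, part proportional to |w_k|
        g = [(1 - alpha) / (2 * length) + (1 + alpha) * abs(wi) / (2 * l1 + eps)
             for wi in w]
        norm = sum(gi * xi * xi for gi, xi in zip(g, buf)) + delta
        w = [wi + mu * e * gi * xi / norm for wi, gi, xi in zip(w, g, buf)]
    return w

random.seed(0)
true_w = [0.0] * 32
true_w[3], true_w[10] = 1.0, -0.5      # sparse echo path (assumed)
x = [random.gauss(0, 1) for _ in range(2000)]
d = [sum(true_w[k] * (x[n - k] if n >= k else 0.0) for k in range(32))
     for n in range(2000)]
w_hat = ipnlms_identify(x, d, 32)
```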
Research on Palmprint Identification Method Based on Quantum Algorithms
Directory of Open Access Journals (Sweden)
Hui Li
2014-01-01
Full Text Available Quantum image recognition is a technology that uses quantum algorithms to process image information. It can achieve better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that quantum filtering achieves a better result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs on the order of N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.
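The square-root-of-N speed-up quoted above is Grover's quadratic speed-up, and it can be illustrated with a small classical simulation of the amplitude-amplification loop (oracle sign flip followed by inversion about the mean); the state size and target index below are arbitrary choices for illustration.

```python
import math

def grover_search(n_items, target, iterations=None):
    """Classically simulate Grover amplitude amplification over n_items states."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    if iterations is None:
        # optimal iteration count ~ (pi/4) * sqrt(N)
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amp[target] = -amp[target]               # oracle: flip target's sign
        mean = sum(amp) / n_items                # diffusion: invert about mean
        amp = [2 * mean - a for a in amp]
    return [a * a for a in amp]                  # measurement probabilities

probs = grover_search(64, target=17)             # ~6 iterations for N = 64
```

After roughly (π/4)√N iterations nearly all probability mass sits on the target item, whereas a classical scan would need on the order of N lookups.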
Radioactivity nuclide identification based on BP and LM algorithm neural network
International Nuclear Information System (INIS)
Wang Jihong; Sun Jian; Wang Lianghou
2012-01-01
The paper presents a method for identifying radioactive nuclides based on BP and LM algorithm neural networks. The method is then compared with the FR algorithm. Matlab simulation results show that radioactive nuclide identification based on the BP and LM algorithm neural network is superior to the FR algorithm. With better performance and higher accuracy, it is the preferable choice. (authors)
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-07-07
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
Directory of Open Access Journals (Sweden)
Wei Huang
2013-01-01
Full Text Available We introduce a new category of fuzzy inference systems with the aid of a multiobjective opposition-based space search algorithm (MOSSA. The proposed MOSSA is essentially a multiobjective space search algorithm improved by using an opposition-based learning that employs a so-called opposite numbers mechanism to speed up the convergence of the optimization algorithm. In the identification of fuzzy inference system, the MOSSA is exploited to carry out the parametric identification of the fuzzy model as well as to realize its structural identification. Experimental results demonstrate the effectiveness of the proposed fuzzy models.
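The opposite-numbers mechanism mentioned above is simple to sketch: for a candidate x drawn from [lo, hi], its opposite is lo + hi - x, and evaluating both and keeping the fitter of each pair tends to speed up early convergence. A minimal sketch, with an assumed sphere-type fitness and asymmetric bounds (symmetric bounds would make a point and its opposite equally fit here):

```python
import random

def opposition_init(pop_size, lo, hi, fitness):
    """Opposition-based initialization: for each random point x, also evaluate
    its opposite x' = lo + hi - x and keep the fitter of the pair."""
    pop = []
    for _ in range(pop_size):
        x = [random.uniform(l, h) for l, h in zip(lo, hi)]
        opp = [l + h - xi for l, h, xi in zip(lo, hi, x)]
        pop.append(min(x, opp, key=fitness))
    return pop

random.seed(1)
sphere = lambda v: sum(c * c for c in v)          # assumed toy fitness
pop = opposition_init(10, [0.0, 0.0], [10.0, 10.0], sphere)
```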
Biofilms and Wounds: An Identification Algorithm and Potential Treatment Options
Percival, Steven L.; Vuotto, Claudia; Donelli, Gianfranco; Lipsky, Benjamin A.
2015-01-01
Significance: The presence of a “pathogenic” or “highly virulent” biofilm is a fundamental risk factor that prevents a chronic wound from healing and increases the risk of the wound becoming clinically infected. There is presently no unequivocal gold standard method available for clinicians to confirm the presence of biofilms in a wound. Thus, to help support clinician practice, we devised an algorithm intended to demonstrate evidence of the presence of a biofilm in a wound to assist with wound management. Recent Advances: A variety of histological and microscopic methods applied to tissue biopsies are currently the most informative techniques available for demonstrating the presence of generic (not classified as pathogenic or commensal) biofilms and the effect they are having in promoting inflammation and downregulating cellular functions. Critical Issues: Even as we rely on microscopic techniques to visualize biofilms, they are entities which are patchy and dispersed rather than confluent, particularly on biotic surfaces. Consequently, detection of biofilms by microscopic techniques alone can lead to frequent false-negative results. Furthermore, visual identification using the naked eye of a pathogenic biofilm on a macroscopic level on the wound will not be possible, unlike with biofilms on abiotic surfaces. Future Direction: Lacking specific biomarkers to demonstrate microscopic, nonconfluent, virulent biofilms in wounds, the present focus on biofilm research should be placed on changing clinical practice. This is best done by utilizing an anti-biofilm toolbox approach, rather than speculating on unscientific approaches to identifying biofilms, with or without staining, in wounds with the naked eye. The approach to controlling biofilm should include initial wound cleansing, periodic debridement, followed by the application of appropriate antimicrobial wound dressings. This approach appears to be effective in removing pathogenic biofilms. PMID:26155381
DEFF Research Database (Denmark)
Rivera, Tiffany; Storey, Michael
The foundation of the EARTHTIME/GTSnext initiative seeks to construct an internally consistent geologic timescale based on astronomical and radio-isotopic geochronology. American west tephras offer a prime opportunity to integrate these two independent timescales with the geomagnetic timescale. Using an astronomically calibrated age for the monitor mineral Fish Canyon sanidine (FCs; 28.201 ± 0.046 Ma, Kuiper, et al., 2008), ages of Pleistocene geomagnetic polarity events are reexamined. Of particular interest, the Quaternary mineral dating standard Alder Creek sanidine (ACs) is the type locality ... calibration. Although this geomagnetic event is not part of the most recent geologic timescale, refined ages on short-lived excursions could hold importance for understanding time scales for the wavering nature of Earth's magnetic field. We propose a new 40Ar/39Ar age for the Quaternary mineral dating standard ...
Bouc–Wen hysteresis model identification using Modified Firefly Algorithm
International Nuclear Information System (INIS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-01-01
The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least amount of error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods to find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
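The firefly iteration itself is compact: dimmer fireflies move toward brighter ones with an attractiveness that decays with distance, plus a random step. In the sketch below, a decaying randomization parameter stands in for the paper's dynamic process control parameters, and a simple quadratic error surface stands in for the Bouc–Wen fitting error; all settings are illustrative assumptions.

```python
import math
import random

def firefly_minimize(f, dim, lo, hi, n=20, iters=100,
                     beta0=1.0, gamma=0.01, alpha=0.3, alpha_decay=0.95):
    """Firefly algorithm with a dynamically decaying randomization step."""
    random.seed(42)
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:            # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
        alpha *= alpha_decay                     # dynamic control parameter
    best = min(range(n), key=lambda i: cost[i])
    return pop[best], cost[best]

# Stand-in objective: squared error of a 2-parameter model fit
x_best, err = firefly_minimize(lambda v: sum(c * c for c in v), 2, -5.0, 5.0)
```

In the paper's setting, `f` would be the squared error between measured data points and the Bouc–Wen model response for a candidate parameter vector.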
Bouc–Wen hysteresis model identification using Modified Firefly Algorithm
Energy Technology Data Exchange (ETDEWEB)
Zaman, Mohammad Asif, E-mail: zaman@stanford.edu [Department of Electrical Engineering, Stanford University (United States); Sikder, Urmita [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (United States)
2015-12-01
The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least amount of error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods to find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
Energy Technology Data Exchange (ETDEWEB)
Chaoshun Li; Jianzhong Zhou [College of Hydroelectric Digitization Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)
2011-01-15
Parameter identification of hydraulic turbine governing system (HTGS) is crucial in precise modeling of hydropower plant and provides support for the analysis of stability of power system. In this paper, a newly developed optimization algorithm, called gravitational search algorithm (GSA), is introduced and applied in parameter identification of HTGS, and the GSA is improved by combination of the search strategy of particle swarm optimization. Furthermore, a new weighted objective function is proposed in the identification frame. The improved gravitational search algorithm (IGSA), together with genetic algorithm, particle swarm optimization and GSA, is employed in parameter identification experiments and the procedure is validated by comparing experimental and simulated results. Consequently, IGSA is shown to locate more precise parameter values than the compared methods with higher efficiency. (author)
International Nuclear Information System (INIS)
Li Chaoshun; Zhou Jianzhong
2011-01-01
Parameter identification of hydraulic turbine governing system (HTGS) is crucial in precise modeling of hydropower plant and provides support for the analysis of stability of power system. In this paper, a newly developed optimization algorithm, called gravitational search algorithm (GSA), is introduced and applied in parameter identification of HTGS, and the GSA is improved by combination of the search strategy of particle swarm optimization. Furthermore, a new weighted objective function is proposed in the identification frame. The improved gravitational search algorithm (IGSA), together with genetic algorithm, particle swarm optimization and GSA, is employed in parameter identification experiments and the procedure is validated by comparing experimental and simulated results. Consequently, IGSA is shown to locate more precise parameter values than the compared methods with higher efficiency.
Particle Identification algorithm for the CLIC ILD and CLIC SiD detectors
Nardulli, J
2011-01-01
This note describes the algorithm presently used to determine the particle identification performance for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared in the CLIC Conceptual Design Report.
National Research Council Canada - National Science Library
Rohrbough, James G; Breci, Linda; Merchant, Nirav; Miller, Susan; Haynes, Paul A
2005-01-01
.... One such technique, known as the Multi-Dimensional Protein Identification Technique, or MudPIT, involves the use of computer search algorithms that automate the process of identifying proteins...
Particle mis-identification rate algorithm for the CLIC ILD and CLIC SiD detectors
Nardulli, J
2011-01-01
This note describes the algorithm presently used to determine the particle mis-identification rate and gives results for single particles for the CLIC ILD and CLIC SiD detector concepts as prepared for the CLIC Conceptual Design Report.
International Nuclear Information System (INIS)
Chen, Zhihuan; Yuan, Xiaohui; Tian, Hao; Ji, Bin
2014-01-01
Highlights: • We propose an improved gravitational search algorithm (IGSA). • IGSA is applied to parameter identification of the water turbine regulation system (WTRS). • WTRS is modeled by considering the impact of turbine speed on torque and water flow. • A weighted objective function strategy is applied to parameter identification of WTRS. - Abstract: Parameter identification of the water turbine regulation system (WTRS) is crucial in precise modeling of the hydropower generating unit (HGU) and provides support for the adaptive control and stability analysis of the power system. In this paper, an improved gravitational search algorithm (IGSA) is proposed and applied to solve the identification problem for the WTRS system under load and no-load running conditions. This new algorithm, which is based on the standard gravitational search algorithm (GSA), accelerates convergence speed by combining the search strategy of particle swarm optimization with an elastic-ball method. Chaotic mutation, devised to step out of local optima with a certain probability, is also added to the algorithm to avoid premature convergence. Furthermore, a new kind of model associated with engineering practice is built and analyzed in the simulation tests. An illustrative example of parameter identification of the WTRS is used to verify the feasibility and effectiveness of the proposed IGSA, as compared with standard GSA and particle swarm optimization in terms of parameter identification accuracy and convergence speed. The simulation results show that IGSA performs best on all identification indicators.
DNA evolutionary algorithm (DNAEA) for source term identification in convection-diffusion equation
International Nuclear Information System (INIS)
Yang, X-H; Hu, X-X; Shen, Z-Y
2008-01-01
The source identification problem is recast as an optimization problem in this paper. This is a complicated nonlinear optimization problem that is largely intractable with traditional optimization methods, so a DNA evolutionary algorithm (DNAEA) is presented to solve it. In this algorithm, an initial population is generated by a chaos algorithm. As the search range shrinks, DNAEA gradually converges to an optimal result by retaining excellent individuals. The position and intensity of the pollution source are found accurately with DNAEA. Compared with a Gray-coded genetic algorithm and a pure random search algorithm, DNAEA has faster convergence and higher calculation precision.
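The chaos-generated initial population mentioned above is commonly built from the logistic map, whose iterates spread over (0, 1) without the clustering a poorly seeded RNG can show. A minimal sketch (the specific map, starting point and rescaling are assumptions, since the abstract does not specify its chaos algorithm):

```python
def chaotic_population(pop_size, dim, lo, hi, x0=0.7):
    """Generate an initial population by iterating the logistic map
    x <- 4x(1-x) and rescaling each iterate to the search range."""
    x = x0
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)         # fully chaotic logistic map
            ind.append(lo + (hi - lo) * x)  # rescale to [lo, hi]
        pop.append(ind)
    return pop

pop = chaotic_population(5, 3, -10.0, 10.0)
```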
Enhanced backpropagation training algorithm for transient event identification
International Nuclear Information System (INIS)
Vitela, J.; Reifman, J.
1993-01-01
We present an enhanced backpropagation (BP) algorithm for training feedforward neural networks that avoids the undesirable premature saturation of the network output nodes and accelerates the training process even in cases where premature saturation is not present. When the standard BP algorithm is applied to train patterns of nuclear power plant (NPP) transients, the network output nodes often become prematurely saturated causing the already slow rate of convergence of the algorithm to become even slower. When premature saturation occurs, the gradient of the prediction error becomes very small, although the prediction error itself is still large, yielding negligible weight updates and hence no significant decrease in the prediction error until the eventual recovery of the output nodes from saturation. By defining the onset of premature saturation and systematically modifying the gradient of the prediction error at saturation, we developed an enhanced BP algorithm that is compared with the standard BP algorithm in training a network to identify NPP transients
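The saturation problem described above can be sketched in one function: the standard output-node delta contains the sigmoid-derivative factor out*(1-out), which vanishes when the node saturates even though the error is still large. Bounding that factor away from zero restores a useful gradient; the floor value `eps` here is an illustrative assumption in the spirit of the enhanced BP algorithm, not the paper's exact modification.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def output_delta(target, out, eps=0.1):
    """Standard vs. saturation-corrected output-node delta for backprop."""
    # Standard delta: error times sigmoid derivative, ~0 at saturation
    standard = (target - out) * out * (1.0 - out)
    # Corrected delta: bound the derivative factor away from zero
    corrected = (target - out) * max(out * (1.0 - out), eps)
    return standard, corrected

# A prematurely saturated node: output ~1 while the target is 0
out = sigmoid(8.0)                    # ~0.9997
std, fix = output_delta(0.0, out)
```

The corrected delta keeps the weight updates large enough to pull the node out of saturation instead of waiting for an eventual slow recovery.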
Identification of chaotic systems by neural network with hybrid learning algorithm
International Nuclear Information System (INIS)
Pan, S.-T.; Lai, C.-C.
2008-01-01
Based on the genetic algorithm (GA) and steepest descent method (SDM), this paper proposes a hybrid algorithm for the learning of neural networks to identify chaotic systems, namely the logistic map and the Duffing equation. Different identification schemes are used for the logistic map and the Duffing equation, respectively. Simulation results show that our hybrid algorithm is more efficient than other methods.
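The hybrid idea (global search followed by local descent) can be illustrated on the logistic map, whose one-step prediction error is quadratic in the parameter r: a coarse random search stands in for the GA stage, and a single exact descent step then lands on the least-squares minimizer. The trajectory length and seed are assumptions; this is a sketch of the two-stage principle, not the paper's neural-network scheme.

```python
import random

def identify_logistic_r(data, seed=3):
    """Estimate r in x[n+1] = r*x[n]*(1-x[n]) from a trajectory."""
    u = [x * (1.0 - x) for x in data[:-1]]      # regressor x(1-x)
    y = data[1:]
    loss = lambda r: sum((yi - r * ui) ** 2 for yi, ui in zip(y, u))
    # Stage 1 (GA stand-in): coarse random search over the parameter range
    random.seed(seed)
    r0 = min((random.uniform(0.0, 4.0) for _ in range(20)), key=loss)
    # Stage 2 (SDM stand-in): one descent step with the exact step size;
    # the loss is quadratic in r, so this lands on the minimizer
    su2 = sum(ui * ui for ui in u)
    grad = -2.0 * sum((yi - r0 * ui) * ui for yi, ui in zip(y, u))
    return r0 - grad / (2.0 * su2)

# Generate a chaotic trajectory with the true parameter r = 3.7
xs = [0.2]
for _ in range(200):
    xs.append(3.7 * xs[-1] * (1.0 - xs[-1]))
r_hat = identify_logistic_r(xs)
```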
International Nuclear Information System (INIS)
Hinestroza Gutierrez, D.
2006-08-01
In this work a new and promising algorithm based on the minimization of a special functional that depends on two regularization parameters is considered for the identification of the heat conduction coefficient in the parabolic equation. This algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) but the other is associated with the calculated solution. (author)
International Nuclear Information System (INIS)
Hinestroza Gutierrez, D.
2006-12-01
In this work a new and promising algorithm based on the minimization of a special functional that depends on two regularization parameters is considered for the identification of the heat conduction coefficient in the parabolic equation. This algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) but the other is associated with the calculated solution. (author)
An efficient attack identification and risk prediction algorithm for ...
African Journals Online (AJOL)
The social media is highly utilized cloud for storing huge amount of data. ... However, the adversarial scenario did not design properly to maintain the privacy of the ... Information Retrieval, Security Evaluation, Efficient Attack Identification and ...
Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm
International Nuclear Information System (INIS)
Canedo Medeiros, Jose Antonio Carlos; Schirru, Roberto
2008-01-01
In order to help nuclear power plant operators reduce their cognitive load and increase their available time to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast and correct actions in due time. In the design of classification systems for the identification of nuclear power plant transients, several artificial intelligence techniques, involving expert systems, neuro-fuzzy systems and genetic algorithms, have been used. In this work we explore the ability of the Particle Swarm Optimization (PSO) algorithm as a tool for optimizing a distance-based discrimination transient classification method, giving also an innovative solution for searching the best set of prototypes for the identification of transients. The Particle Swarm Optimization algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem. Compared to similar methods found in the literature, the PSO has shown better results.
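A minimal global-best PSO loop is sketched below; the quadratic test objective is a stand-in for the distance-based classification error the paper actually optimizes (where each particle would encode a set of prototype vectors), and all hyperparameters are illustrative assumptions.

```python
import random

def pso_minimize(f, dim, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm optimization (sketch)."""
    random.seed(5)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                   # personal bests
    pcost = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(lo, min(hi, pos[i][d] + vel[i][d]))
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

best, err = pso_minimize(lambda v: sum(c * c for c in v), 4, -5.0, 5.0)
```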
Directory of Open Access Journals (Sweden)
Ignacio Santamaría
2008-04-01
Full Text Available This paper treats the identification of nonlinear systems that consist of a cascade of a linear channel and a nonlinearity, such as the well-known Wiener and Hammerstein systems. In particular, we follow a supervised identification approach that simultaneously identifies both parts of the nonlinear system. Given the correct restrictions on the identification problem, we show how kernel canonical correlation analysis (KCCA) emerges as the logical solution to this problem. We then extend the proposed identification algorithm to an adaptive version allowing it to deal with time-varying systems. In order to avoid overfitting problems, we discuss and compare three possible regularization techniques for both the batch and the adaptive versions of the proposed algorithm. Simulations are included to demonstrate the effectiveness of the presented algorithm.
Identification of nuclear power plant transients using the Particle Swarm Optimization algorithm
Energy Technology Data Exchange (ETDEWEB)
Canedo Medeiros, Jose Antonio Carlos [Universidade Federal do Rio de Janeiro, PEN/COPPE, UFRJ, Ilha do Fundao s/n, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: canedo@lmp.ufrj.br; Schirru, Roberto [Universidade Federal do Rio de Janeiro, PEN/COPPE, UFRJ, Ilha do Fundao s/n, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: schirru@lmp.ufrj.br
2008-04-15
In order to help nuclear power plant operators reduce their cognitive load and increase their available time to keep the plant operating in a safe condition, transient identification systems have been devised to help operators identify possible plant transients and take fast and correct actions in due time. In the design of classification systems for the identification of nuclear power plant transients, several artificial intelligence techniques, involving expert systems, neuro-fuzzy systems and genetic algorithms, have been used. In this work we explore the ability of the Particle Swarm Optimization (PSO) algorithm as a tool for optimizing a distance-based discrimination transient classification method, giving also an innovative solution for searching the best set of prototypes for the identification of transients. The Particle Swarm Optimization algorithm was successfully applied to the optimization of a nuclear power plant transient identification problem. Compared to similar methods found in the literature, the PSO has shown better results.
Word-length algorithm for language identification of under-resourced languages
Directory of Open Access Journals (Sweden)
Ali Selamat
2016-10-01
Full Text Available Language identification is widely used in machine learning, text mining, information retrieval, and speech processing. Available techniques for solving the problem of language identification require large amounts of training text that are not available for under-resourced languages, which form the bulk of the world's languages. The primary objective of this study is to propose a lexicon-based algorithm which is able to perform language identification using minimal training data. Because language identification is often the first step in many natural language processing tasks, it is necessary to explore techniques that will perform language identification in the shortest possible time. Hence, the second objective of this research is to study the effect of the proposed algorithm on the run-time performance of language identification. Precision, recall, and F1 measures were used to determine the effectiveness of the proposed word-length algorithm using datasets drawn from the Universal Declaration of Human Rights in 15 languages. The experimental results show good accuracy in language identification at the document level and at the sentence level based on the available dataset. The improved algorithm also showed significant improvement in run-time performance compared with the spelling-checker approach.
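The word-length idea can be sketched with length histograms: build a word-length frequency profile per language from minimal training text and classify new text by nearest profile. The toy corpora below are assumptions for illustration only; they are far smaller than anything usable in practice, and the real algorithm combines this signal with a lexicon.

```python
from collections import Counter

def length_profile(text, max_len=12):
    """Relative frequency of word lengths 1..max_len in a text."""
    lengths = [min(len(w), max_len) for w in text.split()]
    counts = Counter(lengths)
    total = len(lengths) or 1
    return [counts.get(k, 0) / total for k in range(1, max_len + 1)]

def identify_language(text, profiles):
    """Pick the language whose word-length profile is closest (L1 distance)."""
    p = length_profile(text)
    dist = lambda q: sum(abs(a - b) for a, b in zip(p, q))
    return min(profiles, key=lambda lang: dist(profiles[lang]))

# Toy training texts (assumed corpora, chosen to contrast word lengths)
profiles = {
    "english": length_profile("the cat sat on the mat and the dog ran off"),
    "german":  length_profile("Zusammenarbeit Geschwindigkeit Verantwortung "
                              "Wissenschaft Entwicklung Gesellschaft"),
}
lang = identify_language("a big red fox ran to the den", profiles)
```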
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in the CSTR process, where about 400 data points are used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
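The recursive least squares core that ERLS extends can be sketched for a generic linear-in-the-parameters model; the polynomial regressors, forgetting factor and noise-free data below are illustrative assumptions, not the paper's exact Wiener formulation with the estimated intermediate signal.

```python
import random

def rls_identify(phi, y, n_params, lam=0.99, p0=1000.0):
    """Recursive least squares for y[n] = theta . phi[n], forgetting factor lam."""
    theta = [0.0] * n_params
    # P is the (scaled) inverse information matrix, initialized large
    P = [[p0 if i == j else 0.0 for j in range(n_params)]
         for i in range(n_params)]
    for x, yn in zip(phi, y):
        Px = [sum(P[i][j] * x[j] for j in range(n_params))
              for i in range(n_params)]
        denom = lam + sum(xi * pxi for xi, pxi in zip(x, Px))
        k = [pxi / denom for pxi in Px]                  # gain vector
        err = yn - sum(t * xi for t, xi in zip(theta, x))
        theta = [t + ki * err for t, ki in zip(theta, k)]
        # P <- (P - k x^T P) / lam  (P stays symmetric)
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n_params)]
             for i in range(n_params)]
    return theta

# Noise-free toy model with polynomial regressors: y = 2.0*u - 0.5*u^2
random.seed(11)
us = [random.uniform(-1, 1) for _ in range(300)]
phi = [[u, u * u] for u in us]
y = [2.0 * u - 0.5 * u * u for u in us]
theta = rls_identify(phi, y, 2)
```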
An Autonomous Star Identification Algorithm Based on the Directed Circularity Pattern
Directory of Open Access Journals (Sweden)
J. Xie
2012-07-01
Full Text Available The accuracy of the angular distance may decrease due to many factors, such as stellar camera parameters that are not calibrated on-orbit or low location accuracy of the star image points, which can cause low success rates of star identification. A robust directed circularity pattern algorithm is proposed in this paper, developed on the basis of the matching probability algorithm. The improved algorithm retains the matching probability strategy to identify the master star, and constructs a directed circularity pattern with the adjacent stars for unitary matching. The candidate matching group with the longest chain is selected as the final result. Simulation experiments indicate that the improved algorithm has higher identification success rates and reliability than the original algorithm. Experiments with real data are used to verify it.
Neuro-fuzzy algorithm for neutronic power identification of TRIGA Mark III reactor
International Nuclear Information System (INIS)
Rojas R, E.; Benitez R, J. S.; Segovia de los Rios, J. A.; Rivero G, T.
2009-10-01
In this work the results of the design and implementation of an algorithm based on fuzzy logic systems and neural networks as a method of neutronic power identification of the TRIGA Mark III reactor are presented. This algorithm uses the point kinetics equations as a generator of training data; a cost function and a learning stage based on the descending gradient algorithm allow the parameters of the membership functions of a fuzzy system to be optimized. Also, a series of criteria are established as part of the initial conditions of the training algorithm. According to the simulations carried out, these criteria show quick convergence of the estimated neutronic power from the first iterations. (Author)
Lost-in-Space Star Identification Using Planar Triangle Principal Component Analysis Algorithm
Directory of Open Access Journals (Sweden)
Fuqiang Zhou
2015-01-01
Full Text Available It is a challenging task for a star sensor to implement star identification and determine the attitude of a spacecraft in the lost-in-space mode. Several algorithms based on the triangle method have been proposed for star identification in this mode. However, these methods suffer from long run times and large guide star catalog memory sizes, and their star identification performance requires improvement. To address these problems, a star identification algorithm using planar triangle principal component analysis is presented here. A star pattern is generated based on the planar triangle created by stars within the field of view of a star sensor and the projection of the triangle. Since a projection can determine an index for a unique triangle in the catalog, the adoption of the k-vector range search technique makes this algorithm very fast. In addition, a sharing star validation method is constructed to verify the identification results. Simulation results show that the proposed algorithm is more robust than the planar triangle and P-vector algorithms under the same conditions.
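The k-vector range search mentioned above can be sketched as follows: after sorting the catalog quantity (for example, inter-star angular distances or triangle projections), a straight line through the extremes plus a precomputed count table lets a query jump almost directly to the matching slice of the catalog instead of binary-searching it. The uniform toy catalog below is an assumption for illustration.

```python
import bisect
import math
import random

def build_kvector(values):
    """Precompute the k-vector over sorted values: k[i] counts elements at or
    below the straight line z(i) = m*i + q through the smallest and largest."""
    s = sorted(values)
    n = len(s)
    m = (s[-1] - s[0]) / (n - 1)
    q = s[0]
    k = [bisect.bisect_right(s, m * i + q) for i in range(n)]
    return s, m, q, k

def kvector_range(s, m, q, k, a, b):
    """Return all values in [a, b]: invert the line to bracket a small
    candidate slice, then trim it exactly."""
    n = len(s)
    ja = max(0, min(n - 1, int(math.floor((a - q) / m))))  # z(ja) <= a
    jb = max(0, min(n - 1, int(math.ceil((b - q) / m))))   # z(jb) >= b
    lo, hi = max(0, k[ja] - 1), k[jb]
    return [v for v in s[lo:hi] if a <= v <= b]

random.seed(9)
angles = [random.uniform(0.0, 90.0) for _ in range(1000)]  # toy catalog
s, m, q, k = build_kvector(angles)
hits = kvector_range(s, m, q, k, 30.0, 30.5)
```

The final filter only scans the short candidate slice, which is what makes the lookup effectively independent of the catalog size in the well-conditioned case.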
A simple algorithm for the identification of clinical COPD phenotypes
Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim; Piquet, Jacques; ter Riet, Gerben; Garcia-Aymerich, Judith; Cosio, Borja; Bakke, Per; Puhan, Milo A.; Langhammer, Arnulf; Alfageme, Inmaculada; Almagro, Pere; Ancochea, Julio; Celli, Bartolome R.; Casanova, Ciro; de-Torres, Juan P.; Decramer, Marc; Echazarreta, Andrés; Esteban, Cristobal; Gomez Punter, Rosa Mar; Han, MeiLan K.; Johannessen, Ane; Kaiser, Bernhard; Lamprecht, Bernd; Lange, Peter; Leivseth, Linda; Marin, Jose M.; Martin, Francis; Martinez-Camblor, Pablo; Miravitlles, Marc; Oga, Toru; Sofia Ramírez, Ana; Sin, Don D.; Sobradillo, Patricia; Soler-Cataluña, Juan J.; Turner, Alice M.; Verdu Rivera, Francisco Javier; Soriano, Joan B.; Roche, Nicolas
2017-01-01
This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis resulting in the identification of
Tau Reconstruction, Identification Algorithms and Performance in ATLAS
DEFF Research Database (Denmark)
Simonyan, M.
2013-01-01
Identification of hadronically decaying tau leptons is achieved by using detailed information from tracking and calorimeter detector components. Variables describing the properties of calorimeter energy deposits and track reconstruction within tau candidates are combined in multi-variate discriminants ... by investigating single hadron calorimeter response, as well as kinematic distributions in Z→ττ events.
Mode Identification of Guided Ultrasonic Wave using Time- Frequency Algorithm
International Nuclear Information System (INIS)
Yoon, Byung Sik; Yang, Seung Han; Cho, Yong Sang; Kim, Yong Sik; Lee, Hee Jong
2007-01-01
The ultrasonic guided waves are waves whose propagation characteristics depend on structural thickness and shape, such as those in plates, tubes, rods, and embedded layers. If the angle of incidence or the frequency of sound is adjusted properly, the reflected and refracted energy within the structure will constructively interfere, thereby launching the guided wave. Because these waves penetrate the entire thickness of the tube and propagate parallel to the surface, a large portion of the material can be examined from a single transducer location. Although the guided ultrasonic wave has these merits, various modes propagate through the entire thickness, so it is not known which mode is received, and most applications are limited by mode selection and mode identification. Mode identification is therefore a very important step for guided ultrasonic inspection applications. In this study, various time-frequency analysis methodologies are developed and compared as mode identification tools for guided ultrasonic signals. For this study, a high-power tone-burst ultrasonic system was set up for the generation and reception of guided waves, and artificial notches were fabricated on an aluminum plate for the mode identification experiment
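A minimal example of one such time-frequency tool, the short-time Fourier transform, applied to a synthetic two-tone signal (all parameters here are illustrative, not the tone-burst settings of the study): components arriving at different times and frequencies show up as separate ridges in the magnitude spectrogram, which is how dispersive guided-wave modes are told apart.

```python
import numpy as np

def spectrogram(x, fs, win=256, hop=64):
    """Magnitude STFT: rows = frequency bins, cols = time frames."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T, np.fft.rfftfreq(win, 1 / fs)

# Toy signal: a 50 kHz tone plus a 120 kHz component switching on at t = 1 ms.
fs = 1_000_000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 50e3 * t) + np.sin(2 * np.pi * 120e3 * t) * (t > 1e-3)
S, freqs = spectrogram(x, fs)
# Ridge locations in S over time indicate which components arrive when.
```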
Identification of partial blockages in pipelines using genetic algorithms
Indian Academy of Sciences (India)
A methodology to identify the partial blockages in a simple pipeline using genetic algorithms for non-harmonic flows is presented in this paper. A sinusoidal flow generated by the periodic on-and-off operation of a valve at the outlet is investigated in the time domain and it is observed that pressure variation at the valve is ...
Computer vision algorithm for diabetic foot injury identification and evaluation
Energy Technology Data Exchange (ETDEWEB)
Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R., E-mail: lsolis@uaz.edu.mx [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)
2016-10-15
Diabetic foot is one of the most devastating consequences of diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given that the existing tests and laboratories designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms: the specialist completes a questionnaire based solely on observation and an invasive wound measurement, and issues a diagnosis from it. In this sense, the diagnosis relies only on the criteria and experience of the specialist. For some variables, such as the area of the lesions or their location, this dependency is not acceptable. Bio-engineering currently plays a key role in the diagnosis of different chronic degenerative diseases, and a timely diagnosis has proven to be the best tool against diabetic foot, since clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques that optimizes the evaluation of diabetic foot lesions. Using advanced techniques for object segmentation and adjusting the sensibility parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and their location in a non-invasive way. (Author)
Performance study of LMS based adaptive algorithms for unknown system identification
International Nuclear Information System (INIS)
Javed, Shazia; Ahmad, Noor Atinah
2014-01-01
Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
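For readers unfamiliar with the ASI setup, here is a minimal sketch of one of the compared filters, the NLMS, identifying a short FIR "unknown system" from noisy input/output data. The filter length, step size, and noise level are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])        # unknown FIR system to identify
N, M = 2000, len(h_true)
x = rng.standard_normal(N)                      # random input signal
d = np.convolve(x, h_true)[:N] + 1e-3 * rng.standard_normal(N)  # noisy output

def nlms(x, d, M, mu=0.5, eps=1e-8):
    """Normalized LMS: the step is scaled by the regressor energy."""
    w = np.zeros(M)
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]            # regressor, most recent first
        e = d[n] - w @ u                        # a-priori error
        w += mu * e * u / (eps + u @ u)         # normalized update
    return w

w = nlms(x, d, M)
print(np.round(w, 3))   # close to h_true
```

The normalization makes the convergence speed largely independent of the input power, which is one of the properties the comparative study examines.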
Directory of Open Access Journals (Sweden)
Li Ding
2015-01-01
Full Text Available This paper develops a chaotic artificial bee colony algorithm (CABC) for the system identification of a small-scale unmanned helicopter state-space model in hover condition. To avoid the premature convergence of the traditional artificial bee colony algorithm (ABC), which can get stuck in a local optimum and fail to reach the global optimum, a novel chaotic operator with the characteristics of ergodicity and irregularity is introduced to enhance its performance. With input-output data collected from actual flight experiments, the identification results showed the superiority of CABC over the ABC and the genetic algorithm (GA). Simulations are presented to demonstrate the effectiveness of the proposed algorithm and the accuracy of the identified helicopter model.
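The chaotic operator in such CABC variants is typically built from a simple chaotic map. A common choice, assumed here purely for illustration (the paper's exact operator may differ), is the logistic map at r = 4, whose iterates are ergodic over (0, 1) and can replace uniform random numbers when re-initializing an exhausted food source:

```python
def logistic_sequence(x0, n, r=4.0):
    """Iterate the logistic map x <- r*x*(1-x); chaotic and ergodic for r = 4."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_scout(lo, hi, x0=0.7, steps=10):
    """Re-seed an abandoned solution at a chaotic point inside [lo, hi],
    instead of the uniform-random point used by the standard ABC scout."""
    c = logistic_sequence(x0, steps)[-1]
    return lo + c * (hi - lo)
```

Because the iterates never repeat periodically, successive re-seeds cover the search interval more evenly than a short run of pseudo-random draws, which is the intuition behind the claimed escape from local optima.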
A gradient based algorithm to solve inverse plane bimodular problems of identification
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity analysis based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Identification of chaotic systems with hidden variables (modified Bock's algorithm)
International Nuclear Information System (INIS)
Bezruchko, Boris P.; Smirnov, Dmitry A.; Sysoev, Ilya V.
2006-01-01
We address the problem of estimating parameters of chaotic dynamical systems from a time series in a situation when some of the state variables are not observed and/or the data are very noisy. Using specially developed quantitative criteria, we compare the performance of the original multiple shooting approach (Bock's algorithm) and its modified version. The latter is shown to be significantly superior for long chaotic time series. In particular, it yields accurate estimates from much worse starting guesses for the estimated parameters.
ALGORITHMS FOR IDENTIFICATION OF CUES WITH AUTHORS’ TEXT INSERTIONS IN BELARUSIAN ELECTRONIC BOOKS
Directory of Open Access Journals (Sweden)
Y. S. Hetsevich
2014-01-01
Full Text Available The main stages of algorithms for characters’ gender identification in Belarusian electronic texts are described. The algorithms are based on punctuation marking and the detection of gender indicators, such as past tense verbs and nouns with gender attributes. For the indicators, special dictionaries are developed, making the algorithms more language-independent and allowing the creation of dictionaries for cognate languages. Testing showed the following results: the mean harmonic quantity for masculine gender detection makes up 92.2%, and for feminine gender detection 90.4%.
E-Waste recycling: new algorithm for hyper spectral identification
International Nuclear Information System (INIS)
Picon-Ruiz, A.; Echazarra-Higuet, J.; Bereciartua-Perez, A.
2010-01-01
Waste Electrical and Electronic Equipment (WEEE) constitutes 4% of the municipal waste in Europe, and increases by 16-28% every five years. Nowadays, Europe produces 6.5 million tonnes of WEEE per year, and currently 90% goes to landfill. WEEE is growing 3 times faster than municipal waste, and this figure is expected to increase to 12 million tonnes by 2015. The aim of this paper is to apply a new technology for separating non-ferrous metal waste from WEEE, by identifying materials through multi- and hyper-spectral imaging and inserting the technology into a recycling plant. This technology will overcome the shortcomings of current methods, which are unable to separate valuable materials that are very similar in colour, size or shape. For this reason, it is necessary to develop new algorithms able to distinguish among these materials and to meet the timing requirements. (Author). 22 refs.
Cloud identification using genetic algorithms and massively parallel computation
Buckles, Bill P.; Petry, Frederick E.
1996-01-01
As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for the Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated, so these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
International Nuclear Information System (INIS)
Mirsaidov, U.; Kamalov, D.
2010-01-01
One of the peaceful uses of nuclear energy is the production of electrical energy using the decay heat of radioactive strontium in radio-isotope thermo-electrical generators (RITEGs) to supply energy to lighthouses, radio-lighthouses and radio-meteorological stations. They are installed in remote territories far from people’s dwellings and do not require the presence of personnel to maintain them. The Republic of Tajikistan, like other republics of the ex-Soviet Union, used RITEGs as sources for autonomous hydro- and meteorological navigational equipment, which was placed in hard-to-reach mountainous regions. In the ex-Soviet Union, the RITEGs were under constant surveillance. But after the breakup of the Soviet Union, hundreds of these small devices equipped with powerful sources of radiation remained out of control. The radioactive substance contained in them could easily be used as a source of radiation dispersion: by applying strontium-90 as material for a bomb, one can disperse this radioactive substance after exploding the bomb, and having exploded one such “dirty bomb” a terrorist could contaminate several cities with radioactive materials. It was determined that there are around 1 000 RITEGs on the territory of the Russian Federation and approximately 30 on the territory of other states. It is presumed that approximately 1500 RITEGs were manufactured in the USSR. The exploitation period of all the RITEGs is around 10 years. At present, all the RITEGs which were in circulation have reached the end of their functional period and should be withdrawn from utilization. In Tajikistan, Tajikhydromet is the user of the RITEGs. The manufacturer of the RITEGs, according to the documentation, was the All-Russian Institute of Technological Physics and Automation in Moscow. The documents were sent to the plant-producer. According to unofficial sources, during the times of the Soviet Union 15
Idehara, Kenji; Yamagishi, Gaku; Yamashita, Kunihiko; Ito, Michio
2008-01-01
The murine local lymph node assay (LLNA) is an accepted and widely used method for assessing the skin-sensitizing potential of chemicals. Here, we describe a non-radioisotopic modified LLNA in which adenosine triphosphate (ATP) content is used as an endpoint instead of a radioisotope (RI); the method is termed LLNA modified by Daicel based on ATP content (LLNA-DA). Groups of female CBA/JNCrlj mice were treated topically on the dorsum of both ears with test chemicals or a vehicle control on days 1, 2, and 3; an additional fourth application was conducted on day 7. Pretreatment with 1% sodium lauryl sulfate solution was performed 1 h before each application. On day 8, the amount of ATP in the draining auricular lymph nodes was measured as an alternative endpoint by the luciferin-luciferase assay in terms of bioluminescence (relative light units, RLU). A stimulation index (SI) relative to the concurrent vehicle control was derived based on the RLU value, and an SI of 3 was set as the cut-off value. Using the LLNA-DA method, 31 chemicals were tested and the results were compared with those of other test methods. The accuracy of LLNA-DA vs LLNA, guinea pig tests, and human tests was 93% (28/30), 80% (20/25), and 79% (15/19), respectively. The estimated concentration 3 (EC3) value was calculated and compared with that of the original LLNA. It was found that the EC3 values obtained by LLNA-DA were almost equal to those obtained by the original LLNA. The SI value based on ATP content is similar to that of the original LLNA as a result of the modifications in the chemical treatment procedure, which contribute to improving the SI value. It is concluded that LLNA-DA is a promising non-RI alternative method for evaluating the skin-sensitizing potential of chemicals.
Coherency Identification of Generators Using a PAM Algorithm for Dynamic Reduction of Power Systems
Directory of Open Access Journals (Sweden)
Seung-Il Moon
2012-11-01
Full Text Available This paper presents a new coherency identification method for dynamic reduction of a power system. To achieve dynamic reduction, coherency-based equivalence techniques divide generators into groups according to coherency, and then aggregate them. In order to minimize the changes in the dynamic response of the reduced equivalent system, coherency identification of the generators should be clearly defined. The objective of the proposed coherency identification method is to determine the optimal coherent groups of generators with respect to the dynamic response, using the Partitioning Around Medoids (PAM algorithm. For this purpose, the coherency between generators is first evaluated from the dynamic simulation time response, and in the proposed method this result is then used to define a dissimilarity index. Based on the PAM algorithm, the coherent generator groups are then determined so that the sum of the index in each group is minimized. This approach ensures that the dynamic characteristics of the original system are preserved, by providing the optimized coherency identification. To validate the effectiveness of the technique, simulated cases with an IEEE 39-bus test system are evaluated using PSS/E. The proposed method is compared with an existing coherency identification method, which uses the K-means algorithm, and is found to provide a better estimate of the original system.
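To illustrate the partitioning step (not the paper's full dissimilarity definition), the sketch below clusters four hypothetical generators from a precomputed dissimilarity matrix. For such a tiny problem an exhaustive medoid search stands in for PAM's build/swap heuristic, which only matters at larger scale; the objective, minimizing the summed dissimilarity of each point to its nearest medoid, is the same.

```python
import numpy as np
from itertools import combinations

def pam(D, k):
    """Brute-force PAM stand-in: choose k medoids minimizing the total
    dissimilarity of each point to its nearest medoid."""
    n = D.shape[0]
    best, best_cost = None, np.inf
    for medoids in combinations(range(n), k):
        cost = D[:, medoids].min(axis=1).sum()
        if cost < best_cost:
            best, best_cost = medoids, cost
    labels = D[:, best].argmin(axis=1)
    return best, labels

# Hypothetical generator dissimilarities (e.g. distances between their
# dynamic time responses); two obvious coherent groups: {0,1} and {2,3}.
D = np.array([[0, 1, 9, 8],
              [1, 0, 9, 9],
              [9, 9, 0, 1],
              [8, 9, 1, 0]], float)
medoids, labels = pam(D, 2)
```

Each resulting label group corresponds to one coherent generator group to be aggregated in the reduced equivalent system.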
Radionuclide identification algorithm for organic scintillator-based radiation portal monitor
Energy Technology Data Exchange (ETDEWEB)
Paff, Marc Gerrit, E-mail: mpaff@umich.edu; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.
2017-03-21
We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
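The matching step, a spectral angle mapper, reduces to comparing the direction (not the magnitude) of feature vectors. The toy sketch below uses invented four-bin "spectra" purely for illustration; the paper applies the angle to the power spectral density of the cumulative distribution function of the pulse-height distribution, not to raw spectra.

```python
import numpy as np

def spectral_angle(s, r):
    """Angle between measured spectrum s and reference r (radians);
    smaller angle = better match, independent of overall intensity."""
    cos = s @ r / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def identify(measured, references):
    """Return the reference name with the smallest spectral angle."""
    return min(references, key=lambda name: spectral_angle(measured, references[name]))

# Toy reference library (shapes are illustrative, not real isotope spectra).
refs = {"Cs-137": np.array([1.0, 3.0, 0.5, 0.1]),
        "Co-60":  np.array([0.2, 0.8, 2.5, 2.4])}
measured = 7.0 * np.array([0.9, 3.1, 0.6, 0.2])   # scaled, noisy Cs-137-like
print(identify(measured, refs))  # → Cs-137
```

Scale invariance is the point: a stronger or weaker source changes the vector length but not its angle to the matching reference.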
International Nuclear Information System (INIS)
Jung, B. K.; Cho, J. R.; Jeong, W. B.
2015-01-01
The position of vibration sensors influences the modal identification quality of flexible structures for a given number of sensors, and the quality of modal identification is usually estimated in terms of the correlation between the natural modes using the modal assurance criterion (MAC). Sensor placement optimization is characterized by the fact that the design variables are not continuous but discrete, implying that conventional sensitivity-driven optimization methods are not applicable. In this context, this paper presents the application of a genetic algorithm to sensor placement optimization for improving the modal identification quality of flexible structures. A discrete-type optimization problem using a genetic algorithm is formulated by defining the sensor positions and the MAC as the design variables and the objective function, respectively. The proposed GA-based evolutionary optimization method is validated through a numerical experiment with a rectangular plate, and its merit is verified by comparison with cases using different modal correlation measures.
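The MAC itself is a one-line computation. The sketch below (with invented mode-shape samples at four candidate sensor positions) shows the quantity a GA objective would aggregate: off-diagonal MAC values near zero mean the chosen sensor set distinguishes the modes well.

```python
import numpy as np

def mac(phi_i, phi_j):
    """Modal assurance criterion between two mode-shape vectors:
    1 = fully correlated shapes, 0 = orthogonal (well-distinguished)."""
    return abs(phi_i @ phi_j) ** 2 / ((phi_i @ phi_i) * (phi_j @ phi_j))

phi1 = np.array([1.0, 0.5, -0.5, -1.0])   # illustrative mode shapes sampled
phi2 = np.array([1.0, -0.5, -0.5, 1.0])   # at four candidate sensor positions
print(round(mac(phi1, phi2), 3))  # → 0.0
```

A GA would evaluate this over every mode pair for each candidate sensor subset and keep the subsets whose off-diagonal MAC sum is smallest.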
A Genetic Algorithms Based Approach for Identification of Escherichia coli Fed-batch Fermentation
Directory of Open Access Journals (Sweden)
Olympia Roeva
2004-10-01
Full Text Available This paper presents the use of genetic algorithms for the identification of an Escherichia coli fed-batch fermentation process. Genetic algorithms are a directed random search technique, based on the mechanics of natural selection and natural genetics, which can find the global optimal solution in a complex multidimensional search space. The dynamic behavior of the considered process has a known nonlinear structure, described by a system of deterministic nonlinear differential equations according to the mass balance. The parameters of the model are estimated using genetic algorithms. Simulation examples demonstrating the effectiveness and robustness of the proposed identification scheme are included. As a result, the model accurately predicts the process of cultivation of E. coli.
Performance Comparison of Different System Identification Algorithms for FACET and ATF2
Pfingstner, J; Schulte, D
2013-01-01
Good system knowledge is an essential ingredient for the operation of modern accelerator facilities. For example, beam-based alignment algorithms and orbit feedbacks rely strongly on a precise measurement of the orbit response matrix. The quality of the measurement of this matrix can be improved over time by statistically combining the effects of small system excitations with the help of system identification algorithms. These small excitations can be applied in a parasitic mode without stopping the accelerator operation (on-line). In this work, different system identification algorithms are used in simulation studies for the response matrix measurement at ATF2. The results for ATF2 are finally compared with the results for FACET, the latter originating from earlier work.
Parameter identification of PEMFC model based on hybrid adaptive differential evolution algorithm
International Nuclear Information System (INIS)
Sun, Zhe; Wang, Ning; Bi, Yunrui; Srinivasan, Dipti
2015-01-01
In this paper, a HADE (hybrid adaptive differential evolution) algorithm is proposed for the identification problem of PEMFC (proton exchange membrane fuel cell). Inspired by biological genetic strategy, a novel adaptive scaling factor and a dynamic crossover probability are presented to improve the adaptive and dynamic performance of differential evolution algorithm. Moreover, two kinds of neighborhood search operations based on the bee colony foraging mechanism are introduced for enhancing local search efficiency. Through testing the benchmark functions, the proposed algorithm exhibits better performance in convergent accuracy and speed. Finally, the HADE algorithm is applied to identify the nonlinear parameters of PEMFC stack model. Through experimental comparison with other identified methods, the PEMFC model based on the HADE algorithm shows better performance. - Highlights: • We propose a hybrid adaptive differential evolution algorithm (HADE). • The search efficiency is enhanced in low and high dimension search space. • The effectiveness is confirmed by testing benchmark functions. • The identification of the PEMFC model is conducted by adopting HADE.
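For orientation, the non-adaptive core that HADE builds on is the classic DE/rand/1/bin generation step, sketched below on a toy objective (a stand-in for the PEMFC model-fit error). The fixed scaling factor f and crossover probability cr here are exactly what HADE replaces with adaptive schedules.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_step(pop, fit, f, cr, bounds):
    """One DE/rand/1/bin generation with greedy selection."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # Mutation: combine three distinct other individuals.
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = np.clip(a + f * (b - c), bounds[0], bounds[1])
        # Binomial crossover, forcing at least one mutant gene.
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True
        trial = np.where(cross, mutant, pop[i])
        # Greedy selection: keep the trial only if it improves the fit.
        if fit(trial) < fit(pop[i]):
            new_pop[i] = trial
    return new_pop

# Minimize a toy 2-D objective in place of the PEMFC parameter-fit error.
sphere = lambda x: float(x @ x)
pop = rng.uniform(-5, 5, (20, 2))
for _ in range(100):
    pop = de_step(pop, sphere, f=0.7, cr=0.9, bounds=(-5, 5))
best = min(pop, key=sphere)
```

In the identification setting, `fit` would evaluate the discrepancy between the PEMFC stack model output and measured polarization data for a candidate parameter vector.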
Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani
2015-03-01
In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and the cubic kernels, respectively, of the classical Volterra model. The alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Identification of time-varying nonlinear systems using differential evolution algorithm
DEFF Research Database (Denmark)
Perisic, Nevena; Green, Peter L; Worden, Keith
2013-01-01
... thus identification of time-varying systems with nonlinearities can be a very challenging task. In order to avoid conventional least squares and gradient identification methods, which require uni-modal and doubly differentiable objective functions, this work proposes a modified differential evolution (DE) algorithm for the identification of time-varying systems. DE is an evolutionary optimisation method developed to perform direct search in a continuous space without requiring any derivative estimation. DE is modified so that the objective function changes with time to account for the continuing inclusion of new data within an error metric. This paper presents results of identification of a time-varying SDOF system with Coulomb friction using simulated noise-free and noisy data for the case of time-varying friction coefficient, stiffness and damping. The obtained results are promising ...
Comparison of Clustering Algorithms for the Identification of Topics on Twitter
Directory of Open Access Journals (Sweden)
Marjori N. M. Klinczak
2016-05-01
Full Text Available Topic identification in social networks has become an important task when dealing with event detection, particularly when global communities are affected. To address this problem, text processing techniques and machine learning algorithms have been extensively used. In this paper we compare four clustering algorithms – k-means, k-medoids, DBSCAN and NMF (Non-negative Matrix Factorization) – in order to detect topics in textual messages obtained from Twitter. The algorithms were applied to a database initially composed of tweets with hashtags related to the recent Nepal earthquake as initial context. The obtained results suggest that the NMF clustering algorithm produces superior results, providing simpler clusters that are also easier to interpret.
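As a sketch of the NMF variant on a toy term-count matrix (the vocabulary and counts are invented; real tweet data would first pass through tokenization and weighting), using plain multiplicative updates rather than any particular library implementation:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF: V (docs x terms) ≈ W (docs x topics) @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, k)) + 0.1, rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Tiny toy matrix: 4 tweets x 5 terms with two obvious topic blocks
# (tweets 0-1 share terms 0-1; tweets 2-3 share terms 2-4).
V = np.array([[3, 2, 0, 0, 1],
              [2, 3, 1, 0, 0],
              [0, 0, 3, 2, 2],
              [0, 1, 2, 3, 2]], float)
W, H = nmf(V, k=2)
topics = W.argmax(axis=1)    # dominant topic per tweet
```

The rows of H give the term weights per topic, which is why NMF clusters tend to be directly interpretable as ranked term lists.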
Hong, Xia
2006-07-01
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
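The Box-Cox transformation at the heart of the letter is the standard one-parameter power family (the network models the transformed output, with λ chosen by maximum likelihood); a minimal sketch of the transform and its inverse:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform of a positive response; lam = 0 reduces to log(y)."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Inverse transform, mapping model output back to the original scale."""
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

y = np.array([0.5, 1.0, 2.0, 4.0])
z = boxcox(y, 0.5)           # transformed output the RBF network would model
y_back = inv_boxcox(z, 0.5)  # recovers y
```

Training the RBF network on z rather than y is what lets a simple squared-error criterion behave like a maximum-likelihood fit when the raw output is skewed.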
Gao, Xiaohui; Liu, Yongguang
2018-01-01
The giant magnetostrictive actuator (GMA) exhibits a strongly nonlinear relationship between input and output, so establishing a mathematical model and identifying its parameters is essential for studying its characteristics and improving control accuracy. The current-displacement model is first built from Jiles-Atherton (J-A) model theory, Ampere's circuital law and a stress-magnetism coupling model. Relationships between the unknown parameters and the hysteresis loops are then studied to determine the data-taking scope. A modified simulated annealing differential evolution algorithm (MSADEA) is proposed, combining the fast convergence of differential evolution with the hill-jumping property of simulated annealing to enhance convergence speed and performance. Simulation and experimental results show that this algorithm is not only simple and efficient but also offers fast convergence and high identification accuracy.
International Nuclear Information System (INIS)
Lin, Chang Sheng; Tseng, Tse Chuan
2014-01-01
Modal identification from response data alone is studied for structural systems under nonstationary ambient vibration. The topic of this paper is the estimation of modal parameters from nonstationary ambient vibration data by applying the random decrement algorithm with a time-varying threshold level. In the conventional random decrement algorithm, the threshold level for evaluating random decrement signatures is defined as the standard deviation of the response data of the reference channel. In practice, however, noise in the original response data may distort the random decrement signatures. To improve identification accuracy, a modification of the sampling procedure in the random decrement algorithm is proposed for modal-parameter identification from nonstationary ambient response data. A time-varying threshold level is introduced for acquiring the sample time histories used in the averaging analysis; it is defined as the temporal root-mean-square function of the structural response, which can describe a wide variety of nonstationary behaviors encountered in practice, such as the time-varying amplitude (variance) of a nonstationary process in a seismic record. Numerical simulations confirm the validity and robustness of the proposed modal-identification method for nonstationary ambient response data under noisy conditions.
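The core of the modified procedure, a random decrement average triggered by a sliding-RMS threshold rather than a global standard deviation, can be sketched as follows; the signal and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonstationary "ambient" response: a decaying-amplitude
# oscillation plus noise, a stand-in for a seismic-type record.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
x = np.exp(-0.03 * t) * np.sin(2 * np.pi * 2.0 * t) \
    + 0.1 * rng.standard_normal(t.size)

# Time-varying threshold: temporal RMS over a sliding window,
# instead of the single global standard deviation.
win = int(2 * fs)
rms = np.sqrt(np.convolve(x**2, np.ones(win) / win, mode="same"))
threshold = rms

# Random decrement signature: average all segments that start where
# the response up-crosses the local threshold.
seg_len = int(2 * fs)
starts = np.where((x[:-1] < threshold[:-1]) & (x[1:] >= threshold[1:]))[0]
starts = starts[starts + seg_len < x.size]
signature = np.mean([x[s:s + seg_len] for s in starts], axis=0)
print(len(starts), signature.shape)
```

Because the trigger level follows the local RMS, segments are still collected late in the record when the amplitude has decayed, which is exactly what a fixed global threshold would miss.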
A Brightness-Referenced Star Identification Algorithm for APS Star Trackers
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-01-01
Star trackers are currently the most accurate spacecraft attitude sensors and are therefore widely used in remote sensing satellites. Because traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, however, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. The algorithm uses k-vector search theory and adds the imaged stars' intensities to narrow the search scope and thereby increase the efficiency of the matching process. Depending on the imaging conditions (slew, bright bodies, etc.), the matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If reference bright stars (stars brighter than magnitude three) appear, the algorithm runs the three-star mode and efficiency improves further. The proposed method was compared with two other established methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog of 1500 stars, the results show that, in the absence of false stars, the efficiency of the new method is 4∼5 times that of the pyramid method and 35∼37 times that of the geometric method. PMID:25299950
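The way imaged brightness narrows the catalog search can be illustrated with a plain binary search over a magnitude-sorted toy catalog; this is a simplification of the paper's k-vector scheme, and all magnitudes and star names are invented.

```python
import bisect

# Toy star catalog: (magnitude, star_id), sorted by magnitude.  Imaged
# brightness is used to cut the candidate list down before any
# geometric (angular-distance) matching is attempted.
catalog = sorted([(1.2, "A"), (2.8, "B"), (3.1, "C"),
                  (4.0, "D"), (4.1, "E"), (5.6, "F")])
mags = [m for m, _ in catalog]

def candidates_by_brightness(measured_mag, tol=0.5):
    """Return catalog stars whose magnitude is within tol of the measurement."""
    lo = bisect.bisect_left(mags, measured_mag - tol)
    hi = bisect.bisect_right(mags, measured_mag + tol)
    return [sid for _, sid in catalog[lo:hi]]

print(candidates_by_brightness(4.05))
```

For a measured magnitude of 4.05 only the two catalog stars near magnitude 4 survive the cut, so the subsequent angular matching runs over a much smaller candidate set.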
Multivariate algorithms for initiating event detection and identification in nuclear power plants
International Nuclear Information System (INIS)
Wu, Shun-Chi; Chen, Kuang-You; Lin, Ting-Han; Chou, Hwai-Pwu
2018-01-01
Highlights: •Multivariate algorithms for NPP initiating event detection and identification. •Recordings from multiple sensors are simultaneously considered for detection. •Both spatial and temporal information is used for event identification. •Untrained event isolation avoids falsely relating an untrained event. •Efficacy of the algorithms is verified with data from the Maanshan NPP simulator. -- Abstract: To prevent escalation of an initiating event into a severe accident, promptly detecting its occurrence and precisely identifying its type are essential. In this study, several multivariate algorithms for initiating event detection and identification are proposed to help maintain safe operations of nuclear power plants (NPPs). By monitoring changes in the NPP sensing variables, an event is detected when preset thresholds are exceeded. Unlike existing approaches, recordings from sensors of the same type are considered simultaneously for detection, and no subjective reasoning is involved in setting these thresholds. To facilitate efficient event identification, a spatiotemporal feature extractor is proposed. The extracted features consist of the temporal traits used by existing techniques and the spatial signature of an event. Through F-score-based feature ranking, only the features that are most discriminant in classifying the events under consideration are retained for identification. Moreover, an untrained event isolation scheme is introduced to avoid relating an untrained event to those in the event dataset, so that improper recovery actions can be prevented. Results from experiments covering 12 event classes and a total of 125 events generated using Taiwan's Maanshan NPP simulator illustrate the efficacy of the proposed algorithms.
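One standard way to set a multivariate detection threshold without subjective per-sensor limits is a chi-square quantile on the Mahalanobis distance of a joint sensor reading; the sketch below uses synthetic Gaussian "sensor" data and is an assumption about the flavor of the approach, not the paper's exact algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Baseline: joint recordings from four same-type sensors under normal
# operation (toy stand-in for NPP sensing variables).
normal = rng.multivariate_normal(mean=[0, 0, 0, 0],
                                 cov=np.eye(4), size=5000)
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

# Threshold from a chi-square quantile rather than a hand-tuned
# per-sensor limit: under normal conditions the squared Mahalanobis
# distance of a 4-sensor reading follows ~ chi2(4).
alpha = 1e-4
threshold = stats.chi2.ppf(1 - alpha, df=4)

def is_event(reading):
    d = reading - mu
    return float(d @ cov_inv @ d) > threshold

print(is_event(np.zeros(4)), is_event(np.array([6.0, 6.0, 6.0, 6.0])))
```

The false-alarm rate is controlled directly by `alpha`, so the threshold follows from a probability statement instead of expert tuning.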
A New Algorithm for Radioisotope Identification of Shielded and Masked SNM/RDD Materials
International Nuclear Information System (INIS)
Jeffcoat, R.
2012-01-01
Detection and identification of shielded and masked nuclear materials is crucial to national security, but vast borders and high volumes of traffic impose stringent requirements on practical detection systems. Such tools must be mobile, and hence low power, provide a low false alarm rate, and be sufficiently robust to be operable by non-technical personnel. Currently fielded systems have not achieved all of these requirements simultaneously. Transport modeling such as that done in GADRAS can predict observed spectra with a high degree of fidelity; our research focuses on a radionuclide identification algorithm that inverts this modeling within the constraints imposed by a handheld device. Key components of this work include incorporation of uncertainty as a function of both the background radiation estimate and the hypothesized sources, dimensionality reduction, and non-negative matrix factorization. We have partially evaluated the performance of our algorithm on a third-party data collection made with two different sodium iodide detection devices. Initial results indicate, with caveats, that our algorithm performs as well as or better than the on-board identification algorithms. The system developed was based on a probabilistic approach with improved variance modeling relative to past work, and was chosen on the basis of technical innovation and performance over algorithms developed at two competing research institutions. One key outcome of the probabilistic approach was an intuitive measure of confidence, useful enough that a classification algorithm was developed around alarming on high-confidence targets. This paper presents and discusses results of this novel approach to accurately identifying shielded or masked radioisotopes with radiation detection systems.
Online Identification of Photovoltaic Source Parameters by Using a Genetic Algorithm
Directory of Open Access Journals (Sweden)
Giovanni Petrone
2017-12-01
Full Text Available In this paper, an efficient method for the online identification of the photovoltaic single-diode model parameters is proposed. The combination of a genetic algorithm with explicit equations yields precise results without the direct measurement of short-circuit current and open-circuit voltage that is typically required by offline identification methods. Since the proposed method requires only voltage and current values close to the maximum power point, it can easily be integrated into any photovoltaic system, where it operates online without compromising power production. The proposed approach has been implemented and tested on an embedded system, and it exhibits good performance for monitoring/diagnosis applications.
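A minimal real-coded genetic algorithm fitting a simplified single-diode law to samples near the maximum power point gives the flavor of the identification step; the model constants, bounds and GA settings are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified single-diode law I = Iph - I0*(exp(V/(n*Vt)) - 1) with I0
# and Vt fixed; the GA identifies Iph and the ideality factor n from a
# few (V, I) samples near the maximum power point.
I0, Vt = 1e-9, 0.0258

def model(V, Iph, n):
    return Iph - I0 * (np.exp(V / (n * Vt)) - 1.0)

V = np.linspace(0.45, 0.55, 8)             # samples around the MPP region
I_meas = model(V, 5.0, 1.3)                # "measured" data, true (Iph, n)

def fitness(p):
    return -np.sum((model(V, *p) - I_meas) ** 2)

lo, hi = np.array([1.0, 0.8]), np.array([10.0, 2.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(150):
    f = np.array([fitness(p) for p in pop])
    elite = pop[np.argmax(f)].copy()
    i, j = rng.integers(0, 40, (2, 40))    # tournament selection
    parents = pop[np.where(f[i] > f[j], i, j)]
    w = rng.uniform(0, 1, (40, 1))         # blend crossover
    children = w * parents + (1 - w) * parents[rng.permutation(40)]
    children += rng.normal(0, 0.05 * 0.97 ** gen, children.shape)
    children[0] = elite                    # elitism: keep the best member
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
resid = np.max(np.abs(model(V, *best) - I_meas))
print(best, resid)
```

The annealed mutation width (shrinking by 3% per generation) lets the population explore early and refine late, which is what makes the final fit residual small.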
Parameter identification of Rossler's chaotic system by an evolutionary algorithm
Energy Technology Data Exchange (ETDEWEB)
Chang, W.-D. [Department of Computer and Communication, Shu-Te University, Kaohsiung 824, Taiwan (China)]. E-mail: wdchang@mail.stu.edu.tw
2006-09-15
In this paper, a differential evolution (DE) algorithm is applied to parameter identification of Rossler's chaotic system. Differential evolution has been shown to possess a powerful searching capability for a given optimization problem, and it allows parameter solutions to appear directly as floating-point values without further numerical coding or decoding. Three unknown parameters of Rossler's chaotic system are optimally estimated using the DE algorithm. Finally, a numerical example is given to verify the effectiveness of the proposed method.
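A short-horizon version of this identification is easy to reproduce with SciPy's differential evolution: generate a reference trajectory of the Rossler system at known parameters, then recover them by minimizing the trajectory misfit. The Euler integrator, horizon and all settings are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rossler system: dx = -y - z, dy = x + a*y, dz = b + z*(x - c).
def simulate(a, b, c, n=400, dt=0.01):
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        x, y, z = (x + dt * (-y - z),
                   y + dt * (x + a * y),
                   z + dt * (b + z * (x - c)))
        out[i] = x
    return out

target = simulate(0.2, 0.2, 5.7)           # reference run at the true params

def cost(p):                                # trajectory misfit to minimize
    return np.sum((simulate(*p) - target) ** 2)

result = differential_evolution(
    cost, bounds=[(0.1, 0.3), (0.1, 0.3), (4.0, 7.0)],
    seed=3, popsize=12, maxiter=40, tol=1e-12)
print(result.x, result.fun)
```

A short horizon keeps the cost surface smooth; over long horizons the chaotic divergence of nearby trajectories makes trajectory-matching objectives extremely rugged, which is part of why population-based searches like DE are favored here.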
International Nuclear Information System (INIS)
Benjamins, H.M.
1983-01-01
A device is claimed for interrupting an elution process in a radioisotope generator before an elution vial is entirely filled. The generator is simultaneously exposed to sterile air both toward the generator column and toward the elution vial.
A Novel Algorithm for Power Flow Transferring Identification Based on WAMS
Directory of Open Access Journals (Sweden)
Xu Yan
2015-01-01
Full Text Available After a faulted transmission line is removed, the power flow on it is transferred to other lines in the network. If those lines are already heavily loaded, the transferred flow may cause non-fault overloads and incorrect operation of far-ranging backup relays, which are considered key factors leading to cascading trips. In this paper, a novel algorithm for power flow transfer identification based on the wide area measurement system (WAMS) is proposed, through which possible incorrect tripping of backup relays can be blocked in time. A new concept, the Transferred Flow Characteristic Ratio (TFCR), is presented and applied to the identification criteria. The mathematical derivation of TFCR is carried out in detail using power system short-circuit fault modeling. The feasibility and effectiveness of the proposed algorithm in preventing the malfunction of backup relays are demonstrated by a large number of simulations.
Akkaş, Efe; Çubukçu, H. Evren; Artuner, Harun
2014-05-01
C5.0 Decision Tree algorithm. The predictions of the decision tree classifier, namely the matching of the test data with the appropriate mineral group, yield an overall accuracy of >90%. Moreover, the algorithm successfully discriminated some mineral groups despite their similar elemental compositions, such as orthopyroxene ((Mg,Fe)2[Si2O6]) and olivine ((Mg,Fe)2[SiO4]). Furthermore, the effects of varying operating conditions on the classifier were insignificant. These results demonstrate that the decision tree algorithm is an accurate, rapid and automated method for mineral classification/identification. Hence, the decision tree algorithm would be a promising component of an expert system for real-time, automated mineral identification using energy dispersive spectrometers, unaffected by the operating conditions. Keywords: mineral identification, energy dispersive spectrometry, decision tree algorithm.
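The classification step can be sketched with scikit-learn's decision tree (a CART implementation standing in for C5.0) on synthetic "EDS-like" composition vectors; the intensities below are invented to mimic the olivine/orthopyroxene case, not real spectra.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)

# Toy stand-in for EDS measurements: relative Mg/Fe/Si/O intensities
# for two compositionally similar groups, olivine (Mg,Fe)2[SiO4] vs
# orthopyroxene (Mg,Fe)2[Si2O6].  The Si:O ratio differs slightly,
# which is exactly the kind of subtle feature a tree can split on.
def sample(si, o, n):
    base = np.array([0.25, 0.15, si, o])   # Mg, Fe, Si, O fractions
    return base + 0.01 * rng.standard_normal((n, 4))

X = np.vstack([sample(0.12, 0.48, 200), sample(0.18, 0.54, 200)])
y = np.array([0] * 200 + [1] * 200)        # 0 = olivine, 1 = orthopyroxene

clf = DecisionTreeClassifier(random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])          # hold out every other sample
print(acc)
```

The learned tree thresholds a single intensity ratio at the root, which is why the abstract can report high accuracy even for groups with overlapping chemistry.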
Load power device and system for real-time execution of hierarchical load identification algorithms
Yang, Yi; Madane, Mayura Arun; Zambare, Prachi Suresh
2017-11-14
A load power device includes a power input; at least one power output for at least one load; and a plurality of sensors structured to sense voltage and current at the at least one power output. A processor is structured to provide real-time execution of: (a) a plurality of load identification algorithms, and (b) event detection and operating mode detection for the at least one load.
International Nuclear Information System (INIS)
Hoffstadt, Thorben; Griese, Martin; Maas, Jürgen
2014-01-01
Transducers based on dielectric electroactive polymers (DEAP) use electrostatic pressure to convert electric energy into strain energy or vice versa. They are also designed for sensor applications, monitoring the actual stretch state on the basis of the deformation-dependent capacitive-resistive behavior of the DEAP. To enable efficient and proper closed-loop control of these transducers, e.g. in positioning or energy harvesting applications, sensors based on DEAP material can be integrated into the transducers and evaluated externally; alternatively, the transducer itself can be used as a sensor, in terms of self-sensing. For this purpose the characteristic electrical behavior of the transducer has to be evaluated to determine the mechanical state, and adequate online identification algorithms with sufficient accuracy and dynamics are required, independent of the sensor concept utilized, to determine the electrical DEAP parameters in real time. Therefore, in this contribution, frequency-domain algorithms are developed for identification of the capacitance as well as the electrode and polymer resistances of a DEAP, and are validated by measurements. These algorithms are designed for self-sensing applications, especially where the power electronics is operated at a constant switching frequency and parasitic harmonic oscillations are induced alongside the desired DC value. These oscillations can be used for the online identification, so an additional superimposed excitation is no longer necessary. For this purpose a dual active bridge (DAB) is introduced to drive the DEAP transducer. Finally, the capabilities of the real-time identification algorithm in combination with the DAB are presented in detail and discussed. (paper)
An integer optimization algorithm for robust identification of non-linear gene regulatory networks
Directory of Open Access Journals (Sweden)
Chemmangattuvalappil Nishanth
2012-09-01
Full Text Available Abstract Background Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advances in high-throughput experimental techniques have initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions, and robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their predictions. Results We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this formalism, we address data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we incorporate non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the sparsity of biological interactions: the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component, and the objective function minimizes the number of network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental datasets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters.
Directory of Open Access Journals (Sweden)
Yanbin Gao
2015-01-01
Full Text Available Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques and is widely used for optimization. Triaxial accelerometer error coefficients are relatively unstable under environmental disturbances and instrument aging, so identifying them accurately and at low cost is important for improving the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) is first introduced that eliminates the demerits of AFSA (no use of the artificial fishes' previous experience, lack of balance between exploration and exploitation, and high computational cost); in NAFSA, the behavioral functions and overall procedure of AFSA are improved through several parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS), a combination that makes maximum use of both approaches for triaxial accelerometer error coefficient identification. The NAFSA-identified coefficients are then tested in a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment, and the merits of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Both experiments demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as the optimization search tool and an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: it can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree closely with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For multi-point, multi-variable cases there are some errors in the results, because many possible combinations of pollution sources exist; but with the help of prior experience to narrow the search scope, the relative errors of the identification results remain below 5%, which shows that the established source identification model can be used to direct emergency responses.
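The structure of the identification, an analytic forward solution of the one-dimensional unsteady water quality equation inside a misfit objective, can be sketched as follows; a plain grid search stands in for the BGA, and all hydraulic constants are illustrative.

```python
import numpy as np

# Analytic solution of the 1-D unsteady water-quality equation for an
# instantaneous point release (mass M at position x0), used as the
# forward model inside the identification objective.
A, D, u, k = 50.0, 5.0, 0.3, 1e-5      # area, dispersion, velocity, decay

def conc(x, t, M, x0):
    return (M / (A * np.sqrt(4 * np.pi * D * t))
            * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))
            * np.exp(-k * t))

# "Measured" concentrations at three stations and three times,
# generated from an unknown release to be recovered.
true_M, true_x0 = 800.0, 120.0
xs = np.array([300.0, 500.0, 800.0])
ts = np.array([600.0, 1200.0, 2400.0])
obs = conc(xs[:, None], ts[None, :], true_M, true_x0)

# Grid search standing in for the genetic-algorithm search:
# minimize the squared misfit over (M, x0).
Ms = np.linspace(100, 1500, 141)
x0s = np.linspace(0, 300, 301)
err = [[np.sum((conc(xs[:, None], ts[None, :], M, x0) - obs) ** 2)
        for x0 in x0s] for M in Ms]
i, j = np.unravel_index(np.argmin(err), (141, 301))
print(Ms[i], x0s[j])
```

Swapping the exhaustive grid for a GA changes only the search strategy; the objective (forward-model misfit against the station observations) is the same in both cases.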
Lee, Sangkyu
Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered among the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing borders against nuclear threats. In general, radiation portal monitors enable detection of gamma- and neutron-emitting radioisotopes. Passive and active interrogation techniques, fielded and/or under development, all aim at increasing accuracy and reliability while shortening the interrogation time and reducing equipment cost. Equally important efforts aim at advancing algorithms that process imaging data efficiently, providing reliable "readings" of the interiors of examined volumes of various sizes, from cargo containers to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis from a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm combines gamma spectroscopy and cosmic muon tomography; the other is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both high-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy while shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection
International Nuclear Information System (INIS)
Rawool-Sullivan, Mohini; Bounds, John Alan; Brumby, Steven P.; Prasad, Lakshman; Sullivan, John P.
2012-01-01
This is the final report of the project titled 'Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes,' PMIS project number LA10-HUMANID-PD03. It summarizes work performed over the FY10 time period. The goal of the work was to demonstrate principles of emulating a human analysis approach to the data collected using radiation isotope identification devices (RIIDs). Human analysts begin analyzing a spectrum based on features in the spectrum: the lines and shapes present in it. The proposed work was a feasibility study to pick out all gamma-ray peaks and other features such as Compton edges, bremsstrahlung, presence or absence of shielding, presence of neutrons, and escape peaks. Ultimately, success of this feasibility study will allow us to collectively explain the identified features and reconstruct a realistic scenario that produced a given spectrum. We wanted to develop and demonstrate machine learning algorithms that will qualitatively enhance the automated identification capabilities of portable radiological sensors currently used in the field.
[Algorithm of toxigenic genetically altered Vibrio cholerae El Tor biovar strain identification].
Smirnova, N I; Agafonov, D A; Zadnova, S P; Cherkasov, A V; Kutyrev, V V
2014-01-01
The aim was to develop an algorithm for identification of genetically altered Vibrio cholerae El Tor biovar strains that determines the serogroup, serovar and biovar of a studied isolate from pheno- and genotypic properties, detects genetically altered El Tor cholera agents, differentiates them by epidemic potential, and evaluates the variability of key pathogenicity genes. A complex analysis of 28 natural V. cholerae strains was carried out using traditional microbiological methods, PCR and fragment sequencing. The resulting algorithm for identification of toxigenic genetically altered V. cholerae El Tor biovar strains includes four stages: determination of serogroup, serovar and biovar from phenotypic properties; confirmation of serogroup and biovar from molecular-genetic properties; determination of strains as genetically altered; and differentiation of genetically altered strains by epidemic potential, with detection of polymorphisms in the key pathogenicity genes ctxB and tcpA. The algorithm is based on traditional microbiological methods, PCR and sequencing of gene fragments. Its use will increase the effectiveness of detection of genetically altered variants of the El Tor cholera agent and their differentiation by epidemic potential, and will establish the polymorphism of genes coding key pathogenicity factors, helping to determine the origins of strains and possible routes of introduction of the infection.
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process together with its stable closed-loop counterpart, using an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The precision and accuracy of the CBO-based approach are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also least. It is also observed that optimizing the output MSE in the presence of outliers consistently yields very close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the approach. The optimum MSE values, computational times and statistical properties of the MSEs are all found to be superior to those of other similar stochastic-algorithm-based approaches reported in recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Guang-zhou Chen
2015-01-01
Full Text Available Parameter identification plays a crucial role in simulating and using a model. This paper first carries out a sensitivity analysis of the 2-chlorophenol oxidation model in supercritical water using the Monte Carlo method. Then, to address the nonlinearity of the model, two improved differential search (DS) algorithms are proposed for parameter identification. One strategy adopts Latin hypercube sampling in place of the uniform distribution of the initial population; the other combines DS with the simplex method. The sensitivity analysis reveals the sensitivity of each model parameter and the degree of difficulty in identifying it; furthermore, the posterior probability distributions of the parameters and the collaborative relationship between any two parameters can be obtained. To verify the effectiveness of the improved algorithms, their optimization performance in kinetic parameter estimation is studied and compared with that of the basic DS algorithm, differential evolution, artificial bee colony optimization, and quantum-behaved particle swarm optimization. The experimental results demonstrate that DS with Latin hypercube sampling does not perform better, while the hybrid method has strong global and local search ability and is more effective than the other algorithms.
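The first improvement, Latin hypercube initialization, is compact enough to sketch directly; the stratification property asserted at the end is what distinguishes it from plain uniform sampling of the initial population.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latin hypercube sampling for an initial population: each of the d
# dimensions is split into n equal strata and every stratum is hit
# exactly once, giving better coverage than i.i.d. uniform draws.
def latin_hypercube(n, d, rng):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # one point per stratum
    for j in range(d):
        rng.shuffle(u[:, j])                              # decouple dimensions
    return u

pop = latin_hypercube(10, 3, rng)
# Every decile [i/10, (i+1)/10) contains exactly one sample per dimension.
print(np.sort((pop * 10).astype(int), axis=0)[:, 0])
```

The samples can then be rescaled to the parameter bounds of the kinetic model before the DS iterations begin.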
Implementation of an algorithm for cylindrical object identification using range data
Bozeman, Sylvia T.; Martin, Benjamin J.
1989-01-01
One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
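The Hough-transform line-detection stage can be sketched with a minimal accumulator over (rho, theta) bins; the edge points below are a synthetic stand-in for Sobel-mask output.

```python
import numpy as np

# Edge points on the line y = 2x + 10, a stand-in for Sobel output.
edge_pts = [(x, 2 * x + 10) for x in range(40)]

# Accumulate votes in (rho, theta) space using the normal
# parameterization rho = x*cos(theta) + y*sin(theta).
thetas = np.deg2rad(np.arange(180))
diag = 150                                   # covers max |rho| for these points
acc = np.zeros((2 * diag, len(thetas)), dtype=int)
for x, y in edge_pts:
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    acc[rhos + diag, np.arange(len(thetas))] += 1

# The strongest bin recovers the line's normal parameters.
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
rho, theta = r_idx - diag, thetas[t_idx]
print(rho, np.rad2deg(theta))
```

Every edge point falls within the rounding width of the winning (rho, theta) bin, which is the consistency check used to accept the detected line before moving on to the circular-arc fitting on data slices.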
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. To identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is proposed, which helps balance local search ability and global exploitation capability, and the formula by which the scout bees search for a food source is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agrees well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than standard PSO.
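Forward simulation of a first-order (n = 1) Bouc-Wen model shows the hysteresis gap the IABC identification has to reproduce; all parameter values here are illustrative, not identified ones.

```python
import numpy as np

# First-order Bouc-Wen model of a piezo actuator: x = d*v - h, with
# the hysteresis state driven by
#   dh = alpha*dv - beta*|dv|*h - gamma*dv*|h|   (the n = 1 case).
d, alpha, beta, gamma = 1.0, 0.6, 0.3, 0.2

def simulate(voltages):
    h, xs = 0.0, []
    v_prev = voltages[0]
    for v in voltages:
        dv = v - v_prev
        h += alpha * dv - beta * abs(dv) * h - gamma * dv * abs(h)
        xs.append(d * v - h)
        v_prev = v
    return np.array(xs)

v_up = np.linspace(0, 10, 200)
x_up = simulate(v_up)                                   # ascending sweep
x_dn = simulate(np.concatenate([v_up, v_up[::-1]]))[200:]  # descending sweep

# The two branches differ at the same voltage: that gap is the
# hysteresis loop the identification has to capture.
gap = np.max(np.abs(x_dn[::-1] - x_up))
print(gap)
```

An identification routine (IABC, PSO or otherwise) would wrap this simulator in an error function against measured voltage-displacement data and search over (d, alpha, beta, gamma).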
Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng
2014-01-01
Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed on a different rationale, each has its own merits and claimed to outperform the others. However, such claims are prone to subjectivity because each algorithm was compared with only a few others, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on them, to conduct a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted or proposed, the cooperativity of each PCTFP was measured, and for each performance index every set of PCTFPs under evaluation was given a ranking score according to the mean cooperativity of the set. The ranking scores of a set of PCTFPs were seen to vary with the performance index, implying that an algorithm for predicting cooperative TF pairs may be strong in some respects but weak in others. We finally made a comprehensive ranking of these 14 sets; the results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted or proposed eight performance indices to make a comprehensive performance evaluation of the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms
Liu, Rensong; Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan
2017-01-01
Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interfaces (BCIs). However, the number of MI states that can be classified is limited, and classification accuracy rates are low because of the nonlinear and nonstationary characteristics of the signals. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. For electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used to remove EOG artifacts. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imagined movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex-algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909
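As an illustration of the feature-extraction step, the following is a minimal regularized-CSP sketch: class covariance matrices are shrunk toward a generic covariance before solving the CSP eigenproblem, loosely in the spirit of "generic learning". The regularization weight `beta`, the generic covariance, and the random toy trials are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

def rcsp_filters(trials_a, trials_b, generic_cov, beta=0.1, n_filters=2):
    """trials_*: lists of (channels x samples) arrays. Returns spatial filters."""
    def avg_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    # Shrink each class covariance toward the generic covariance (assumption).
    ca = (1 - beta) * avg_cov(trials_a) + beta * generic_cov
    cb = (1 - beta) * avg_cov(trials_b) + beta * generic_cov
    # CSP eigenproblem: eigenvectors of (ca + cb)^-1 ca.
    evals, evecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
    order = np.argsort(evals.real)
    # Take filters from both ends of the eigenvalue spectrum.
    pick = np.concatenate([order[:n_filters // 2],
                           order[-(n_filters - n_filters // 2):]])
    return evecs.real[:, pick]

rng = np.random.default_rng(0)
trials_a = [rng.standard_normal((4, 100)) for _ in range(10)]
# Second class with inflated variance on channel 0 (toy class difference).
trials_b = [rng.standard_normal((4, 100)) * np.array([[2.0], [1], [1], [1]])
            for _ in range(10)]
W = rcsp_filters(trials_a, trials_b, np.eye(4) / 4)
```

Log-variances of trials projected through `W` would then serve as features for the KNN/SVM stage.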
Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation
An, Lu; Guo, Baolong
2018-03-01
Recently, various illegal constructions have appeared frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. First, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, so that illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set from the International Society for Photogrammetry and Remote Sensing (ISPRS).
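The final clustering step can be illustrated with a simple region-growing routine: points within a fixed radius of any cluster member are merged into that cluster. The 2D toy points and the `radius` threshold are assumptions; the paper works on 3D point clouds after multi-scale ground filtering.

```python
import numpy as np

def region_grow(points, radius=1.0):
    """Label points by flood-filling neighborhoods within `radius`."""
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 means unvisited
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            dists = np.linalg.norm(points - points[i], axis=1)
            # Grow the region by all unvisited points within the radius.
            for j in np.where((dists < radius) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two well-separated toy clusters.
pts = np.array([[0, 0], [0.5, 0], [0.6, 0.2], [5, 5], [5.2, 5.1]])
labels = region_grow(pts, radius=1.0)
```

Each resulting cluster would then be compared against a building registry to flag illegal constructions.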
Directory of Open Access Journals (Sweden)
Tautz Diethard
2002-03-01
Background: The identification of species or species groups with specific oligonucleotides as molecular signatures is becoming increasingly popular for bacterial samples. However, it also shows great promise for other small organisms that are taxonomically difficult to track. Results: We have devised an algorithm that aims to find the optimal probes for any given set of sequences. The program requires only a crude alignment of these sequences as input and is optimized for performance so that it can also handle very large datasets. The algorithm is designed so that the position of mismatches in the probes influences the selection, and it makes provision for single-nucleotide outloops. Program implementations are available for Linux and Windows.
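A heavily simplified version of the probe-selection idea can be sketched as follows: signature k-mers are those shared by every target sequence and absent from every non-target sequence. Exact matching is an assumption here; the actual algorithm additionally weights mismatch positions and allows single-nucleotide outloops.

```python
def signature_kmers(targets, non_targets, k=4):
    """Return k-mers present in all target sequences and no non-target sequence."""
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}
    shared = set.intersection(*(kmers(s) for s in targets))
    forbidden = set().union(*(kmers(s) for s in non_targets))
    return shared - forbidden

# Toy sequences (assumptions for illustration).
sigs = signature_kmers(["ACGTACGG", "TTACGTAC"], ["GGGGCCCC"], k=4)
```

In practice the candidate signatures would then be ranked by melting temperature and mismatch robustness before being used as probes.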
Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande
2018-04-01
Partial fingerprint identification technology, which is mainly used in devices with small sensor areas such as cellphones, USB flash drives, and computers, has attracted increasing attention in recent years because of its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation. We propose a new fingerprint matching technique that uses ridges as features to deal with partial fingerprint images, combining a modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm can effectively meet the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.
Nikitin, P. V.; Savinov, A. N.; Bazhenov, R. I.; Sivandaev, S. V.
2018-05-01
The article describes a method of identifying a person in distance learning systems based on keyboard rhythm. An algorithm for the organization of access control is proposed that implements authentication, identification, and verification of a person using keyboard rhythm. Because biometric characteristics do not exist apart from a particular person, authentication methods based on biometric personal parameters, including those based on keyboard rhythm, can provide improved accuracy, non-repudiation of authorship, and convenience for operators of automated systems in comparison with other methods of conformity checking. Permanent hidden keyboard monitoring makes it possible to detect the substitution of a student and to block the system.
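The verification step can be illustrated with a toy keystroke-rhythm check: a stored profile of mean inter-key intervals is compared with a fresh typing sample, and the session is rejected when the average deviation exceeds a threshold. The distance measure, the threshold, and the timing values are illustrative assumptions, not the article's actual model.

```python
def verify_rhythm(profile, sample, threshold=0.05):
    """profile, sample: aligned lists of mean inter-key intervals (seconds).
    Returns True when the sample is close enough to the stored profile."""
    if len(profile) != len(sample):
        raise ValueError("feature vectors must align")
    mean_abs_dev = sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)
    return mean_abs_dev <= threshold

stored = [0.21, 0.18, 0.25, 0.30]          # enrolled user's profile (toy)
same_user = [0.22, 0.17, 0.26, 0.31]       # small deviations
impostor = [0.10, 0.35, 0.12, 0.50]        # different rhythm
```

Continuous (hidden) monitoring would run this check repeatedly during a session, flagging a possible substitution when it starts failing.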
Method of transient identification based on a possibilistic approach, optimized by genetic algorithm
International Nuclear Information System (INIS)
Almeida, Jose Carlos Soares de
2001-02-01
This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that selects the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. Interpreting the subclasses as fuzzy sets and using the possibilistic approach provides a heuristic for establishing influence zones around the centroids, allowing a 'don't know' answer for unknown transients, that is, transients outside the training set. (author)
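The influence-zone idea can be sketched with a nearest-centroid classifier that answers "don't know" for samples outside every centroid's zone. The centroids, radius, and test vectors below are toy assumptions; in the paper the centroids (and their number) are optimized by a genetic algorithm.

```python
import numpy as np

def classify(x, centroids, labels, radius):
    """Nearest-centroid classification with an influence-zone cutoff."""
    d = np.linalg.norm(centroids - x, axis=1)
    i = int(np.argmin(d))
    # Outside every influence zone -> refuse to classify.
    return labels[i] if d[i] <= radius else "don't know"

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])   # toy subclass centroids
labels = ["transient_A", "transient_B"]
known = classify(np.array([0.5, -0.3]), centroids, labels, radius=2.0)
unknown = classify(np.array([5.0, 5.0]), centroids, labels, radius=2.0)
```

The "don't know" branch is what lets the method reject transients that were never seen in the training set instead of forcing a wrong label.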
Adaptive Algorithm For Identification Of The Environment Parameters In Contact Tasks
International Nuclear Information System (INIS)
Tuneski, Atanasko; Babunski, Darko
2003-01-01
An adaptive algorithm for identification of the unknown parameters of the dynamic environment in contact tasks is proposed in this paper, using the augmented least squares estimation method. An approximate digital simulator of the continuous environment dynamics is derived, i.e., a discrete transfer function is found which has approximately the same characteristics as the continuous environment dynamics. This task is solved using a method named hold equivalence. The general model of the environment dynamics is given, and the case when the environment dynamics is represented by second-order models with parameter uncertainties is considered. (Author)
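The estimation machinery behind such identification can be illustrated with a generic recursive least squares (RLS) update for a linear-in-parameters model y = φᵀθ. The toy two-parameter model and the noise level are assumptions for demonstration, not the paper's environment model.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step (lam = forgetting factor)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)         # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                 # covariance update
    return theta, P

true_theta = np.array([1.5, -0.7])                # unknown parameters (toy)
rng = np.random.default_rng(1)
theta = np.zeros(2)
P = np.eye(2) * 100.0                             # large initial uncertainty
for _ in range(200):
    phi = rng.standard_normal(2)                  # regressor sample
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
```

With a forgetting factor `lam < 1`, the same update tracks slowly varying environment parameters, which is the adaptive aspect the abstract refers to.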
Adaptive Algorithm For Identification Of The Environment Parameters In Contact Tasks
Energy Technology Data Exchange (ETDEWEB)
Tuneski, Atanasko; Babunski, Darko [Faculty of Mechanical Engineering, ' St. Cyril and Methodius' University, Skopje (Macedonia, The Former Yugoslav Republic of)
2003-07-01
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of the best possible matches. This study compares the conventional coding and sorting algorithm most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
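The percentage-of-matches idea can be sketched directly: each antemortem record is scored by the fraction of tooth codes that agree with the postmortem record, and candidates are returned best first. The toy code strings below are assumptions; they do not reproduce the actual seven-code scheme.

```python
def match_percentage(pm, am):
    """Fraction of aligned tooth codes that agree between two records."""
    hits = sum(1 for a, b in zip(pm, am) if a == b)
    return hits / len(pm)

def rank_candidates(pm_record, am_records):
    """Rank antemortem candidates, best match first."""
    scored = [(name, match_percentage(pm_record, rec))
              for name, rec in am_records.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

postmortem = "VVMCCRXV"            # one hypothetical code per tooth position
antemortem = {
    "case_17": "VVMCCRXV",         # identical record
    "case_42": "VVMCRRXC",         # partial agreement
    "case_08": "XXXXXXXX",         # poor agreement
}
ranking = rank_candidates(postmortem, antemortem)
```

Ranking by a simple match fraction is what makes the simplified system fast; the study's finding is that this is usually enough, except for very large record sets or fragmented remains.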
MIDAS: a database-searching algorithm for metabolite identification in metabolomics.
Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle
2014-10-07
A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometers against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSM than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
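A greatly simplified metabolite-spectrum match score can convey the matching step: predicted fragment masses are matched to observed peaks within a mass tolerance, and the candidate is scored by the matched-intensity fraction. The tolerance, masses, and intensities are illustrative assumptions; MIDAS's real scoring additionally weights fragmentation-pathway plausibility.

```python
def msm_score(observed_peaks, predicted_masses, tol=0.01):
    """observed_peaks: list of (m/z, intensity) pairs.
    Returns the fraction of total intensity explained by predicted fragments."""
    total = sum(i for _, i in observed_peaks)
    matched = sum(
        i for mz, i in observed_peaks
        if any(abs(mz - p) <= tol for p in predicted_masses)
    )
    return matched / total if total else 0.0

# Toy spectrum and two candidate metabolites' predicted fragment masses.
peaks = [(85.028, 40.0), (103.039, 35.0), (129.055, 25.0)]
candidate_a = [85.029, 103.040, 129.055]   # explains all peaks
candidate_b = [72.044, 85.029]             # explains one peak
```

Ranking all database metabolites by such a score, then manually validating the top matches, is the overall workflow the abstract describes.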
Directory of Open Access Journals (Sweden)
Chang Kung-Yen
2011-07-01
Background: Database searching is the most frequently used approach for automated peptide assignment and protein inference from tandem mass spectra. The results, however, depend on the sequences in target databases and on the search algorithms. Recently, by using an alternative splicing database, we identified more proteins than with the annotated proteins in Aspergillus flavus. In this study, we aimed at finding a greater number of eligible splice variants based on newly available transcript sequences and the latest genome annotation. The improved database was then used to compare four search algorithms: Mascot, OMSSA, X! Tandem, and InsPecT. Results: The updated alternative splicing database predicted 15,833 putative protein variants, 61% more than the previous results. There was transcript evidence for 50% of the updated genes, compared with the previous 35% coverage. Database searches were conducted using the same set of spectral data, search parameters, and protein database but with different algorithms. The false discovery rates of the peptide-spectrum matches were estimated. Conclusions: We were able to detect dozens of new peptides using the improved alternative splicing database with the recently updated annotation of the A. flavus genome. Unlike the identifications of the peptides and the RefSeq proteins, large variations existed among the putative splice variants identified by different algorithms. Twelve candidate putative isoforms were reported based on consensus peptide-spectrum matches. This suggests that applying multiple search engines effectively reduced possible false positive results and validated the protein identifications from tandem mass spectra using an alternative splicing database.
Parameters identification of photovoltaic models using an improved JAYA optimization algorithm
International Nuclear Information System (INIS)
Yu, Kunjie; Liang, J.J.; Qu, B.Y.; Chen, Xu; Wang, Heshan
2017-01-01
Highlights: • IJAYA algorithm is proposed to identify the PV model parameters efficiently. • A self-adaptive weight is introduced to purposefully adjust the search process. • Experience-based learning strategy is developed to enhance the population diversity. • Chaotic learning method is proposed to refine the quality of the best solution. • IJAYA features superior performance in identifying parameters of PV models. - Abstract: Parameters identification of photovoltaic (PV) models based on measured current-voltage characteristic curves is significant for the simulation, evaluation, and control of PV systems. To accurately and reliably identify the parameters of different PV models, an improved JAYA (IJAYA) optimization algorithm is proposed in the paper. In IJAYA, a self-adaptive weight is introduced to adjust the tendency to approach the best solution and avoid the worst solution at different search stages, which enables the algorithm to approach the promising area at the early stage and implement the local search at the later stage. Furthermore, an experience-based learning strategy is developed and employed randomly to maintain the population diversity and enhance the exploration ability. A chaotic elite learning method is proposed to refine the quality of the best solution in each generation. The proposed IJAYA is used to solve the parameters identification problems of different PV models, i.e., single diode, double diode, and PV module. Comprehensive experimental results and analyses indicate that IJAYA can obtain a highly competitive performance compared with other state-of-the-art algorithms, especially in terms of accuracy and reliability.
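For reference, the baseline JAYA update that IJAYA builds on moves every candidate toward the current best solution and away from the worst. The sketch below runs plain JAYA with greedy selection on a toy sphere objective; the self-adaptive weight, experience-based learning, and chaotic elite learning of IJAYA are not reproduced, and the population size and iteration count are assumptions.

```python
import numpy as np

def jaya_step(pop, fitness, rng):
    """Standard JAYA move: x' = x + r1*(best - |x|) - r2*(worst - |x|)."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))

def sphere(pop):
    """Toy objective standing in for the PV-model fitting error."""
    return np.sum(pop ** 2, axis=1)

rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, size=(20, 3))
init_best = sphere(pop).min()
for _ in range(200):
    fit = sphere(pop)
    cand = jaya_step(pop, fit, rng)
    improved = sphere(cand) < fit        # greedy selection
    pop[improved] = cand[improved]
best_val = sphere(pop).min()
```

In the actual application, the objective would be the RMSE between measured and model-predicted I-V curves rather than the sphere function.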
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
Xia, Youshen; Kamel, Mohamed S
2007-06-01
Identification of a general nonlinear noisy system, viewed as the estimation of a predictor function, is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused using an optimal fusion technique, and the optimally fused data are then incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm converges globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatio-temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
Evaluation of sensor placement algorithms for on-orbit identification of space platforms
Glassburn, Robin S.; Smith, Suzanne Weaver
1994-01-01
Anticipating the construction of the International Space Station, on-orbit modal identification of space platforms through optimally placed accelerometers is an area of recent activity. Unwanted vibrations in the platform could affect the results of planned experiments. It is therefore important that sensors (accelerometers) be strategically placed to identify the amount and extent of these unwanted vibrations, and to validate the mathematical models used to predict the loads and the dynamic response. Because of cost, installation, and data management issues, only a limited number of sensors will be available for placement. This work evaluates and compares four representative sensor placement algorithms for modal identification. Most of the sensor placement work to date has employed only numerical simulations for comparison. This work uses experimental data from a fully instrumented truss structure, one of a series of structures designed for research in dynamic scale-model ground testing of large space structures at NASA Langley Research Center. Results from this comparison show that, for this cantilevered structure, the algorithm based on Guyan reduction rates slightly better than that based on Effective Independence.
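One of the compared methods, Effective Independence (EfI), can be sketched compactly: candidate sensor locations are deleted one at a time, dropping the mode-shape row that contributes least to the Fisher information, until the sensor budget is reached. The random mode-shape matrix and sensor budget below are assumptions for illustration.

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """phi: (candidate_dofs x modes) mode-shape matrix.
    Returns indices of the retained sensor locations."""
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        A = phi[keep]
        # EfI value of each row = diagonal of the projection matrix
        # A (A^T A)^-1 A^T; low-value rows add little information.
        E = np.einsum("ij,jk,ik->i", A, np.linalg.inv(A.T @ A), A)
        keep.pop(int(np.argmin(E)))
    return keep

rng = np.random.default_rng(3)
phi = rng.standard_normal((12, 3))   # 12 candidate DOFs, 3 target modes (toy)
chosen = effective_independence(phi, n_sensors=4)
```

The retained set maximizes (greedily) the determinant of the Fisher information matrix, which is the standard EfI rationale; Guyan-reduction-based placement condenses the mass and stiffness matrices instead.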
Multi-Scale Parameter Identification of Lithium-Ion Battery Electric Models Using a PSO-LM Algorithm
Directory of Open Access Journals (Sweden)
Wen-Jing Shen
2017-03-01
This paper proposes a multi-scale parameter identification algorithm for the lithium-ion battery (LIB) electric model using a combination of particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. Two-dimensional Poisson equations with unknown parameters are used to describe the potential and current density distribution (PDD) of the positive and negative electrodes in the LIB electric model. The model parameters are difficult to determine in simulation because of the nonlinear complexity of the model. In the proposed identification algorithm, PSO is used for the coarse-scale parameter identification and the LM algorithm is applied for the fine-scale parameter identification. The experimental results show that the multi-scale identification not only improves the convergence rate and effectively escapes from the stagnation of PSO, but also overcomes the local minimum entrapment drawback of the LM algorithm. The terminal voltage curves from the PDD model with the identified parameter values are in good agreement with those from the experiments at different discharge/charge rates.
Muon identification algorithms in ATLAS Poster for EPS-HEP 2009
Resende, B; The ATLAS collaboration
2009-01-01
In the midst of the intense activity that will arise from the proton-proton collisions at the LHC, muons will be very useful for spotting rare events of interest. The good resolution expected for their momentum measurement will also make them powerful tools in event reconstruction. Muon identification will thus be a crucial issue in the ATLAS experiment at the LHC. Muon tracks can be reconstructed in the external spectrometer alone, but combining such "stand-alone" tracks with tracks from the inner detector increases the precision and reliability of the reconstructed muon. This is particularly true in the lower part of the pT spectrum, where the inner detector performs better. We present here the various strategies for combined muon identification in the ATLAS experiment. The main algorithms, called Staco and Muid, combine existing tracks in the inner detector and in the muon spectrometer, allowing the best identification of muon tracks. Their efficiency is complet...
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide-sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
GPR identification of voids inside concrete based on the support vector machine algorithm
International Nuclear Information System (INIS)
Xie, Xiongyao; Li, Pan; Qin, Hui; Liu, Lanbo; Nobes, David C
2013-01-01
Voids inside reinforced concrete, which affect structural safety, are identified from ground penetrating radar (GPR) images using a completely automatic method based on the support vector machine (SVM) algorithm. The entire process can be characterized in four steps: (1) the original SVM model is built by training on synthetic GPR data generated by finite-difference time-domain simulation, after data preprocessing, segmentation, and feature extraction. (2) The classification accuracy of different kernel functions is compared with the cross-validation method, and the penalty factor (c) of the SVM and the coefficient (σ²) of the kernel functions are optimized using the grid algorithm and the genetic algorithm. (3) To test the success of classification, this model is then verified and validated by applying it to another set of synthetic GPR data. The result shows a high success rate for classification. (4) This original classifier model is finally applied to a set of real GPR data to identify and classify voids. Before the original model is improved, the result on real data is less than ideal compared with its application to synthetic data. In general, this study shows that the SVM exhibits promising performance in the GPR identification of voids inside reinforced concrete. Nevertheless, the recognition of the shape and distribution of voids may need further improvement. (paper)
Multiple Sclerosis Identification Based on Fractional Fourier Entropy and a Modified Jaya Algorithm
Directory of Open Access Journals (Sweden)
Shui-Hua Wang
2018-04-01
Aim: Currently, identification of multiple sclerosis (MS) by human experts may come up against the problem of "normal-appearing white matter", which causes low sensitivity. Methods: In this study, we present a computer-vision-based approach to identify MS automatically. The proposed method first extracts a fractional Fourier entropy map from a specified brain image. Afterwards, it sends the features to a multilayer perceptron trained by a proposed improved parameter-free Jaya algorithm. We used cost-sensitive learning to handle the imbalanced-data problem. Results: The 10 × 10-fold cross validation showed that our method yielded a sensitivity of 97.40 ± 0.60%, a specificity of 97.39 ± 0.65%, and an accuracy of 97.39 ± 0.59%. Conclusions: We validated by experiments that the proposed improved Jaya performs better than the plain Jaya algorithm and other recent bio-inspired algorithms in terms of classification performance and training speed. In addition, our method is superior to four state-of-the-art MS identification approaches.
Fault Identification Algorithm Based on Zone-Division Wide Area Protection System
Directory of Open Access Journals (Sweden)
Xiaojun Liu
2014-04-01
As the power grid becomes larger and more complex, wide-area protection systems in practical engineering applications are increasingly restricted by the communication level. Based on the concept of the limitedness of wide-area protection systems, a grid with a complex structure is divided in an orderly fashion in this paper, and fault identification and protection actions are executed in each divided zone to reduce the pressure on the communication system. Within a protection zone, a new wide-area protection algorithm based on the positive-sequence fault component directional comparison principle is proposed. Special associated intelligent electronic device (IED) zones containing buses and transmission lines are created according to the installation locations of the IEDs. When a fault occurs, with the help of the fault information collected and shared from associated zones and the fault discrimination principle defined in this paper, the IEDs can identify the fault location and remove the fault according to a predetermined action strategy. The algorithm is not affected by load changes or transition resistance and also adapts well to open-phase operation of the power system. It can be used as a main protection, and it can also serve a back-up protection function. The case study results show that the division method of the wide-area protection system and the proposed algorithm are effective.
Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits
Directory of Open Access Journals (Sweden)
Lieberman Rebecca M
2008-04-01
Background: Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. Methods: This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases by chart review of visits identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose below 3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. Results: We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had an 89% positive predictive value (95% confidence interval, 86–92 for
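The positive predictive value used throughout the abstract is simply the fraction of code-flagged visits that chart review confirms as true cases. A worked example with hypothetical counts (the counts below are assumptions, not the study's data):

```python
def positive_predictive_value(true_positives, flagged_total):
    """PPV = confirmed true cases / all visits flagged by the candidate codes."""
    return true_positives / flagged_total

# Hypothetical validation: 92 of 100 code-flagged visits confirmed by chart review.
ppv = positive_predictive_value(92, 100)
```

Excluding co-diagnosis codes, as the study does for code 250.8, raises PPV by shrinking the flagged denominator faster than the confirmed numerator.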
Directory of Open Access Journals (Sweden)
Ming Yang
2018-03-01
In this paper, an on-line parameter identification algorithm that iteratively computes the numerical values of inertia and load torque is proposed. Since inertia and load torque are strongly coupled variables because of the degenerate-rank problem, it is hard to estimate relatively accurate values for them in cases where load torque variations are present or where a relatively accurate prior knowledge of inertia cannot be obtained. This paper eliminates this problem and realizes ideal online inertia identification regardless of load condition and initial error. The algorithm integrates a full-order Kalman observer and recursive least squares, and introduces adaptive controllers to enhance robustness. It performs well when iteratively computing load torque and moment of inertia. A theoretical sensitivity analysis of the proposed algorithm is conducted. Compared with traditional methods, the validity of the proposed algorithm is demonstrated by simulation and experimental results.
WH-EA: An Evolutionary Algorithm for Wiener-Hammerstein System Identification
Directory of Open Access Journals (Sweden)
J. Zambrano
2018-01-01
Current methods to identify Wiener-Hammerstein systems using the Best Linear Approximation (BLA) involve at least two steps. First, the BLA is divided into the front and back linear dynamics of the Wiener-Hammerstein model. Second, a refitting procedure over all parameters is carried out to reduce modelling errors. In this paper, a novel approach to identify Wiener-Hammerstein systems in a single step is proposed. The approach is based on a customized evolutionary algorithm (WH-EA) that looks for the best BLA split while capturing the process static nonlinearity with high precision. Furthermore, to correct possible errors in the BLA estimation, the locations of poles and zeros are subtly modified within an adequate search space to allow fine-tuning of the model. The performance of the proposed approach is analysed using a demonstration example and a nonlinear system identification benchmark.
A MUSIC-Based Algorithm for Blind User Identification in Multiuser DS-CDMA
Directory of Open Access Journals (Sweden)
M. Reza Soleymani
2005-04-01
Full Text Available A blind scheme based on the multiple-signal classification (MUSIC) algorithm for user identification in a synchronous multiuser code-division multiple-access (CDMA) system is suggested. The scheme is blind in the sense that it does not require prior knowledge of the spreading codes; the spreading codes and the users' powers are acquired by the scheme. Eigenvalue decomposition (EVD) is performed on the received signal, and then all valid possible signature sequences are projected onto the subspaces. However, as a result of this process, some false solutions are also produced, and the ambiguity seems unresolvable. Our approach is to apply a transformation derived from the results of the subspace decomposition to the received signal and then to inspect its statistics. It is shown that the second-order statistics of the transformed signal provide a reliable means for removing the false solutions.
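The subspace test at the heart of MUSIC-style user identification can be illustrated compactly. For brevity this sketch estimates the signal subspace by Gram-Schmidt on noise-free received vectors instead of an EVD of the covariance matrix; the codes and active users are made up:

```python
def subspace_basis(vectors, tol=1e-9):
    """Gram-Schmidt orthonormal basis of the span of received chip vectors
    (stands in for the EVD signal-subspace step of the MUSIC scheme)."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            d = sum(x*y for x, y in zip(w, b))
            w = [x - d*y for x, y in zip(w, b)]
        n = sum(x*x for x in w) ** 0.5
        if n > tol:
            basis.append([x/n for x in w])
    return basis

def residual(code, basis):
    """Norm of the component of a candidate code outside the signal subspace."""
    w = list(code)
    for b in basis:
        d = sum(x*y for x, y in zip(w, b))
        w = [x - d*y for x, y in zip(w, b)]
    return sum(x*x for x in w) ** 0.5

# candidate spreading codes (Walsh codes of length 4); two users are active
codes = {"u1": [1, 1, 1, 1], "u2": [1, -1, 1, -1],
         "u3": [1, 1, -1, -1], "u4": [1, -1, -1, 1]}
rx = [[b1*c1 + b3*c3 for c1, c3 in zip(codes["u1"], codes["u3"])]
      for b1 in (1, -1) for b3 in (1, -1)]        # noise-free received symbols
basis = subspace_basis(rx)
detected = sorted(u for u, c in codes.items() if residual(c, basis) < 1e-6)
```

Codes lying in the signal subspace have (near-)zero residual; with noise, the false-solution problem the abstract mentions arises and the second-order-statistics check is needed.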
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade, research has focused on the development of new sensors that are able to detect malicious substances in the network and on early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for the security enhancement of water supply networks. Many publications deal with the algorithmic development; however, little information exists about the integration within a comprehensive real-time event detection and management system. In the following, the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.
Damage identification on spatial Timoshenko arches by means of genetic algorithms
Greco, A.; D'Urso, D.; Cannizzaro, F.; Pluchino, A.
2018-05-01
In this paper a procedure for the dynamic identification of damage in spatial Timoshenko arches is presented. The proposed approach is based on the calculation of an arbitrary number of exact eigen-properties of a damaged spatial arch by means of the Wittrick-Williams algorithm. The proposed damage model considers a reduction of the volume in a part of the arch and is therefore suitable, unlike most of the dedicated literature, not only for concentrated cracks but also for diffused damaged zones which may involve a loss of mass. Different damage scenarios can be taken into account, with variable location, intensity and extension of the damage as well as number of damaged segments. An optimization procedure, aiming at identifying the damage configuration that minimizes the difference between its eigen-properties and a set of measured modal quantities for the structure, is implemented making use of genetic algorithms. In this context, an initial random population of chromosomes, representing different damage distributions along the arch, is forced to evolve towards the fittest solution. Several applications with different single or multiple damaged zones and boundary conditions confirm the validity and applicability of the proposed procedure, even in the presence of instrumental errors on the measured data.
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Directory of Open Access Journals (Sweden)
Irina Rudeva
2014-12-01
Full Text Available The IMILAST project (‘Intercomparison of Mid-Latitude Storm Diagnostics’) was set up to compare low-level cyclone climatologies derived from a number of objective identification algorithms. This paper is a contribution to that effort, in which we determine the sensitivity of three key aspects of Northern Hemisphere cyclone behaviour [namely the number of cyclones, their intensity (defined here in terms of the central pressure) and their deepening rates] to specific features of the automatic cyclone identification. The sensitivity is assessed with respect to three such features which may be thought to influence the resulting climatology (namely performance in areas of complicated orography, the time of first detection of a cyclone, and the representation of rapidly propagating cyclones). We make use of 13 tracking methods in this analysis. We find that the filtering of cyclones in regions where the topography exceeds 1500 m can significantly change the total number of cyclones detected by a scheme, but has little impact on the cyclone intensity distribution. More dramatically, late identification of cyclones (simulated by the truncation of the first 12 hours of the cyclone life cycle) leads to a large reduction in cyclone numbers over both continents and oceans (by up to 80% and 40%, respectively). Finally, the potential splitting of trajectories at times of fastest propagation has a negligible climatological effect on the geographical distribution of cyclone numbers. Overall, it has been found that the averaged deepening rates and averaged cyclone central pressure are rather insensitive to the specifics of the tracking procedure, being more sensitive to the data set used (as shown in previous studies) and the geographical location of a cyclone.
Energy Technology Data Exchange (ETDEWEB)
Stavrov, Andrei; Yamamoto, Eugene [Rapiscan Systems, Inc., 14000 Mead Street, Longmont, CO, 80504 (United States)
2015-07-01
Radiation Portal Monitors (RPMs) with plastic detectors are the main instruments used for primary border (customs) radiation control. RPMs are widely used because they are simple, reliable, relatively inexpensive and highly sensitive. However, experience using RPMs in various countries has revealed some grave shortcomings. There is a dramatic decrease in the probability of detecting radioactive sources under high suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) present in objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of a new variant of the ASIA-New algorithm (New Advanced Source Identification Algorithm), which was developed by the authors based on experimental test results. It also presents results of different tests and the capability of the new system to overcome the shortcomings stated above. The new electronics and ASIA-New enable RPMs to detect radioactive sources under high background suppression (tested at 15-30%) and to verify the detected NORM (KCl) and artificial isotopes (Co-57, Ba-133 and others). The new variant of ASIA is based on physical principles and does not require extensive special tests to gather statistical data for its parameters, so the system can be easily installed in any RPM with plastic detectors. The algorithm was tested on 1,395 passages of different transports (cars, trucks and trailers) without radioactive sources, as well as on 4,015 passages of these transports with radioactive sources of different activities (Co-57, Ba-133, Cs-137, Co-60, Ra-226, Th-232) and with these sources masked by NORM (K-40).
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem and Mascot, they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results for the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
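A minimal sketch of binomial-probability peak-match scoring in the spirit described above (the exact ProVerB scoring function, which also weights peak intensities, is not reproduced here; `p_random` is an assumed chance-match probability per theoretical peak):

```python
from math import comb, log10

def binomial_match_score(n_theoretical, k_matched, p_random):
    """-log10 of the binomial tail probability of matching at least
    k of n theoretical fragment peaks purely by chance."""
    tail = sum(comb(n_theoretical, i) * p_random**i *
               (1.0 - p_random)**(n_theoretical - i)
               for i in range(k_matched, n_theoretical + 1))
    return -log10(tail)

# a confident identification (15/20 peaks matched) versus a chance-level one
good = binomial_match_score(20, 15, 0.05)
weak = binomial_match_score(20, 3, 0.05)
```

The score grows as the match count becomes less explainable by chance, which is what lets a fixed FDR threshold separate confident identifications from noise.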
Directory of Open Access Journals (Sweden)
Che-Ting Kuo
2015-02-01
Full Text Available This paper introduces a network-based interval type-2 fuzzy inference system (NT2FIS) together with a water-flow-like algorithm (WFA) with dynamic solution agents, for nonlinear system identification and the blind source separation (BSS) problem. The NT2FIS consists of interval type-2 asymmetric fuzzy membership functions and TSK-type consequent parts to enhance performance. The proposed scheme is optimized by the WFA, a new heuristic learning algorithm with dynamic solution agents inspired by the natural behavior of water flow: splitting, moving, merging, evaporation, and precipitation have all been introduced for optimization. Some modifications, including new moving strategies such as the application of tabu search and gradient-descent techniques, are proposed to enhance the performance of the WFA in training NT2FIS systems. Simulation and comparison results for nonlinear system identification and blind signal separation are presented to illustrate the performance and effectiveness of the proposed approach.
Directory of Open Access Journals (Sweden)
D. S. Odnolko
2013-01-01
Full Text Available An algorithm is synthesized for identifying the rotor electromagnetic time constant, stator active resistance and equivalent leakage inductance of an induction motor with a freely rotating rotor. The problem is solved for an induction motor model in the stationary stator frame α-β. The algorithm is based on the recursive least squares method, which ensures high accuracy of the parameter estimates in minimum time. The observer does not assume prior information about the technical data of the machine or the individual parameters of its equivalent circuit. Simulation results demonstrate the effectiveness of the proposed identification method. The flexible structure of the algorithm allows it to be used both for preliminary identification of an induction motor and during operation in a frequency-controlled electric drive with vector control.
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery, and the diffusion phenomenon related to this modeling is described using a fractional-order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm. To ensure the algorithm's convergence to the physical parameters, an initialization method is proposed which takes into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.
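A bare-bones Levenberg-Marquardt fit of a first-order (single RC branch) step response illustrates the identification step; the three-parameter model and all numbers are illustrative stand-ins, far simpler than a fractional-order Randles model:

```python
from math import exp

I_STEP = 2.0  # assumed step current, A

def model(p, t):
    """Hypothetical step response of R0 in series with one R1||C branch."""
    r0, r1, tau = p
    return I_STEP * (r0 + r1 * (1.0 - exp(-t / tau)))

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def lm_fit(ts, vs, p0, iters=80, lam=1e-3):
    """Minimal Levenberg-Marquardt with a forward-difference Jacobian."""
    p = list(p0)
    sse = lambda q: sum((model(q, t) - v) ** 2 for t, v in zip(ts, vs))
    for _ in range(iters):
        r = [v - model(p, t) for t, v in zip(ts, vs)]
        J = []
        for t in ts:
            row = []
            for j in range(3):
                q = list(p)
                h = 1e-6 * max(abs(p[j]), 1e-6)
                q[j] += h
                row.append((model(q, t) - model(p, t)) / h)
            J.append(row)
        JTJ = [[sum(J[k][i]*J[k][j] for k in range(len(J))) for j in range(3)]
               for i in range(3)]
        JTr = [sum(J[k][i]*r[k] for k in range(len(J))) for i in range(3)]
        for i in range(3):
            JTJ[i][i] *= 1.0 + lam             # LM damping on the diagonal
        step = solve3(JTJ, JTr)
        trial = [pi + si for pi, si in zip(p, step)]
        if sse(trial) < sse(p):
            p, lam = trial, lam * 0.5          # accept: behave like Gauss-Newton
        else:
            lam *= 10.0                        # reject: back off toward gradient
    return p

true_p = (0.05, 0.03, 8.0)                     # R0 [ohm], R1 [ohm], tau [s]
ts = [0.5 * k for k in range(60)]
vs = [model(true_p, t) for t in ts]            # noise-free synthetic data
est = lm_fit(ts, vs, (0.08, 0.05, 4.0))
```

The quality of the initial guess `p0` governs convergence to the physical parameters, which is exactly why the paper devotes itself to the initialization step.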
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
Directory of Open Access Journals (Sweden)
Sajad Sabzi
2018-03-01
Full Text Available Accurate classification of fruit varieties in processing factories and during post-harvesting applications is a challenge that has been widely studied. This paper presents a novel approach to automatic fruit identification applied to three common varieties of oranges (Citrus sinensis L.), namely Bam, Payvandi and Thomson. A total of 300 color images were used for the experiments, 100 samples for each orange variety, which are publicly available. After segmentation, 263 parameters, including texture, color and shape features, were extracted from each sample using image processing. Among them, the 6 most effective features were automatically selected by using a hybrid approach consisting of an artificial neural network and particle swarm optimization algorithm (ANN-PSO). Then, three different classifiers were applied and compared: hybrid artificial neural network – artificial bee colony (ANN-ABC); hybrid artificial neural network – harmony search (ANN-HS); and k-nearest neighbors (kNN). The experimental results show that the hybrid approaches outperform the results of kNN. The average correct classification rate of ANN-HS was 94.28%, while ANN-ABC achieved 96.70% accuracy with the available data, contrasting with the 70.9% baseline accuracy of kNN. Thus, this new proposed methodology provides a fast and accurate way to classify multiple fruit varieties, which can be easily implemented in processing factories. The main contribution of this work is that the method can be directly adapted to other use cases, since the selection of the optimal features and the configuration of the neural network are performed automatically using metaheuristic algorithms.
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire the coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm as well as the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. The index-based methods showed different performances in different regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicate that the acquired band combinations can extract water extents accurately and stably in Landsat imagery.
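One of the classic water spectral indices such studies compare is McFeeters' NDWI, computable per pixel from the green and near-infrared bands (the reflectance values below are made up for illustration):

```python
def ndwi(green, nir):
    """McFeeters' Normalized Difference Water Index: water reflects more
    in the green band than in the NIR, so positive values typically
    indicate open water."""
    return (green - nir) / (green + nir)

# toy surface reflectances: water is green-bright/NIR-dark, vegetation the opposite
water = ndwi(0.10, 0.02)
veg = ndwi(0.08, 0.40)
is_water = [v > 0 for v in (water, veg)]
```

Index-based methods like this apply one fixed band combination everywhere, which is why their performance varies by region and why the paper tunes band-combination coefficients with meta-heuristics instead.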
Li, Hongkun; He, Changbo; Malekian, Reza; Li, Zhixiong
2018-04-19
pulsation signal. A genetic algorithm (GA) is used to obtain optimal parameters for this SR system to improve its feature enhancement performance. The analysis of an experimental signal shows the validity of the proposed method for the enhancement and identification of weak defect characteristics. Finally, a strain test is carried out to further verify the accuracy and reliability of the analysis result obtained from the pressure pulsation signal.
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...
Damage Identification of Trusses with Elastic Supports Using FEM and Genetic Algorithm
Directory of Open Access Journals (Sweden)
Nam-Il Kim
2013-01-01
Full Text Available A computationally efficient damage identification technique for truss structures with elastic supports is proposed based on the force method. To transform a truss with supports into an equivalent free-standing model without supports, novel zero-length dummy members are employed. General equilibrium equations and kinematic relations, in which the reaction forces and the displacements at the elastic supports are taken into account, are clearly formulated. The compatibility equations in terms of forces, in which the flexibilities of the elastic supports are considered, are explicitly presented using the singular value decomposition (SVD) technique. Both member and reaction forces are obtained simultaneously and directly. Then, all nodal displacements, including those of constrained nodes, are back-calculated from the member and reaction forces. Next, the micro-genetic algorithm (MGA) is used to identify the site and extent of multiple damages in truss structures. In order to verify the superiority of the current study, numerical solutions are presented for planar and space truss models with and without elastic supports. The numerical results indicate that the computational effort required by this study is significantly lower than that of the displacement method.
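The search step can be sketched with a tiny elitist evolutionary loop standing in for the micro-GA; the frequency-versus-damage model below is a toy stand-in for the force-method truss solver, with made-up frequencies:

```python
import random

FREQS = [12.0, 31.0, 55.0]   # assumed undamaged natural frequencies, Hz

def predicted(d):
    """Toy modal model: each damage factor softens one mode (no FE solver)."""
    return [f * (1.0 - 0.4 * di) ** 0.5 for f, di in zip(FREQS, d)]

def mismatch(d, target):
    """Squared error between 'measured' and predicted frequencies."""
    return sum((p - t) ** 2 for p, t in zip(predicted(d), target))

def micro_search(target, n_var=3, pop=8, gens=400, sigma=0.05):
    """Tiny elitist population with Gaussian perturbations around the best
    individual -- an evolutionary stand-in for the paper's micro-GA."""
    random.seed(3)  # fixed seed for a reproducible sketch
    best = [random.random() for _ in range(n_var)]
    for _ in range(gens):
        cand = [best] + [[min(1.0, max(0.0, b + random.gauss(0.0, sigma)))
                          for b in best] for _ in range(pop - 1)]
        best = min(cand, key=lambda d: mismatch(d, target))
    return best

true_damage = [0.3, 0.0, 0.15]        # member stiffness-loss fractions
target = predicted(true_damage)       # plays the role of measured modal data
found = micro_search(target)
```

In the real method each objective evaluation runs the force-method model, so the MGA's very small population is what keeps the identification computationally cheap.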
Directory of Open Access Journals (Sweden)
Guangjun Wang
2012-01-01
Full Text Available Background. Acupoints (belonging to the 12 meridians) which have the same names are symmetrically distributed on the body. It has been proved that acupoints have certain biological specificities different from the normal parts of the body. However, there is little evidence that acupoints which have the same name and are located bilaterally and symmetrically have lateralized specificity. Thus, researching the lateralized specificity and the relationship between left-side and right-side acupuncture is of special importance. Methodology and Principal Findings. The mean blood flux (MBF) in both Hegu acupoints was measured by a Moor full-field laser perfusion imager. Using a system identification algorithm, the output distribution in different groups was acquired, based on different acupoint stimulations and a standard signal input. It is demonstrated that after stimulation of the right Hegu acupoint by needle, the output value of MBF in the contralateral Hegu acupoint was strongly amplified, while after acupuncturing the left Hegu acupoint, the output value of MBF in either Hegu acupoint was amplified moderately. Conclusions and Significance. This paper indicates that the Hegu acupoint has lateralized specificity. After stimulating the ipsilateral Hegu acupoint, symmetry breaking is produced in contrast to contralateral Hegu acupoint stimulation.
International Nuclear Information System (INIS)
Perez, L; Autrique, L; Gillet, M
2008-01-01
The aim of this paper is to investigate the thermal diffusivity identification of a multilayered material dedicated to fire protection. In a military framework, fire protection needs to meet specific requirements, and operational protective systems must be constantly improved in order to keep up with the development of new weapons. In the specific domain of passive fire protection, intumescent coatings can be an effective solution on the battlefield. Intumescent materials have the ability to swell up when they are heated, building a thick multilayered coating which provides efficient thermal insulation to the underlying material. Because the heat aggressions (fire or explosion) leading to the intumescent phenomena involve high temperatures, the mathematical model describing the evolution of the system state cannot be linearized. A previous sensitivity analysis has shown that the thermal diffusivity of the multilayered intumescent coating is a key parameter for validating the predictive numerical tool and therefore for thermal protection optimisation. A conjugate gradient method is implemented in order to minimise the quadratic cost function related to the error between predicted and measured temperatures. This regularisation algorithm is well adapted to a large number of unknown parameters.
Directory of Open Access Journals (Sweden)
Dong-Sup Lee
2015-01-01
Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding the statistical independence of signal mixtures, and it has been successfully applied to myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when using a conventional ICA technique for vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative process that reorders the extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems of complex structures, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
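The correlation-based reordering step that resolves ICA's permutation and sign ambiguity can be sketched as follows (the signals and near-source references are synthetic; a real pipeline would first run an ICA to obtain `separated`):

```python
import math

def corrcoef(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def reorder_sources(separated, references):
    """Match each reference (a signal measured on or near a source) with
    the separated signal of largest |correlation|, fixing the permutation
    ambiguity much as the paper's iterative scheme does."""
    order, used = [], set()
    for ref in references:
        best = max(((abs(corrcoef(s, ref)), i)
                    for i, s in enumerate(separated) if i not in used))[1]
        used.add(best)
        order.append(best)
    return [separated[i] for i in order]

t = [0.01 * k for k in range(500)]
s1 = [math.sin(7.0 * x) for x in t]                      # near-source ref 1
s2 = [math.copysign(1.0, math.sin(3.0 * x)) for x in t]  # near-source ref 2
# pretend ICA output: permuted and sign-flipped versions of the sources
separated = [[-x for x in s2], list(s1)]
ordered = reorder_sources(separated, [s1, s2])
```

Using the correlation magnitude makes the assignment insensitive to the sign flips that ICA cannot determine.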
Directory of Open Access Journals (Sweden)
Madlung Johannes
2010-05-01
Full Text Available Abstract Background Often high-quality MS/MS spectra of tryptic peptides do not match any database entry because of only partially sequenced genomes, and therefore protein identification requires de novo peptide sequencing. To achieve protein identification of the economically important but still unsequenced plant-pathogenic oomycete Plasmopara halstedii, we first evaluated the performance of three different de novo peptide sequencing algorithms applied to protein digests of standard proteins using a quadrupole TOF (QStar Pulsar i). Results The performance order of the algorithms was PEAKS online > PepNovo > CompNovo. In summary, PEAKS online correctly predicted 45% of the measured peptides for a protein test data set. All three de novo peptide sequencing algorithms were used to identify MS/MS spectra of tryptic peptides of an unknown 57 kDa protein of P. halstedii. We found ten de novo sequenced peptides that showed homology to a Phytophthora infestans protein, a closely related organism to P. halstedii. Employing a second complementary approach, verification of the peptide prediction and protein identification was performed by creating degenerate primers for RACE-PCR, which led to an ORF of 1,589 bp for a hypothetical phosphoenolpyruvate carboxykinase. Conclusions Our study demonstrated that the identification of proteins within minute amounts of sample material improved significantly by combining sensitive LC-MS methods with different de novo peptide sequencing algorithms. In addition, this is the first study that verified protein prediction from MS data by also employing a second complementary approach, in which RACE-PCR led to the identification of a novel elicitor protein in P. halstedii.
Directory of Open Access Journals (Sweden)
Chiu-Kuo LIANG
2015-06-01
Full Text Available One of the research areas in RFID systems is tag anti-collision protocols: how to reduce the identification time for a given number of tags in the field of an RFID reader. There are two types of tag anti-collision protocols for RFID systems: tree-based algorithms and slotted-Aloha-based algorithms. Many anti-collision algorithms have been proposed in recent years, especially tree-based protocols. However, there remain challenges in enhancing system throughput and stability, because the underlying technologies face various performance limitations when the network density is high. In particular, tree-based protocols suffer from long identification delays. Recently, a Hybrid Hyper Query Tree (H2QT) protocol, a tree-based approach, was proposed with the aim of speeding up tag identification in large-scale RFID systems. The main idea of H2QT is to track the tag response and try to predict the distribution of tag IDs in order to reduce collisions. In this paper, we propose a pre-detection tree-based algorithm, called the Adaptive Pre-Detection Broadcasting Query Tree algorithm (APDBQT), to avoid such unnecessary queries. The proposed APDBQT protocol reduces not only collisions but idle cycles as well, by using a pre-detection scheme and an adjustable slot size mechanism. The simulation results show that the proposed technique provides superior performance in high-density environments. It is shown that APDBQT is effective in terms of increasing system throughput and minimizing identification delay.
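For contrast with the pre-detection scheme discussed, a plain binary query-tree identification round, the baseline such protocols improve on, can be simulated in a few lines (the tag IDs are made up):

```python
def query_tree(tags):
    """Binary query-tree identification: the reader broadcasts a prefix;
    on a collision (>1 matching tag) it recurses on prefix+'0' and
    prefix+'1'; a single match identifies a tag; zero matches is an idle
    query. Returns identified IDs and the number of queries issued.
    A textbook baseline, not the APDBQT protocol itself."""
    identified, queries, stack = [], 0, [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [t for t in tags if t.startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])
        elif len(matching) > 1:
            stack.append(prefix + "1")
            stack.append(prefix + "0")
    return identified, queries

tags = ["0010", "0111", "1100", "1101"]
ids, n_queries = query_tree(tags)
```

Here 4 tags cost 11 queries, including idle ones on empty branches; pre-detection schemes like APDBQT aim to skip exactly those collision and idle queries.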
Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume
2014-08-01
Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Plassard, C.; Ladeyn, I.; Staunton, S.
2004-01-01
Mycorrhizal infection is known to improve phosphate nutrition and water supply of higher plants. It has been reported both to increase the uptake of potentially toxic pollutant elements and to protect plants against toxic effects. Little is known about the effect of mycorrhizal infection on the dynamics of radioactive pollutants in soil-plant systems. The aim of this study was to compare the root uptake and root-shoot transfer of three radio-isotopes with contrasting chemical properties (85Sr, 95mTc and 137Cs) in mycorrhizal and control, non-mycorrhizal plants. The plant studied was Pinus pinaster and the associated ecto-mycorrhizal fungus was Rhizopogon roseolus (strain R18-2). Plants were grown under anoxic conditions for 3 months then transferred to thin layers of autoclaved soil and allowed to grow for four months. After this period, the rhizotrons were dismantled, and plant tissue analysed. Biomass, nutrient content (K, P, N, Ca) and activities of each isotope in roots, shoots and stems were measured, and the degree of mycorrhizal infection assessed. The transfer factors decreased in the order Tc>Sr>Cs, as expected from the degree of immobilisation by soil. No effect of mycorrhizal infection on root uptake was observed for Sr. The shoot activity concentration of Tc was decreased by mycorrhizal infection, but root uptake correlated well with mycelial soil surface area. In contrast, Cs shoot activity was greater in mycorrhizal than in control plants. The uptake and root-to-shoot distribution shall be discussed in relation to nutrient dynamics. (author)
Taralla, David
2013-01-01
The field of reinforcement learning recently received the contribution by Ernst et al. (2013), "Monte Carlo search algorithm discovery for one player games", which introduced a new way to conceive completely new algorithms. Moreover, it brought an automatic method to find the best algorithm to use in a particular situation using a multi-arm bandit approach. We address here the problem of best arm identification. The main problem is that the generated algorithm space (i.e. the arm space) can be qui...
Zero-G experimental validation of a robotics-based inertia identification algorithm
Bruggemann, Jeremy J.; Ferrel, Ivann; Martinez, Gerardo; Xie, Pu; Ma, Ou
2010-04-01
The need to efficiently identify the changing inertial properties of on-orbit spacecraft is becoming more critical as satellite on-orbit services, such as refueling and repairing, become increasingly aggressive and complex. This need stems from the fact that a spacecraft's control system relies on knowledge of the spacecraft's inertia parameters. However, the inertia parameters may change during flight for reasons such as fuel usage, payload deployment or retrieval, and docking/capturing operations. New Mexico State University's Dynamics, Controls, and Robotics Research Group has proposed a robotics-based method of identifying unknown spacecraft inertia properties. Previous methods require firing known thrusts and then measuring the thrust and the resulting velocity and acceleration changes. The new method utilizes the concept of momentum conservation, while employing a robotic device powered by renewable energy to excite the state of the satellite; thus, it requires no fuel usage or force and acceleration measurements. The method has been well studied in theory and demonstrated by simulation. However, its experimental validation is challenging because a 6-degree-of-freedom motion in a zero-gravity condition is required. This paper presents an on-going effort to test the inertia identification method onboard the NASA zero-G aircraft. The design and capability of the test unit are discussed in addition to the flight data. This paper also introduces the design and development of an air-bearing-based test used to partially validate the method, as well as the approach used to obtain reference values for the test system's inertia parameters that can be compared with the algorithm results.
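The momentum-conservation idea reduces, in a single-axis illustration, to solving one conservation equation; the numbers below are invented, and the real method handles the full 6-DOF rigid-body momentum:

```python
def identify_body_inertia(arm_momentum, body_rate):
    """Single-axis sketch of the momentum-conservation method: with the
    free-floating system starting at rest, total angular momentum stays
    zero, so I_body * w_body + h_arm = 0 and the unknown body inertia
    follows from the arm's known momentum and the measured body rate.
    A 1-DOF illustration with made-up numbers, not the 6-DOF algorithm."""
    return -arm_momentum / body_rate

h_arm = 0.4      # N*m*s, angular momentum imparted by the robotic device
w_body = -0.016  # rad/s, counter-rotation measured by the gyro
I_body = identify_body_inertia(h_arm, w_body)  # kg*m^2
```

No thruster firing or acceleration measurement appears anywhere: only the arm's commanded motion and rate-gyro readings, which is the advantage the abstract emphasizes.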
Nikolic, Dejan; Stojkovic, Nikola; Lekic, Nikola
2018-04-09
Obtaining the complete operational picture of the maritime situation in an Exclusive Economic Zone (EEZ) that lies over the horizon (OTH) requires the integration of data obtained from various sensors. These sensors include high-frequency surface-wave radar (HFSWR), the satellite automatic identification system (SAIS) and the land automatic identification system (LAIS). The algorithm proposed in this paper utilizes radar tracks obtained from a network of HFSWRs, which are already processed by a multi-target tracking algorithm, and associates SAIS and LAIS data to the corresponding radar tracks, thus forming an integrated data pair. During the integration process, all HFSWR targets in the vicinity of AIS data are evaluated, and the one with the highest matching factor is used for data association. On the other hand, if there are multiple AIS reports in the vicinity of a single HFSWR track, the algorithm still makes only one data pair, consisting of the AIS and HFSWR data with the highest mutual matching factor. During the design and testing, special attention was given to the latency of AIS data, which can be very high in the EEZs of developing countries. The algorithm was designed, implemented and tested in a real working environment. The testing environment is located in the Gulf of Guinea and includes a network of two HFSWRs, several coastal sites with LAIS receivers, and SAIS data obtained from an SAIS data provider.
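The highest-mutual-matching-factor pairing can be sketched as a greedy association: each AIS report proposes its best radar track, and conflicts are resolved in favor of the highest score. The inverse-distance matching factor below is a placeholder assumption (the paper's matching factor is not specified here):

```python
def associate(radar_tracks, ais_reports, max_dist=2.0):
    """Greedy sketch: each AIS report is matched to the radar track with the
    highest matching factor; if several AIS reports compete for one track,
    only the pair with the highest mutual score is kept."""
    def score(t, a):
        d = ((t[0] - a[0]) ** 2 + (t[1] - a[1]) ** 2) ** 0.5
        return 1.0 / (1.0 + d) if d <= max_dist else 0.0

    candidates = []
    for ai, a in enumerate(ais_reports):
        s, ti = max((score(t, a), ti) for ti, t in enumerate(radar_tracks))
        if s > 0.0:
            candidates.append((s, ti, ai))
    pairs, used_tracks, used_ais = [], set(), set()
    for s, ti, ai in sorted(candidates, reverse=True):
        if ti not in used_tracks and ai not in used_ais:
            pairs.append((ti, ai))
            used_tracks.add(ti)
            used_ais.add(ai)
    return pairs
```

Note how the one-pair-per-track rule falls out of the `used_tracks` set: a second AIS report near an already-paired track is simply dropped.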
Genovese, Mariangela; Napoli, Ettore
2013-05-01
The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance is also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
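The per-pixel probabilistic modelling behind GMM background segmentation can be illustrated with a pared-down, single-Gaussian-per-pixel sketch (the real OpenCV GMM keeps several Gaussians per pixel and handles mode creation/deletion; all parameter values below are illustrative):

```python
import numpy as np

class RunningGaussianBackground:
    """Single-Gaussian-per-pixel illustration of probabilistic background
    modelling: a pixel is foreground when it deviates from its running
    Gaussian by more than k standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var         # Mahalanobis-style test
        a = np.where(fg, 0.0, self.alpha)          # update background pixels only
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return fg
```

The hardware optimizations described in the paper target exactly the kind of per-pixel multiply/compare operations visible here, which is why truncated multipliers pay off.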
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)
2010-03-01
This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
Energy Technology Data Exchange (ETDEWEB)
Grogan, Brandon R [ORNL
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
Directory of Open Access Journals (Sweden)
Qing Guo
2015-04-01
Full Text Available A gait identification method for a lower extremity exoskeleton is presented in order to identify the gait sub-phases in human-machine coordinated motion. First, a sensor layout for the exoskeleton is introduced. Taking the difference between human lower limb motion and human-machine coordinated motion into account, the walking gait is divided into five sub-phases: 'double standing', 'right leg swing and left leg stance', 'double stance with right leg front and left leg back', 'right leg stance and left leg swing', and 'double stance with left leg front and right leg back'. The sensors, which include shoe pressure sensors, knee encoders, and thigh and calf gyroscopes, are used to measure the contact force of the foot and the knee joint angle and angular velocity. Then, the five sub-phases of the walking gait are identified by a C4.5 decision tree algorithm according to the fused sensor information. Simulation results for the gait division show that the proposed algorithm achieves reliable identification accuracy. Through an exoskeleton control experiment, a division into five sub-phases for the human-machine coordinated walk is proposed. The experimental results verify this gait division and identification method: they allow the hydraulic cylinders to retract ahead of time and improve the maximal walking velocity when the exoskeleton follows the person's motion.
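The five-sub-phase division can be written out as explicit rules over foot-contact state; this hand-written rule set is only an illustration of the decision boundaries a C4.5 tree would learn from the fused pressure/encoder/gyroscope data (the sensor abstraction is assumed):

```python
def gait_subphase(left_contact, right_contact, left_front):
    """Return one of the five walking sub-phases from foot-contact flags and,
    for double stance, which foot is in front (None = feet side by side)."""
    if left_contact and right_contact:
        if left_front is None:
            return "double standing"
        return ("double stance, left front / right back" if left_front
                else "double stance, right front / left back")
    if left_contact and not right_contact:
        return "right leg swing, left leg stance"
    if right_contact and not left_contact:
        return "left leg swing, right leg stance"
    raise ValueError("both feet off the ground is not a walking sub-phase")
```

A learned tree differs in that thresholds on continuous pressure and knee-angle signals, rather than clean boolean flags, define these branches.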
Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation
Directory of Open Access Journals (Sweden)
Suk-Ju Kang
2016-12-01
This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user, and thereby cannot be used in a multi-user system. Even when they can track the eyes of multiple users, their detection accuracy is low and they cannot identify the users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.
Finch, Holmes W; Davis, Andrew; Dean, Raymond S
2015-03-01
The accurate and early identification of individuals with pervasive conditions such as attention deficit hyperactivity disorder (ADHD) is crucial to ensuring that they receive appropriate and timely assistance and treatment. Heretofore, identification of such individuals has proven somewhat difficult, typically involving clinical decision making based on descriptions and observations of behavior, in conjunction with the administration of cognitive assessments. The present study reports on the use of a sensory-motor battery in conjunction with a recursive partitioning computer algorithm, boosted trees, to develop a prediction heuristic for identifying individuals with ADHD. Results of the study demonstrate that this method is able to do so with accuracy rates of over 95%, much higher than the popular logistic regression model against which it was compared. Implications of these results for practice are provided.
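Boosting over recursive partitioning can be sketched with AdaBoost on one-split decision stumps, a small stand-in for the boosted-tree method used in the study (the study's actual features, software, and tree depth are not specified here):

```python
import math

def train_stump(X, y, w):
    """Best weighted-error stump (feature, threshold, polarity)."""
    best = (0, 0.0, 1, float("inf"))               # feat, thr, polarity, error
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[f] >= thr else -pol) != yi)
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, y, rounds=10):
    """Boosted decision stumps; labels y must be in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        f, thr, pol, err = train_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, thr, pol))
        for i, (xi, yi) in enumerate(zip(X, y)):   # re-weight toward mistakes
            pred = pol if xi[f] >= thr else -pol
            w[i] *= math.exp(-alpha * yi * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    def predict(x):
        vote = sum(a * (p if x[f] >= t else -p) for a, f, t, p in ensemble)
        return 1 if vote >= 0 else -1
    return predict
```

Each round re-weights the misclassified cases, which is what lets boosted ensembles outperform a single logistic regression on non-linear decision boundaries.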
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for building positioning and control systems for small-scale mechanization means in industrial plants, based on radio frequency identification methods, which will be the basis for creating highly efficient intelligent systems for controlling product movement in industrial enterprises. The main standards applied in the fields of product movement control automation and radio frequency identification are considered. The article reviews modern publications and product movement control automation systems developed by domestic and foreign manufacturers. It describes the developed algorithm for positioning small-scale mechanization means in an industrial enterprise. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
International Nuclear Information System (INIS)
Svatek, J.
1999-12-01
During the development and implementation of supporting software for the control room and emergency control centre at the Dukovany nuclear power plant, it appeared necessary to validate the input quantities in order to assure the operating reliability of the software tools. Therefore, the development of software for validating the measured quantities from the plant data sources was initiated, and the software had to be debugged and verified. The report contains the proposal for, and description of, the verification tests for testing the algorithms that automatically identify errors in the observed quantities of the NPP by means of in-house validation software. In particular, the algorithms treated serve to validate the hot leg temperature at primary circuit loop no. 2 or 4 of the Dukovany-2 reactor unit, using data from the URAN and VK3 information systems recorded during 3 different days. (author)
Kaleebu, Pontiano; Kitandwe, Paul Kato; Lutalo, Tom; Kigozi, Aminah; Watera, Christine; Nanteza, Mary Bridget; Hughes, Peter; Musinguzi, Joshua; Opio, Alex; Downing, Robert; Mbidde, Edward Katongole
2018-02-27
The World Health Organization recommends that countries conduct two-phase evaluations of HIV rapid tests (RTs) in order to come up with the best algorithms. In this report, we present the first such evaluation in Uganda, involving both blood-based and oral-fluid-based RTs. The role of weak positive (WP) bands on the accuracy of the individual RTs and of the algorithms was also investigated. In total, 11 blood-based and 3 oral-transudate kits were evaluated. Altogether, 2746 participants from seven sites, covering the four different regions of Uganda, participated. Two enzyme immunoassays (EIAs) run in parallel were used as the gold standard. The performance and cost of the different algorithms were calculated, with a pre-determined price cut-off of either cheaper than, or within 20% of, the price of the current algorithm of Determine + Statpak + Unigold. In the second phase, the three best algorithms selected in phase I were used at the point of care for purposes of quality control, using finger-stick whole blood. We identified three algorithms as having performed better and met the cost requirements: Determine + SD Bioline + Statpak and Determine + Statpak + SD Bioline, both with the same sensitivity and specificity of 99.2% and 99.1% respectively, and Determine + Statpak + Insti, with a sensitivity and specificity of 99.1% and 99% respectively. There were 15 other algorithms that performed better than the current one but were priced more than 20% above it. None of the 3 oral mucosal transudate kits were suitable for inclusion in an algorithm because of their low sensitivities. Band intensity affected the performance of individual RTs but not the final algorithms. We have come up with three algorithms we recommend for public or Government procurement based on accuracy and cost. In case one algorithm is preferred, we recommend replacing Unigold, the current tie-breaker, with SD Bioline. We further recommend that all the 18 algorithms that have shown better performance than the current one are made
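The accuracy of a serial (confirmatory) RT algorithm, in which a specimen is reported positive only if every test in the chain is positive, follows directly from the per-test accuracies under a conditional-independence assumption. The figures in the example are illustrative, not the Uganda study's data:

```python
def serial_algorithm_performance(tests):
    """Sensitivity and specificity of a serial testing algorithm.
    `tests` is a list of (sensitivity, specificity) pairs; the algorithm
    is positive only when all tests in the chain are positive."""
    sens = 1.0
    false_pos = 1.0          # P(every test falsely positive | uninfected)
    for se, sp in tests:
        sens *= se
        false_pos *= (1.0 - sp)
    return sens, 1.0 - false_pos
```

Serial chaining trades a small loss of sensitivity (the product of the individual sensitivities) for a large gain in specificity, which is why the choice and ordering of kits in the algorithm matters.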
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor
Directory of Open Access Journals (Sweden)
Liang Zhang
2015-08-01
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high-precision Generalized Iterative Closest Points (GICP) is utilized to register the point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy.
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.
Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing
2015-08-14
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high-precision Generalized Iterative Closest Points (GICP) is utilized to register the point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy.
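The bidirectional (cross-checked) KNN matching step can be sketched without the FLANN index by brute-force nearest-neighbour search over descriptor arrays; the L2 metric and toy descriptors below are assumptions for illustration:

```python
import numpy as np

def mutual_nn_matches(desc1, desc2):
    """Bidirectional nearest-neighbour matching sketch: a pair (i, j) is
    kept only if j is i's nearest neighbour in desc2 AND i is j's nearest
    neighbour in desc1 (brute-force L2 stands in for the FLANN index)."""
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    fwd = d2.argmin(axis=1)          # best match in desc2 for each desc1 row
    bwd = d2.argmin(axis=0)          # best match in desc1 for each desc2 row
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

The cross-check discards one-sided matches before RANSAC, which is exactly why a bidirectional search reduces the outlier load on the motion estimation.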
Directory of Open Access Journals (Sweden)
Ujwalla Gawande
2013-01-01
Recent times have witnessed many advancements in the fields of unimodal and multimodal biometrics, typically in the areas of security, privacy, and forensics. Even for the best unimodal biometric systems, it is often not possible to achieve a higher recognition rate. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as non-universality, offering lower false acceptance and higher genuine acceptance rates. More reliable recognition performance is achievable because multiple pieces of evidence of the same identity are available. The work presented in this paper focuses on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using a Haar-wavelet-based technique. A novel feature-level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system on the extracted features. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. The simulation results show that our algorithm has a higher recognition rate and a much lower false rejection rate than existing approaches.
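Feature-level fusion with a Mahalanobis-style score can be sketched as follows. This is a simplified illustration (a diagonal-covariance Mahalanobis distance, toy features rather than the paper's Haar-wavelet features, and no SVM stage):

```python
import numpy as np

def fuse_and_match(iris_feat, finger_feat, gallery):
    """Feature-level fusion sketch: concatenate iris and fingerprint feature
    vectors and score each enrolled template by a diagonal-covariance
    Mahalanobis distance estimated from the gallery."""
    probe = np.concatenate([iris_feat, finger_feat])
    G = np.asarray(gallery, dtype=float)           # (n_templates, n_features)
    var = G.var(axis=0) + 1e-6                     # per-feature variance
    dists = np.sqrt((((G - probe) ** 2) / var).sum(axis=1))
    return int(dists.argmin()), dists              # best-matching template
```

Normalizing each feature by its gallery variance keeps the iris and fingerprint components on comparable scales before fusion, which is the main point of using Mahalanobis rather than plain Euclidean distance.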
International Nuclear Information System (INIS)
Victoria R, M.A.; Morales S, J.B.
2005-01-01
In this work, the modified optimal volume ellipsoid (MOVE) algorithm is applied to a reduced-order model, consisting of 5 differential equations, of the core of a boiling water reactor (BWR), with the purpose of estimating the parameters that model the dynamics. The feasibility of an analysis that computes the global dynamic parameters determining the stability of the system, together with the uncertainty of the estimate, is examined. The MOVE algorithm is a method for parametric system identification, in particular for parameter set estimation (PSE). It seeks the smallest-volume ellipsoid guaranteed to contain the true values of the model parameters. PSE-MOVE is a recursive identification method that can handle the sign of the noise and weight it accordingly; the ellipsoid representation has the advantage of easy mathematical handling in the computer, and the results are very useful for robust control design since, in general, the smaller the volume of the ellipsoid, the better the performance of the controlled system. A comparison with other methods reported in the literature for estimating the decay ratio (DR) of a BWR is presented. (Author)
Directory of Open Access Journals (Sweden)
E. Hetmaniok
2012-12-01
A procedure based on the Artificial Bee Colony (ABC) algorithm for solving the two-phase axisymmetric one-dimensional inverse Stefan problem with a boundary condition of the third kind is presented in this paper. Solving the considered problem consists in reconstructing the function describing the heat transfer coefficient appearing in the third-kind boundary condition in such a way that the reconstructed values of temperature are as close as possible to the temperature measurements taken at selected points of the solid. A crucial part of the solution method consists in minimizing a functional, which is executed with the aid of one of the swarm intelligence algorithms, the ABC algorithm.
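The ABC minimization step can be sketched with a minimal employed/onlooker/scout implementation over box bounds. In the paper the functional measures the misfit between reconstructed and measured temperatures; the parameter values below are illustrative, and the fitness transform assumes a non-negative cost:

```python
import random

def abc_minimize(f, bounds, colony=20, limit=10, cycles=200, seed=1):
    """Minimal Artificial Bee Colony sketch for minimizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_source = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    sources = [rand_source() for _ in range(colony)]
    costs = [f(s) for s in sources]
    trials = [0] * colony

    def try_improve(i):
        k = rng.randrange(colony - 1)
        k = k if k < i else k + 1                  # partner source, k != i
        j = rng.randrange(dim)
        cand = sources[i][:]
        cand[j] += rng.uniform(-1, 1) * (cand[j] - sources[k][j])
        lo, hi = bounds[j]
        cand[j] = min(max(cand[j], lo), hi)
        c = f(cand)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(colony):                    # employed bees
            try_improve(i)
        fits = [1.0 / (1.0 + c) for c in costs]
        for _ in range(colony):                    # onlookers favour good sources
            try_improve(rng.choices(range(colony), weights=fits)[0])
        for i in range(colony):                    # scouts abandon stale sources
            if trials[i] > limit:
                sources[i] = rand_source()
                costs[i] = f(sources[i])
                trials[i] = 0
    best = min(range(colony), key=costs.__getitem__)
    return sources[best], costs[best]
```

In the inverse-problem setting, each "source" would encode the sought heat transfer coefficient parameters, and f would run the direct Stefan solver and return the temperature misfit.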
Directory of Open Access Journals (Sweden)
Xianju Li
2015-07-01
For the identification of forested landslides, most studies focus on knowledge-based and pixel-based analysis (PBA) of LiDAR data, while few studies have examined (semi-)automated methods and object-based image analysis (OBIA). Moreover, most of them focus on soil-covered areas with gentle hillslopes. In bedrock-covered mountains with steep and rugged terrain, landslides are so difficult to identify that there is currently no research on whether combining semi-automated methods and OBIA with only LiDAR derivatives could be more effective. In this study, a semi-automatic object-based landslide identification approach was developed and implemented in a forested area, the Three Gorges of China. Comparisons of OBIA and PBA, of two different machine learning algorithms, and of their respective sensitivity to feature selection (FS) were first investigated. Based on the classification result, the landslide inventory was finally obtained according to (1) inclusion of holes encircled by the landslide body; (2) removal of isolated segments; and (3) delineation of closed envelope curves for landslide objects by a manual digitizing operation. The proposed method achieved the following: (1) the filter features of surface roughness were applied for the first time in calculating object features, and proved useful; (2) FS improved classification accuracy and reduced the number of features; (3) the random forest algorithm achieved higher accuracy and was less sensitive to FS than a support vector machine; (4) compared to PBA, OBIA was more sensitive to FS, remarkably reduced computing time, and depicted more contiguous terrain segments; (5) based on the classification result, with an overall accuracy of 89.11% ± 0.03%, the obtained inventory map was consistent with the reference landslide inventory map, with a position mismatch value of 9%. The outlined approach would be helpful for forested landslide identification in steep and rugged terrain.
Li, Nan; Zhu, Xiufang
2017-04-01
Cultivated land resources are key to ensuring food security. Timely and accurate access to cultivated land information is conducive to scientific planning of food production and management policies. The GaoFen 1 (GF-1) images have high spatial resolution and abundant texture information and thus can be used to identify fragmented cultivated land. In this paper, an object-oriented artificial bee colony algorithm is proposed for extracting cultivated land from GF-1 images. Firstly, the GF-1 image was segmented with the eCognition software, and some samples from the segments were manually labeled into 2 types (cultivated land and non-cultivated land). Secondly, the artificial bee colony (ABC) algorithm was used to search for classification rules based on the spectral and texture information extracted from the image objects. Finally, the extracted classification rules were used to identify the cultivated land area in the image. The experiment was carried out in the Hongze area, Jiangsu Province, using wide field-of-view sensor imagery from the GF-1 satellite. The total precision of the classification result was 94.95%, and the precision for cultivated land was 92.85%. The results show that the object-oriented ABC algorithm can overcome the defect of insufficient spectral information in GF-1 images and obtain high precision in cultivated land identification.
International Nuclear Information System (INIS)
Almeida, Jose Carlos S. de; Schirru, Roberto; Pereira, Claudio M.N.A.; Universidade Federal, Rio de Janeiro, RJ
2002-01-01
This work describes a possibilistic approach to transient identification, based on the minimum centroids set method proposed in previous work and optimized by a genetic algorithm. The idea behind this method is to split the complex classification problem into small and simple ones, so that classification performance can be increased. In order to accomplish that, a genetic algorithm is used to learn, from realistic simulated data, the optimized time partitions for which the robustness and correctness of the classification are maximized. The use of a possibilistic classification approach yields natural and consistent classification rules, leading naturally to a good heuristic for handling the 'don't know' response in the case of an unrecognized transient, which is fairly desirable in transient classification systems where safety is critical. Application of the proposed approach to a nuclear transient identification problem reveals the good capability of the genetic algorithm to learn optimized possibilistic classification rules for efficient diagnosis, including the 'don't know' response. The results obtained are shown and commented upon. (author)
Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.
2017-12-01
Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
Peng, Wei; Wang, Jianxin; Zhao, Bihai; Wang, Lusheng
2015-01-01
Protein complexes play a significant role in understanding the underlying mechanisms of most cellular functions. Recently, many researchers have explored computational methods to identify protein complexes from protein-protein interaction (PPI) networks. One group of researchers focuses on detecting local dense subgraphs that correspond to protein complexes by considering local neighbors. The drawback of this kind of approach is that the global information of the networks is ignored. Some methods, such as the Markov Clustering algorithm (MCL) and PageRank-Nibble, have been proposed to find protein complexes based on the random walk technique, which can exploit the global structure of networks. However, these methods ignore the inherent core-attachment structure of protein complexes and treat adjacent nodes equally. In this paper, we design a weighted PageRank-Nibble algorithm which assigns each adjacent node a different probability, and propose a novel method named WPNCA to detect protein complexes from PPI networks using the weighted PageRank-Nibble algorithm and the core-attachment structure. Firstly, WPNCA partitions the PPI networks into multiple dense clusters using the weighted PageRank-Nibble algorithm. Then the cores of these clusters are detected, and the remaining proteins in the clusters are selected as attachments to form the final predicted protein complexes. The experiments on yeast data show that WPNCA outperforms the existing methods in terms of both accuracy and p-value. The software for WPNCA is available at "http://netlab.csu.edu.cn/bioinfomatics/weipeng/WPNCA/download.html".
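The weighted random walk underlying PageRank-Nibble-style clustering can be sketched with a personalized PageRank power iteration, where the walker follows edges with probability proportional to their weights and teleports back to a seed protein (the teleport rate and iteration count below are illustrative):

```python
import numpy as np

def weighted_personalized_pagerank(W, seed, alpha=0.15, iters=100):
    """Power-iteration sketch of weighted personalized PageRank.
    W is a non-negative weighted adjacency matrix (every node needs at
    least one edge); `seed` is the index of the seed node."""
    W = np.asarray(W, dtype=float)
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    e = np.zeros(len(W))
    e[seed] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = alpha * e + (1 - alpha) * p @ P  # teleport + weighted walk step
    return p
```

A Nibble-style sweep would then sort nodes by this score (divided by degree) to carve out a dense cluster around the seed; assigning neighbours different transition probabilities via the weights is exactly the refinement the abstract describes.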
A Level-2 trigger algorithm for the identification of muons in the ATLAS Muon Spectrometer
Di Mattia, A; Dos Anjos, A; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, J A C; Boisvert, V; Bosman, M; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Conde-Muíño, P; De Santo, A; Díaz-Gómez, M; Dosil, M; Ellis, Nick; Emeliyanov, D; Epp, B; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kabana, S; Khomich, A; Kilvington, G; Konstantinidis, N P; Kootz, A; Lowe, A; Luminari, L; Maeno, T; Masik, J; Meessen, C; Mello, A G; Merino, G; Moore, R; Morettini, P; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Panikashvili, N; Parodi, F; Pasqualucci, E; Pérez-Réale, V; Pinfold, J L; Pinto, P; Qian, Z; Resconi, S; Rosati, S; Sánchez, C; Santamarina-Rios, C; Scannicchio, D A; Schiavi, C; Segura, E; De Seixas, J M; Sivoklokov, S Yu; Soluk, R A; Stefanidis, E; Sushkov, S S; Sutton, M; Tapprogge, Stefan; Thomas, E; Touchard, F; Venda-Pinto, B; Vercesi, V; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; Computing In High Energy Physics
2005-01-01
The ATLAS Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted features. The "µFast" algorithm performs the standalone feature extraction, providing a first reduction of the muon event rate from Level-1. It confirms muon track candidates with a precise measurement of the muon momentum. The algorithm is designed to be both conceptually simple and fast, so as to be readily implemented in the demanding online environment in which the Level-2 selection code will run. Nevertheless, its physics performance approaches, in some cases, that of the offline reconstruction algorithms. This paper describes the implemented algorithm together with the software techniques employed to increase its timing p...
Identification of Water Diffusivity of Inorganic Porous Materials Using Evolutionary Algorithms
Czech Academy of Sciences Publication Activity Database
Kočí, J.; Maděra, J.; Jerman, M.; Keppert, M.; Svora, Petr; Černý, R.
2016-01-01
Roč. 113, č. 1 (2016), s. 51-66 ISSN 0169-3913 Institutional support: RVO:61388980 Keywords : Evolutionary algorithms * Water transport * Inorganic porous materials * Inverse analysis Subject RIV: CA - Inorganic Chemistry Impact factor: 2.205, year: 2016
Directory of Open Access Journals (Sweden)
Žigić Aleksandar D.
2005-01-01
Full Text Available Experimental verifications of two optimized adaptive digital signal processing algorithms implemented in two preset-time count rate meters were performed according to appropriate standards. A random pulse generator, realized using a personal computer, was used as an artificial radiation source for preliminary system tests and performance evaluations of the proposed algorithms. Then measurement results for background radiation levels were obtained. Finally, measurements with a natural radiation source, the radioisotope 90Sr-90Y, were carried out. Measurement results, conducted without and with radioisotopes for the specified errors of 10% and 5%, agreed well with theoretical predictions.
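The preset-error design of such count rate meters follows directly from Poisson counting statistics: the relative standard error of N accumulated counts is 1/√N, so preset errors of 10% and 5% correspond to 100 and 400 counts respectively. A minimal sketch (plain Python; this is the underlying statistics, not the paper's optimized adaptive algorithms):

```python
import math

def counts_for_relative_error(sigma_rel):
    """Counts needed so that the Poisson relative error 1/sqrt(N) <= sigma_rel."""
    return math.ceil(1.0 / sigma_rel ** 2)

# Preset errors of 10% and 5%, as in the measurements described above
print(counts_for_relative_error(0.10))  # 100
print(counts_for_relative_error(0.05))  # 400
```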
Fault Identification Algorithm Based on Zone-Division Wide Area Protection System
Xiaojun Liu; Youcheng Wang; Hub Hu
2014-01-01
As the power grid grows larger and more complicated, wide-area protection systems in practical engineering applications are increasingly restricted by the communication level. Based on the concept of the limitedness of wide-area protection systems, the grid with complex structure is divided orderly in this paper, and fault identification and protection action are executed in each divided zone to reduce the pressure on the communication system. In a protection zone, a new wide-area...
International Nuclear Information System (INIS)
Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.
2010-01-01
MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potentially harmful consequences of these events, particularly as triggers of disruptions, it is important to have the means of detecting them automatically. In this paper, the results of various algorithms to automatically identify MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos that have captured these events by exploring the whole JET database of images, as a preliminary step towards the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
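As an illustration of the morphological-operator and Hu-moment pipeline, the sketch below cleans a frame with a binary opening and flags strongly elongated blobs, the characteristic band-like MARFE signature. The threshold and the elongation test are hypothetical stand-ins for the classifier actually trained at JET:

```python
import numpy as np
from scipy import ndimage

def hu_first_two(img):
    """First two Hu moment invariants of a grayscale image (numpy only)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu = lambda p, q: (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return hu1, hu2

def marfe_candidate(frame, thresh=0.5):
    """Morphological cleaning followed by a shape test on the bright region.
    The elongation criterion below is an illustrative stand-in, not the
    identifier described in the paper."""
    mask = ndimage.binary_opening(frame > thresh, iterations=1)
    if mask.sum() == 0:
        return False
    hu1, hu2 = hu_first_two(mask.astype(float))
    # An elongated band has hu2 close to hu1^2; a round blob has hu2 ~ 0
    return hu2 / hu1 ** 2 > 0.5
```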
Mochnacki, Bohdan; Majchrzak, Ewa; Paruch, Marek
2018-01-01
In the paper the soft tissue freezing process is considered. The tissue sub-domain is subjected to the action of a cylindrical cryoprobe. Thermal processes proceeding in the domain considered are described using the dual-phase lag equation (DPLE) supplemented by the appropriate boundary and initial conditions. The DPLE results from a generalization of the Fourier law in which two lag times are introduced (the relaxation and thermalization times). The aim of the research is the identification of these parameters on the basis of cooling curves measured at a set of points selected from the tissue domain. To solve the problem, evolutionary algorithms are used. The paper contains the mathematical model of the tissue freezing process, brief information concerning the numerical solution of the basic problem, a description of the inverse problem solution and the results of computations.
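The inverse problem can be sketched with a simple elitist (μ+λ) evolution strategy. The forward model below is a hypothetical two-parameter rise-and-decay curve standing in for the numerical DPLE solution, which the paper solves properly:

```python
import math, random

random.seed(5)

def cooling_curve(tau_q, tau_T, times):
    """Hypothetical stand-in for the DPLE solution: a rise-and-decay curve
    shaped by two lag-time-like parameters. Real work solves the PDE."""
    return [math.exp(-t / tau_q) * (1.0 - math.exp(-t / max(tau_T, 1e-6)))
            for t in times]

def identify_lags(times, measured, pop=20, gens=60):
    """(mu+lambda) evolution strategy: mutate candidate (tau_q, tau_T) pairs
    and keep the best, minimizing the misfit to the measured cooling curve."""
    def misfit(c):
        model = cooling_curve(c[0], c[1], times)
        return sum((m - d) ** 2 for m, d in zip(model, measured))
    parents = [(random.uniform(0.5, 20), random.uniform(0.5, 20))
               for _ in range(pop)]
    for _ in range(gens):
        children = [(max(0.1, p[0] + random.gauss(0, 0.5)),
                     max(0.1, p[1] + random.gauss(0, 0.5)))
                    for p in parents for _ in range(2)]
        parents = sorted(parents + children, key=misfit)[:pop]
    return parents[0], misfit(parents[0])
```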
International Nuclear Information System (INIS)
Chen Zhou; Qiu-Nan Tong; Zhang Cong-Cong; Hu Zhan
2015-01-01
Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is quite effective and useful in studies of two molecules having common mass peaks, which makes a traditional single mass spectrometer unfeasible. (paper)
CERN. Geneva
2018-01-01
Particle identification (PID) plays a crucial role in LHCb analyses. Combining information from LHCb subdetectors allows one to distinguish between various species of long-lived charged and neutral particles. PID performance directly affects the sensitivity of most LHCb measurements. Advanced multivariate approaches are used at LHCb to obtain the best PID performance and control systematic uncertainties. This talk highlights recent developments in PID that use innovative machine learning techniques, as well as novel data-driven approaches which ensure that PID performance is well reproduced in simulation.
Pang, Jack X Q; Ross, Erin; Borman, Meredith A; Zimmer, Scott; Kaplan, Gilaad G; Heitman, Steven J; Swain, Mark G; Burak, Kelly W; Quan, Hude; Myers, Robert P
2015-09-11
Epidemiologic studies of alcoholic hepatitis (AH) have been hindered by the lack of a validated International Classification of Disease (ICD) coding algorithm for use with administrative data. Our objective was to validate coding algorithms for AH using a hospitalization database. The Hospital Discharge Abstract Database (DAD) was used to identify consecutive adults (≥18 years) hospitalized in the Calgary region with a diagnosis code for AH (ICD-10, K70.1) between 01/2008 and 08/2012. Medical records were reviewed to confirm the diagnosis of AH, defined as a history of heavy alcohol consumption, elevated AST and/or ALT, bilirubin ≥34 μmol/L, and elevated INR. Subgroup analyses were performed according to the diagnosis field in which the code was recorded (primary vs. secondary) and AH severity. Algorithms that incorporated ICD-10 codes for cirrhosis and its complications were also examined. Of 228 potential AH cases, 122 patients had confirmed AH, corresponding to a positive predictive value (PPV) of 54% (95% CI 47-60%). PPV improved when AH was the primary versus a secondary diagnosis (67% vs. 21%). Algorithms incorporating codes for ascites (PPV 75%; 95% CI 63-86%), cirrhosis (PPV 60%; 47-73%), and gastrointestinal hemorrhage (PPV 62%; 51-73%) had improved performance; however, the prevalence of these diagnoses in confirmed AH cases was low (29-39%). In conclusion, the low PPV of the diagnosis code for AH suggests that caution is necessary if this hospitalization database is used in large-scale epidemiologic studies of this condition.
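The headline figures can be reproduced from the reported counts. The sketch below computes the PPV and a Wilson score interval; the exact interval method used by the authors is not stated, so the Wilson choice is an assumption:

```python
import math

def ppv_wilson(tp, n, z=1.96):
    """PPV point estimate with a Wilson score 95% confidence interval."""
    p = tp / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half

# 122 confirmed cases out of 228 flagged by ICD-10 K70.1
p, low, high = ppv_wilson(122, 228)
print(round(p * 100), round(low * 100), round(high * 100))  # 54 47 60
```

This matches the reported PPV of 54% (95% CI 47-60%).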
Energy Technology Data Exchange (ETDEWEB)
Ulbrich, Uwe; Grieger, Jens [Freie Univ. Berlin (Germany). Inst. of Meteorology; Leckebusch, Gregor C. [Birmingham Univ. (United Kingdom). School of Geography, Earth and Environmental Sciences] [and others]
2013-02-15
For Northern Hemisphere extra-tropical cyclone activity, the dependency of a potential anthropogenic climate change signal on the identification method applied is analysed. This study investigates the impact of the algorithm used on the change signal, not the robustness of the climate change signal itself. Using a single transient AOGCM simulation as standard input for eleven state-of-the-art identification methods, the patterns of model-simulated present-day climatologies are found to be close to those computed from re-analysis, independent of the method applied. Although differences exist in the total number of cyclones identified, the climate change signals (IPCC SRES A1B) in the model run considered are largely similar between methods for all cyclones. Taking into account all tracks, decreasing numbers are found in the Mediterranean, the Arctic (in the Barents and Greenland Seas), the mid-latitude Pacific and North America. The patterns of change are even more similar if only the most severe systems are considered: the methods reveal a coherent, statistically significant increase in frequency over the eastern North Atlantic and North Pacific. We found that the differences between the methods considered are largely due to the different role of weaker systems in the specific methods. (orig.)
Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou
2017-07-01
In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.
Avvaru, Akshay Kumar; Sowpati, Divya Tej; Mishra, Rakesh Kumar
2018-03-15
Microsatellites or Simple Sequence Repeats (SSRs) are short tandem repeats of DNA motifs present in all genomes. They have long been used for a variety of purposes in the areas of population genetics, genotyping, marker-assisted selection and forensics. Numerous studies have highlighted their functional roles in genome organization and gene regulation. Though several tools are currently available to identify SSRs from genomic sequences, they have significant limitations. We present a novel algorithm called PERF for extremely fast and comprehensive identification of microsatellites from DNA sequences of any size. PERF is several fold faster than existing algorithms and uses up to 5-fold less memory. It provides a clean and flexible command-line interface to change the default settings, and produces output in an easily parseable tab-separated format. In addition, PERF generates an interactive and stand-alone HTML report with charts and tables for easy downstream analysis. PERF is implemented in the Python programming language. It is freely available on PyPI under the package name perf_ssr, and can be installed directly using pip or easy_install. The documentation of PERF is available at https://github.com/rkmlab/perf. The source code of PERF is deposited in GitHub at https://github.com/rkmlab/perf under an MIT license. tej@ccmb.res.in. Supplementary data are available at Bioinformatics online.
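For readers unfamiliar with SSR detection, a naive regex scan conveys what PERF computes, though not how; PERF's published algorithm is far faster and more complete than this illustration:

```python
import re

def find_ssrs(seq, min_unit=1, max_unit=6, min_len=12):
    """Naive microsatellite scan: report tandem repeats of 1-6 bp motifs
    spanning at least `min_len` bases as (start, end, motif) tuples.
    Illustrative only -- not PERF's algorithm."""
    hits = []
    for unit in range(min_unit, max_unit + 1):
        for m in re.finditer(r"([ACGT]{%d})\1+" % unit, seq):
            if m.end() - m.start() >= min_len:
                hits.append((m.start(), m.end(), m.group(1)))
    return hits
```

For example, a 16 bp (AT)n tract embedded in flanking sequence is reported with its coordinates and motif.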
Rosenfeld, D.; Hu, J.; Zhang, P.; Snyder, J.; Orville, R. E.; Ryzhkov, A.; Zrnic, D.; Williams, E.; Zhang, R.
2017-12-01
A methodology to track the evolution of the hydrometeors and electrification of convective cells is presented and applied to various convective clouds, from warm showers to supercells. The input radar data are obtained from the polarimetric NEXRAD weather radars. The information on cloud electrification is obtained from Lightning Mapping Arrays (LMA). Determining the development time and height of the hydrometeors and electrification requires tracking the evolution and lifecycle of convective cells. A new methodology for Multi-Cell Identification and Tracking (MCIT) is presented in this study. This new algorithm is applied to time series of radar volume scans. A cell is defined as a local maximum in the Vertically Integrated Liquid (VIL), and the echo area is divided between cells using a watershed algorithm. The tracking of cells between radar volume scans is done by identifying the two cells in consecutive radar scans that have the maximum common VIL. The vertical profile of the polarimetric radar properties is used for constructing the time-height cross section of the cell properties around the peak reflectivity as a function of height. The LMA sources that occur within the cell area are integrated as a function of height as well for each time step, as determined by the radar volume scans. The result of the tracking can provide insights into the evolution of storms, hydrometeor types, precipitation initiation and cloud electrification under different thermodynamic, aerosol and geographic conditions. The details of the MCIT algorithm, its products and their performance for different types of storm are described in this poster.
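The association step of MCIT, pairing cells in consecutive scans by maximum common VIL, can be sketched directly; the cell labelling itself, done with a watershed in the poster, is assumed given here:

```python
import numpy as np

def match_cells(labels_prev, labels_curr, vil):
    """MCIT-style association: pair each current cell with the previous cell
    sharing the largest common VIL (VIL summed over the overlap area).
    `labels_*` are integer cell maps (0 = background); `vil` is the VIL field."""
    matches = {}
    for c in np.unique(labels_curr):
        if c == 0:
            continue
        best, best_vil = None, 0.0
        for p in np.unique(labels_prev):
            if p == 0:
                continue
            common = vil[(labels_curr == c) & (labels_prev == p)].sum()
            if common > best_vil:
                best, best_vil = p, common
        matches[int(c)] = best
    return matches
```

A cell that drifts between scans is thus linked to whichever predecessor it shares the most VIL-weighted area with, rather than by centroid distance alone.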
Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun
2017-03-05
In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this is invalid when the information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimation results for the different source parameters are close to each other with different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison of simulation and experiment cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method. The confidence intervals from linear Tikhonov-PSO are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval at some probability levels can additionally be given by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
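A toy version of the hybrid, under a strongly simplified one-dimensional Gaussian forward model (an assumption, not the paper's dispersion model): PSO searches over the source location, while the strength at each candidate location follows from the closed-form zero-order Tikhonov solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x_sensor, s, sigma=2.0):
    # Forward-model column: unit-strength concentration at each sensor
    # for a source at position s (illustrative Gaussian shape)
    return np.exp(-(x_sensor - s) ** 2 / (2 * sigma ** 2))

def tikhonov_strength(A, b, lam=1e-3):
    # Zero-order Tikhonov solution q = (A^T A + lam)^-1 A^T b (scalar unknown)
    return float(A @ b / (A @ A + lam))

def identify(x_sensor, b, n_particles=20, iters=60):
    """Hybrid sketch: PSO over the source location; the strength at each
    candidate follows from the linear Tikhonov solution."""
    def cost(s):
        q = tikhonov_strength(kernel(x_sensor, s), b)
        return np.sum((q * kernel(x_sensor, s) - b) ** 2)
    pos = rng.uniform(0, 10, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pcost = np.array([cost(s) for s in pos])
    for _ in range(iters):
        g = pbest[np.argmin(pcost)]
        vel = (0.7 * vel + 1.5 * rng.random(n_particles) * (pbest - pos)
                         + 1.5 * rng.random(n_particles) * (g - pos))
        pos = pos + vel
        c = np.array([cost(s) for s in pos])
        improved = c < pcost
        pbest[improved], pcost[improved] = pos[improved], c[improved]
    s = pbest[np.argmin(pcost)]
    return s, tikhonov_strength(kernel(x_sensor, s), b)
```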
Carlton, A.; Cahoy, K.
2015-12-01
Reliability of geostationary communication satellites (GEO ComSats) is critical to many industries worldwide. The space radiation environment poses a significant threat and manufacturers and operators expend considerable effort to maintain reliability for users. Knowledge of the space radiation environment at the orbital location of a satellite is of critical importance for diagnosing and resolving issues resulting from space weather, for optimizing cost and reliability, and for space situational awareness. For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to acquire and analyze archived data from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms, collectively called SEER (System Event Evaluation Routine), to statistically analyze power amplifier current and temperature telemetry by identifying deviations from nominal operations or other events and trends of interest. This paper focuses on our work in progress, which currently includes methods for detection of jumps ("spikes", outliers) and step changes (changes in the local mean) in the telemetry. We then examine available space weather data from the NOAA GOES and the NOAA-computed Kp index and sunspot numbers to see what role, if any, it might have played. By combining the results of the algorithm for many components, the spacecraft can be used as a "sensor" for the space radiation environment. Similar events occurring at one time across many component telemetry streams may be indicative of a space radiation event or system-wide health and safety concern. Using SEER on representative datasets of telemetry from Inmarsat and Intelsat, we find events that occur across all or many of
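Two detectors of the kind SEER combines can be sketched in a few lines: a robust (median/MAD) spike test and a single change-point search for step changes in the mean. These are generic illustrations, not SEER's actual routines:

```python
import numpy as np

def detect_jumps(x, k=5.0):
    """Flag samples deviating from the series median by more than k robust
    standard deviations (MAD scaled by 1.4826) -- a simple spike test."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-9
    return np.where(np.abs(x - med) > k * 1.4826 * mad)[0]

def detect_step(x, min_seg=5):
    """Locate the single change point that best splits the series into two
    constant-mean segments (minimum pooled squared error)."""
    best_i, best_err = None, np.inf
    for i in range(min_seg, len(x) - min_seg):
        err = np.var(x[:i]) * i + np.var(x[i:]) * (len(x) - i)
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```

Applied to an amplifier-current channel, the spike test flags transient upsets while the change-point search flags persistent shifts in the operating level.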
Directory of Open Access Journals (Sweden)
Saïda Bedoui
2013-01-01
Full Text Available This paper addresses the problem of simultaneous identification of linear discrete-time delay multivariable systems. This problem involves both the estimation of the time delays and of the dynamic parameter matrices. In fact, we suggest a new formulation of this problem that defines the time delay and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. Then, we use this formulation to propose a new method to identify the time delays and the parameters of these systems using the least-squares approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method. An application of the developed approach to a compact disc player arm is also suggested in order to validate the simulation results.
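A simplified sketch of the idea: for the hypothetical first-order model y[t] = a·y[t-1] + b·u[t-d], solve a least-squares problem for each candidate delay and keep the delay with the smallest residual. (The paper embeds the delay and parameters in a single extended estimated vector; the grid over delays here is a simplification.)

```python
import numpy as np

def identify_delay_ls(u, y, max_delay=10):
    """Estimate (delay d, parameters a, b) for y[t] = a*y[t-1] + b*u[t-d]
    by least squares over each candidate delay, keeping the best fit."""
    n = len(y)
    best = None
    for d in range(1, max_delay + 1):
        # Rows t = d .. n-1: regressors [y[t-1], u[t-d]], target y[t]
        Phi = np.column_stack([y[d - 1:-1], u[:n - d]])
        target = y[d:]
        theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        err = np.sum((Phi @ theta - target) ** 2)
        if best is None or err < best[0]:
            best = (err, d, theta)
    _, d, (a, b) = best
    return d, a, b
```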
Fuel spill identification by gas chromatography -- genetic algorithms/pattern recognition techniques
International Nuclear Information System (INIS)
Lavine, B.K.; Moores, A.J.; Faruque, A.
1998-01-01
Gas chromatography and pattern recognition methods were used to develop a potential method for typing jet fuels so that a spill sample in the environment can be traced to its source. The test data consisted of 256 gas chromatograms of neat jet fuels. 31 fuels that had undergone weathering in a subsurface environment were correctly identified by type using discriminants developed from the gas chromatograms of the neat jet fuels. Coalescing poorly resolved peaks, which occurred during preprocessing, diminished the resolution and hence the information content of the GC profiles. Nevertheless, a genetic algorithm was able to extract enough information from these profiles to correctly classify the chromatograms of weathered fuels. This suggests that cheaper and simpler GC instruments can be used to type jet fuels.
Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.
2015-08-01
A computational model is developed for retrieving the positions and emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the concentration of the pollutants. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test case domain (1000 m × 1000 m × 50 m) and experimental data, such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
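The approach can be sketched with a plain genetic algorithm minimizing the misfit between modelled and observed concentrations. The ground-level plume formula below is a deliberately simplified stand-in (no reflection terms or stability-class sigmas), and all parameter ranges and GA settings are illustrative:

```python
import math, random

random.seed(3)

def plume(q, xs, ys, x, y, u=2.0, a=0.3):
    """Very simplified ground-level Gaussian plume; real studies use full
    Pasquill-Gifford dispersion coefficients and reflection terms."""
    if x <= xs:
        return 0.0
    sig = a * (x - xs)
    return (q / (u * sig * math.sqrt(2 * math.pi))
            * math.exp(-(y - ys) ** 2 / (2 * sig ** 2)))

def fitness(cand, receptors, obs):
    q, xs, ys = cand
    return sum((plume(q, xs, ys, x, y) - c) ** 2
               for (x, y), c in zip(receptors, obs))

def ga_retrieve(receptors, obs, pop_size=40, gens=80):
    """Plain GA: elitism, blend crossover, Gaussian mutation."""
    def rand_ind():
        return [random.uniform(1, 50),    # strength q
                random.uniform(0, 5),     # source x
                random.uniform(-5, 5)]    # source y
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda c: fitness(c, receptors, obs))
        elite = scored[:pop_size // 4]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            w = random.random()
            children.append([w * g1 + (1 - w) * g2 + random.gauss(0, 0.2)
                             for g1, g2 in zip(p1, p2)])
        pop = elite + children
    return min(pop, key=lambda c: fitness(c, receptors, obs))
```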
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
...... simulate an evolutionary process where the goal is to evolve solutions by means of crossover, mutation, and selection based on their quality (fitness) with respect to the optimization problem at hand. Evolutionary algorithms (EAs) are highly relevant for industrial applications, because they are capable...... of handling problems with non-linear constraints, multiple objectives, and dynamic components – properties that frequently appear in real-world problems. This thesis presents research in three fundamental areas of EC; fitness function design, methods for parameter control, and techniques for multimodal...... population and many generations, which essentially turns the problem into a series of related static problems. To our surprise, the control problem could easily be solved when optimized like this. To further examine this, we compared the EA with a particle swarm and a local search approach, which we......
Parameter identification of ZnO surge arrester models based on genetic algorithms
Energy Technology Data Exchange (ETDEWEB)
Bayadi, Abdelhafid [Laboratoire d'Automatique de Setif, Departement d'Electrotechnique, Faculte des Sciences de l'Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia, Setif 19000 (Algeria)
2008-07-15
The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed in their behaviour when subjected to fast-front impulse currents. The difficulties with these models reside essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible series of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)
Wang, Deling; Li, Jia-Rui; Zhang, Yu-Hang; Chen, Lei; Huang, Tao; Cai, Yu-Dong
2018-03-12
Breast cancer is one of the most common malignancies in women. The patient-derived tumor xenograft (PDX) model is a cutting-edge approach for drug research on breast cancer. However, PDX still exhibits differences from original human tumors, thereby challenging the molecular understanding of tumorigenesis. In particular, gene expression changes after tissues are transplanted from human to mouse model. In this study, we propose a novel computational method incorporating several machine learning algorithms, including Monte Carlo feature selection (MCFS), random forest (RF), and rough set-based rule learning, to identify genes with significant expression differences between PDX and original human tumors. First, 831 breast tumors, including 657 PDX and 174 human tumors, were collected. Based on MCFS and RF, 32 genes were then identified to be informative for the prediction of PDX and human tumors and can be used to construct a prediction model. The prediction model exhibits a Matthews correlation coefficient value of 0.777. Seven interpretable interactions among the informative genes were detected based on the rough set-based rule learning. Furthermore, the seven interpretable interactions can be well supported by previous experimental studies. Our study not only presents a method for identifying informative genes with differential expression but also provides insights into the mechanism through which gene expression changes after being transplanted from human tumor into mouse model. This work would be helpful for research and drug development for breast cancer.
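The idea of ranking genes by their contribution to the PDX-vs-human prediction can be illustrated with a cheap cousin of the MCFS/RF importance used in the paper: permutation importance on a nearest-centroid classifier, shown here on synthetic data with one truly informative feature:

```python
import numpy as np

rng = np.random.default_rng(7)

def nearest_centroid_acc(X, y, Xt, yt):
    """Accuracy of a two-class nearest-centroid classifier on a test set."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(Xt - c1, axis=1)
            < np.linalg.norm(Xt - c0, axis=1)).astype(int)
    return (pred == yt).mean()

def permutation_importance(X, y, Xt, yt):
    """Importance of each feature = accuracy drop when that feature is
    shuffled in the test set (a simple stand-in for MCFS/RF importance)."""
    base = nearest_centroid_acc(X, y, Xt, yt)
    imps = []
    for j in range(X.shape[1]):
        Xp = Xt.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        imps.append(base - nearest_centroid_acc(X, y, Xp, yt))
    return np.array(imps)
```

In the synthetic setting below, only feature 0 carries class signal, so shuffling it destroys the accuracy while shuffling the noise features changes little.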
Dalton, J.B.; Bove, D.J.; Mladinich, C.S.; Rockwell, B.W.
2004-01-01
A scheme to discriminate and identify materials having overlapping spectral absorption features has been developed and tested based on the U.S. Geological Survey (USGS) Tetracorder system. The scheme has been applied to remotely sensed imaging spectroscopy data acquired by the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) instrument. This approach was used to identify the minerals calcite, epidote, and chlorite in the upper Animas River watershed, Colorado. The study was motivated by the need to characterize the distribution of calcite in the watershed and assess its acid-neutralizing potential with regard to acidic mine drainage. Identification of these three minerals is difficult because their diagnostic spectral features are all centered at 2.3 μm, and have similar shapes and widths. Previous studies overestimated calcite abundance as a result of these spectral overlaps. The use of a reference library containing synthetic mixtures of the three minerals in varying proportions was found to simplify the task of identifying these minerals when used in conjunction with a rule-based expert system. Some inaccuracies in the mineral distribution maps remain, however, due to the influence of a fourth spectral component, sericite, which exhibits spectral absorption features at 2.2 and 2.4 μm that overlap the 2.3-μm absorption features of the other three minerals. Whereas the endmember minerals calcite, epidote, chlorite, and sericite can be identified by the method presented here, discrepancies occur in areas where all four occur together as intimate mixtures. It is expected that future work will be able to reduce these discrepancies by including reference mixtures containing sericite. © 2004 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Houy, P
1999-10-15
The aim of this work is to propose a real-time control of the current profile in order to achieve reproducible operating modes with improved energy confinement in tokamaks. The determination of the profile is based on measurements given by interferometry and polarimetry diagnostics. Different ways to evaluate and improve the accuracy of these measurements are exposed. The position and the shape of a plasma are controlled by the poloidal system, which forces them to conform to reference values. Gas, neutral ion, ice pellet or extra power injection are technical means used to control other plasma parameters. These controls are performed by servo-controlled loops. The poloidal system of Tore Supra is presented. The main obstacle to a reliable determination of the current profile is the fact that slightly different Faraday angles lead to very different profiles. The direct identification method exposed in this work gives the profile that minimizes the squared difference between measured and computed values. The different algorithms proposed to control current profiles on Tore Supra have been validated using a plasma simulation. The code Cronos, which solves the resistive diffusion equation of the current, has been used. (A.C.)
Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J
2015-01-01
A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.
Wang, ShaoPeng; Zhang, Yu-Hang; Lu, Jing; Cui, Weiren; Hu, Jerry; Cai, Yu-Dong
2016-01-01
The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like the aptamer-protein interaction, the aptamer-compound interaction attracts increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as exponential enrichment. Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and the compound, including the frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compound. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
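The Maximum Relevance Minimum Redundancy step can be sketched with correlations as the relevance and redundancy measures (the original mRMR uses mutual information; correlation is a simplifying assumption here):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR sketch: at each step add the feature with the highest
    |correlation with y| minus mean |correlation with already-picked features|."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

A near-duplicate of an already-selected feature scores poorly even if it is highly relevant, which is exactly the redundancy penalty the paper relies on.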
Directory of Open Access Journals (Sweden)
Abdul Rahim Siti Rafidah
2018-01-01
Full Text Available This paper presents the effect of the load model on distributed generation (DG) planning in a distribution system. To achieve optimal DG allocation and placement, a ranking identification technique is proposed for studying DG planning using a pre-developed Embedded Meta Evolutionary Programming–Firefly Algorithm. The aim of this study is to analyze the effect of different DG types on reducing total losses while considering the load factor. To demonstrate the effectiveness of the proposed technique, the IEEE 33-bus test system was used as the test case. The proposed techniques were used to determine the DG size and suitable locations for DG planning. The results support the DG optimization process for the benefit of power system operators and planners in the utility, who can choose a suitable size and location from the results obtained, within the appropriate company budget. The modeling of voltage-dependent loads is presented, and the results show that voltage-dependent load models have a significant effect on the total losses of a distribution system for different DG types.
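The firefly half of the hybrid optimizer works by moving each candidate solution toward every brighter (lower-loss) one, with attractiveness decaying with distance. A minimal sketch of that core, applied to a toy one-variable loss standing in for feeder losses versus DG size (the 2.5 MW optimum and all parameter values are illustrative, not from the paper):

```python
import math, random

def firefly_minimize(f, bounds, n_fireflies=15, n_iters=60,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    """Minimize f over box bounds [(lo, hi), ...] with the firefly algorithm."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    for _ in range(n_iters):
        intensity = [f(x) for x in X]  # lower loss = brighter firefly
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:  # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # distance-decayed attraction
                    X[i] = [min(max(a + beta * (b - a) + alpha * (rng.random() - 0.5), lo), hi)
                            for a, b, (lo, hi) in zip(X[i], X[j], bounds)]
                    intensity[i] = f(X[i])
        alpha *= 0.97  # cool the random walk over iterations
    best = min(X, key=f)
    return best, f(best)

# toy loss with a minimum at a hypothetical 2.5 MW DG size:
loss = lambda p: (p[0] - 2.5) ** 2 + 0.1
x, fx = firefly_minimize(loss, [(0.0, 5.0)])
```

In the paper's setting, `loss` would instead be a power-flow evaluation of total losses on the IEEE 33-bus system, with DG size and bus location as decision variables.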
Ragan, Elizabeth J; Johnson, Courtney; Milton, Jacqueline N; Gill, Christopher J
2016-11-02
One of the greatest public health challenges in low- and middle-income countries (LMICs) is identifying people over time and space. Recent years have seen an explosion of interest in developing electronic approaches to this problem, with mobile technology at the forefront of these efforts. We investigate the possibility of biometrics as a simple, cost-efficient, and portable solution. Common biometric approaches include fingerprinting, iris scanning, and facial recognition, but all are less than ideal due to complexity, infringement on privacy, cost, or portability. Ear biometrics, however, proved to be a unique and viable solution. We developed an identification algorithm and then conducted a cross-sectional study in which we photographed the left and right ears of 25 consenting adults. We then conducted re-identification and statistical analyses to determine the accuracy and replicability of our approach. Through principal component analysis, we found the curve of the ear helix to be the most reliable anatomical structure and the basis for re-identification. Although an individual ear allowed a high re-identification rate (88.3%), when the left and right ears were paired together, our rate of re-identification amidst the pool of potential matches was 100%. The results of this study have implications for future efforts towards building a biometric solution for patient identification in LMICs. We provide a conceptual platform for further investigation into the development of an ear-biometrics identification mobile application.
Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang
2018-01-01
Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and read out with Anger logic. The interaction position of each gamma ray must be assigned to a crystal using a crystal position map or look-up table, which makes crystal identification a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO-based animal PET system using Lu-176 background radiation and the mean shift algorithm. Single-photon event data from the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then removed from the SPFM by subtracting the CFM, and the peaks of the outer layer are likewise identified using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method is verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for dual-layer-offset lutetium-based PET systems, performing crystal identification without external radiation sources.
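Mean shift finds the peaks of a flood map by repeatedly moving each seed point to the mean of the samples inside a window until it settles on a density mode. A minimal one-dimensional sketch with a flat kernel (real flood maps are 2-D, and the data below are simulated, not from the paper):

```python
def mean_shift_peaks(samples, bandwidth=1.0, n_iters=30, merge_tol=0.5):
    """Find density peaks of 1-D samples by mean shift with a flat kernel."""
    modes = []
    for x in samples:                      # seed one trajectory per sample
        for _ in range(n_iters):
            window = [s for s in samples if abs(s - x) <= bandwidth]
            x = sum(window) / len(window)  # shift seed to the local mean
        if all(abs(x - m) > merge_tol for m in modes):
            modes.append(x)                # keep only distinct converged modes
    return sorted(modes)

# two simulated crystal-peak clusters along one detector axis:
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
peaks = mean_shift_peaks(data, bandwidth=1.0)
```

In the paper's pipeline this peak finder would run first on the CFM (inner-layer peaks), then on the subtracted SPFM (outer-layer peaks).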
Directory of Open Access Journals (Sweden)
Byung Eun Lee
2014-09-01
Full Text Available This paper proposes an algorithm for fault detection and faulted phase and winding identification of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary, and secondary-tertiary windings equals the corresponding turns ratio during normal operating conditions, magnetizing inrush, and over-excitation, but differs from the turns ratio during an internal fault. For a single-phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated using the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection; an additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault but also identify the faulted phase and winding of a three-winding power transformer. Various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions, including magnetizing inrush and over-excitation. The paper concludes by implementing the algorithm in a prototype relay based on a digital signal processor.
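The detection principle reduces to a per-winding-pair ratio check: flag a fault when the induced-voltage ratio departs from the turns ratio. A minimal sketch, assuming simplified scalar inputs (the voltage values, pair names, and 5% tolerance below are illustrative, not the paper's settings):

```python
def internal_fault_detected(v_a, v_b, turns_ratio, tol=0.05):
    """Flag an internal fault when the induced-voltage ratio of a winding
    pair deviates from the turns ratio by more than tol (relative)."""
    return abs(v_a / v_b - turns_ratio) > tol * turns_ratio

def locate_fault(pair_voltages, turns_ratios, tol=0.05):
    """Run one detector per winding pair; the faulted winding is the one
    common to all flagged pairs."""
    return [name for name, (va, vb) in pair_voltages.items()
            if internal_fault_detected(va, vb, turns_ratios[name], tol)]

# a fault on the secondary perturbs the PS and ST pairs but not PT:
pairs = {"PS": (132.0, 30.0), "PT": (132.0, 11.0), "ST": (30.0, 11.0)}
turns = {"PS": 132 / 33, "PT": 132 / 11, "ST": 33 / 11}
flagged = locate_fault(pairs, turns)
```

Here both flagged pairs share the secondary winding, so the rule would identify the secondary as faulted.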
International Nuclear Information System (INIS)
Koyama, Hisanobu; Ohno, Yoshiharu; Kono, Atsushi A.; Kusaka, Akiko; Konishi, Minoru; Yoshii, Masaru; Sugimura, Kazuro
2010-01-01
Purpose: To assess the influence of the reconstruction algorithm on the identification and image quality of ground-glass opacities (GGOs) and partly solid nodules on low-dose thin-section CT. Materials and methods: A chest CT phantom including simulated GGOs and partly solid nodules was scanned at five different tube currents and reconstructed using a standard (A) and a newly developed (B) high-resolution reconstruction algorithm, followed by visual assessment of the identification and image quality of the GGOs and partly solid nodules by two chest radiologists. Inter-observer agreement, ROC analysis, and ANOVA were performed to compare the identification and image quality of each data set with those of the standard reference, which used 120 mAs in conjunction with reconstruction algorithm A. Results: Kappa values (κ) for overall identification and image quality were substantial or almost perfect (κ > 0.60). For identification, the area under the curve at 25 mAs reconstructed with algorithm A was significantly lower than that of the standard reference (p < 0.05), while for image quality, 50 mAs reconstructed with algorithm A and 25 mAs reconstructed with both algorithms were significantly lower than the standard reference (p < 0.05). Conclusion: The reconstruction algorithm may be an important factor in the identification and image quality of ground-glass opacities and partly solid nodules on low-dose CT examinations.
Guo, Wei-Feng; Zhang, Shao-Wu; Shi, Qian-Qian; Zhang, Cheng-Ming; Zeng, Tao; Chen, Luonan
2018-01-19
Advances in the target control of complex networks not only offer new insights into the general control dynamics of complex systems, but are also useful for practical applications in systems biology, such as discovering new therapeutic targets for disease intervention. In many cases, e.g. drug-target identification in biological networks, we require target control of a subset of nodes (i.e., disease-associated genes) with minimum cost, and we further expect more of the driver nodes to coincide with certain well-selected network nodes (i.e., prior-known drug-target genes). Motivated by this fact, we pose and address a new and practical problem, called target control with objectives-guided optimization (TCO): how to control the variables (or targets) of interest in a system with optional driver nodes, minimizing the total number of drivers while maximizing the number of constrained nodes among those drivers. Here, we design an efficient algorithm (TCOA) to find the optional driver nodes for controlling targets in complex networks. We apply TCOA to several real-world networks, and the results support that it identifies more precise driver nodes than existing control-focus approaches. Furthermore, we have applied TCOA to two biomolecular expert-curated networks. Source code for TCOA is freely available from http://sysbio.sibcb.ac.cn/cb/chenlab/software.htm or https://github.com/WilfongGuo/guoweifeng . Previous theoretical research on full control has observed that driver nodes tend to be low-degree nodes. However, for target control of biological networks, we find, interestingly, that the driver nodes tend to be high-degree nodes, which is more consistent with biological experimental observations. Our results thus supply novel insights into how to efficiently target-control a complex system, and especially much evidence on the
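For context, the classic full-control baseline that TCOA builds on identifies driver nodes as the unmatched nodes of a maximum matching on the network (the structural-controllability view of Liu et al.). The sketch below implements that baseline, not TCOA itself, using Kuhn's augmenting-path matching; the graphs are toy examples.

```python
def driver_nodes(n, edges):
    """Minimum driver-node set of a directed network via maximum matching:
    nodes unmatched on their incoming side must be driven directly."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match_to = {}  # in-copy node -> out-copy node currently matched to it

    def try_augment(u, seen):
        """Kuhn's algorithm: try to match u, rerouting earlier matches."""
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_to or try_augment(match_to[v], seen):
                match_to[v] = u
                return True
        return False

    for u in range(n):
        try_augment(u, set())
    unmatched = [v for v in range(n) if v not in match_to]
    return unmatched if unmatched else [0]  # perfect matching: one driver suffices

# a directed chain 0 -> 1 -> 2 is controllable from its head alone:
chain_drivers = driver_nodes(3, [(0, 1), (1, 2)])
```

TCO then adds the two objectives on top of such a matching view: only a target subset must be controlled, and drivers are steered toward a preferred node set.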
Energy Technology Data Exchange (ETDEWEB)
Plassard, C.; Ladeyn, I.; Staunton, S. [Institut National de Recherches Agronomiques (INRA), UMR Rhizosphere and Symbiose 34 - Montpellier (France)
2004-07-01
Mycorrhizal infection is known to improve the phosphate nutrition and water supply of higher plants. It has been reported both to increase the uptake of potentially toxic pollutant elements and to protect plants against toxic effects. Little is known about the effect of mycorrhizal infection on the dynamics of radioactive pollutants in soil-plant systems. The aim of this study was to compare the root uptake and root-shoot transfer of three radio-isotopes with contrasting chemical properties ({sup 85}Sr, {sup 95m}Tc and {sup 137}Cs) in mycorrhizal and control, non-mycorrhizal plants. The plant studied was Pinus pinaster and the associated ecto-mycorrhizal fungus was Rhizopogon roseolus (strain R18-2). Plants were grown under axenic conditions for 3 months, then transferred to thin layers of autoclaved soil and allowed to grow for four months. After this period, the rhizotrons were dismantled and the plant tissue analysed. Biomass, nutrient content (K, P, N, Ca) and the activities of each isotope in roots, shoots and stems were measured, and the degree of mycorrhizal infection assessed. The transfer factors decreased in the order Tc>Sr>Cs, as expected from the degree of immobilisation by soil. No effect of mycorrhizal infection on root uptake was observed for Sr. The shoot activity concentration of Tc was decreased by mycorrhizal infection, but root uptake correlated well with mycelial soil surface area. In contrast, Cs shoot activity was greater in mycorrhizal than in control plants. The uptake and root-to-shoot distribution are discussed in relation to nutrient dynamics. (author)
Directory of Open Access Journals (Sweden)
Zongyan Li
2016-01-01
Full Text Available This paper describes an improved global harmony search (IGHS) algorithm for identifying nonlinear discrete-time systems based on a second-order Volterra model. The IGHS is an improved version of the novel global harmony search (NGHS) algorithm and makes two significant improvements on the NGHS. First, the genetic mutation operation is modified by combining normal and Cauchy distributions, which enables the IGHS to fully explore and exploit the solution space. Second, opposition-based learning (OBL) is introduced and modified to improve the quality of the harmony vectors. The IGHS algorithm is applied to two numerical examples: a nonlinear discrete-time rational system and a real heat exchanger. The results of the IGHS are compared with those of three other methods, and it is verified to be more effective in solving these two problems with different input signals and system memory sizes.
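The NGHS/IGHS family improvises new harmonies around the current global best, and the IGHS twist described above mixes a heavy-tailed Cauchy mutation (exploration) with a normal mutation (local refinement). A simplified sketch of that loop, omitting OBL; the toy identification problem, parameter values, and seed are illustrative only, not the paper's benchmarks:

```python
import math, random

def ighs_minimize(f, bounds, hms=10, iters=1000, p_cauchy=0.5, seed=7):
    """Global-best harmony search with mixed normal/Cauchy mutation,
    in the spirit of NGHS/IGHS (OBL step omitted for brevity)."""
    rng = random.Random(seed)
    HM = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(x) for x in HM]
    for _ in range(iters):
        gbest = HM[min(range(hms), key=costs.__getitem__)]
        worst = max(range(hms), key=costs.__getitem__)
        new = []
        for d, (lo, hi) in enumerate(bounds):
            # position update toward the global-best harmony
            x = gbest[d] + rng.uniform(-1, 1) * (gbest[d] - HM[worst][d])
            if rng.random() < 0.1:  # genetic mutation step
                scale = 0.1 * (hi - lo)
                x = gbest[d] + (scale * math.tan(math.pi * (rng.random() - 0.5))
                                if rng.random() < p_cauchy      # heavy-tailed Cauchy
                                else rng.gauss(0.0, scale))     # normal refinement
            new.append(min(max(x, lo), hi))
        if f(new) < costs[worst]:  # replace the worst harmony if improved
            HM[worst], costs[worst] = new, f(new)
        b = min(range(hms), key=costs.__getitem__)
    return HM[b], costs[b]

# identify parameters of a toy second-order (Volterra-like) model y = a*u + b*u^2
us = [0.5 * k for k in range(8)]
ys = [1.5 * u + 0.3 * u * u for u in us]
sse = lambda p: sum((y - (p[0] * u + p[1] * u * u)) ** 2 for u, y in zip(us, ys))
params, err = ighs_minimize(sse, [(-5, 5), (-5, 5)])
```

A full Volterra identification would expand the same idea to the kernel coefficients of lagged inputs and their cross-products.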
Energy Technology Data Exchange (ETDEWEB)
Rojas R, E.; Benitez R, J. S. [Instituto Tecnologico de Toluca, Division de Estudios de Posgrado e Investigacion, Av. Tecnologico s/n, Ex-Rancho La Virgen, 50140 Metepec, Estado de Mexico (Mexico); Segovia de los Rios, J. A.; Rivero G, T. [ININ, Gerencia de Ciencias Aplicadas, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)], e-mail: jorge.benitez@inin.gob.mx
2009-10-15
This work presents the results of the design and implementation of an algorithm based on fuzzy logic systems and neural networks as a method for identifying the neutron power of the TRIGA Mark III reactor. The algorithm uses the point kinetics equations as a generator of training data; a cost function and a learning stage based on the gradient descent algorithm are used to optimize the parameters of the membership functions of a fuzzy system. A series of criteria are also established as part of the initial conditions of the training algorithm. According to the simulations carried out, these criteria yield quick convergence of the estimated neutron power from the first iterations. (Author)
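Gradient-descent tuning of fuzzy membership parameters can be sketched with a zero-order Takagi-Sugeno model: Gaussian memberships weight rule consequents, and both the consequents and the membership centers are updated from the prediction error (ANFIS-style updates). This is a generic sketch, not the authors' scheme; the toy target function, centers, and learning rate are illustrative assumptions.

```python
import math

def train_fuzzy_model(data, centers, sigma=1.0, lr=0.05, epochs=400):
    """Fit y(x) = sum_i g_i(x) * c_i, with g_i normalized Gaussian
    memberships, by gradient descent on centers m_i and consequents c_i."""
    m = list(centers)
    c = [0.0] * len(m)
    for _ in range(epochs):
        for x, t in data:
            w = [math.exp(-((x - mi) / sigma) ** 2) for mi in m]
            s = sum(w)
            g = [wi / s for wi in w]                 # normalized firing strengths
            y = sum(gi * ci for gi, ci in zip(g, c))
            e = y - t                                # prediction error
            for i in range(len(m)):
                c[i] -= lr * e * g[i]                                        # consequent update
                m[i] -= lr * e * (c[i] - y) * g[i] * 2 * (x - m[i]) / sigma ** 2  # center update
    return m, c

def predict(x, m, c, sigma=1.0):
    w = [math.exp(-((x - mi) / sigma) ** 2) for mi in m]
    s = sum(w)
    return sum(wi / s * ci for wi, ci in zip(w, c))

# toy training target standing in for point-kinetics-generated power data: y = 2x
data = [(x / 4, 2 * x / 4) for x in range(17)]
m, c = train_fuzzy_model(data, centers=[0.0, 1.0, 2.0, 3.0, 4.0])
```

In the paper's setting, the training pairs would instead come from simulating the point kinetics equations, with neutron power as the target.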
Bellows, Brandon K; Sainski-Nguyen, Amy M; Olsen, Cody J; Boklage, Susan H; Charland, Scott; Mitchell, Matthew P; Brixner, Diana I
2017-09-01
While statins are safe and efficacious, some patients may experience statin intolerance or treatment-limiting adverse events. Identifying patients with statin intolerance may allow optimal management of cardiovascular event risk through other strategies. Recently, an administrative claims data (ACD) algorithm was developed to identify patients with statin intolerance and validated against electronic medical records. However, how this algorithm compares with perceptions of statin intolerance by integrated delivery networks remains largely unknown. To determine the concurrent validity of an algorithm developed by a regional integrated delivery network multidisciplinary panel (MP) and a published ACD algorithm in identifying patients with statin intolerance. The MP consisted of 3 physicians and 2 pharmacists with expertise in cardiology, internal medicine, and formulary management. The MP algorithm used pharmacy and medical claims to identify patients with statin intolerance, classifying them as having statin intolerance if they met any of the following criteria: (a) a medical claim for rhabdomyolysis, (b) a medical claim for muscle weakness, (c) an outpatient medical claim for a creatine kinase assay, (d) fills for ≥ 2 different statins, excluding dose increases, (e) a decrease in statin dose, or (f) discontinuation of a statin with a subsequent fill for a nonstatin lipid-lowering therapy. The validated ACD algorithm identified statin intolerance as absolute intolerance with rhabdomyolysis; absolute intolerance without rhabdomyolysis (i.e., other adverse events); or dose-titration intolerance. Adult patients (aged ≥ 18 years) from the integrated delivery network with at least 1 prescription fill for a statin between January 1, 2011, and December 31, 2012 (the first fill defining the index date) were identified. Patients with ≥ 1 year pre-index and ≥ 2 years post-index continuous enrollment and no statin prescription fills in the pre-index period were included. The MP and
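The MP criteria (a)-(f) amount to a small rules engine over a patient's claims history. A minimal sketch with simplified inputs; the claim codes shown are plausible stand-ins (ICD-10-CM/CPT-style), not the code sets the panel actually used, and the fill representation is illustrative.

```python
# illustrative claim codes, NOT the study's actual code sets:
RHABDO, MUSCLE_WEAKNESS, CK_ASSAY = "M62.82", "M62.81", "82550"

def statin_intolerant(medical_codes, statin_fills, nonstatin_after_stop=False):
    """Apply panel criteria (a)-(f) to one patient's claims.

    medical_codes: set of diagnosis/procedure codes on medical claims.
    statin_fills: chronological (drug_name, daily_dose_mg) tuples.
    nonstatin_after_stop: statin discontinued, then a nonstatin
    lipid-lowering therapy filled (criterion f).
    """
    if RHABDO in medical_codes:            # (a) rhabdomyolysis claim
        return True
    if MUSCLE_WEAKNESS in medical_codes:   # (b) muscle weakness claim
        return True
    if CK_ASSAY in medical_codes:          # (c) outpatient creatine kinase assay
        return True
    if len({drug for drug, _ in statin_fills}) >= 2:   # (d) >= 2 different statins
        return True
    for (d1, dose1), (d2, dose2) in zip(statin_fills, statin_fills[1:]):
        if d1 == d2 and dose2 < dose1:     # (e) statin dose decrease
            return True
    return nonstatin_after_stop            # (f) switch to nonstatin therapy
```

A production implementation would also enforce the enrollment windows and index-date logic described above before applying the criteria.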
Current status of radio-isotopes utilization
Energy Technology Data Exchange (ETDEWEB)
Singh, M [Banaras Hindu Univ. (India)
1974-08-01
The utilization of radioisotopes was reviewed briefly in a categorized manner. In plant biochemistry, long-lived radioactive carbon (¹⁴C) was applied to clarify metabolic processes such as photosynthesis, respiration and protein synthesis, while radioactive oxygen (¹⁸O) was used to study the mechanism of O₂ generation. Radioactive phosphorus (³²P) was used to determine the amount and grain size of phosphatic fertilizer, as well as the timing and depth of application for better utilization. Radioactive sulphur (³⁵S) and nitrogen (¹⁵N) could be of use in studies of protein metabolism in plants. Radioactive tracers of other minerals such as N, P, K, Ca, Mg, Zn, Mo, B and Co were also used to detect their specific roles in plants. The use of radioactive isotopes in studies of protein synthesis and the transfer of genetic information was described. The ¹³¹I-binding capacity of milk proteins and radiotracer studies of iodine turnover were also summarized.
Radio-isotope powered light source
International Nuclear Information System (INIS)
Spottiswoode, N.L.; Ryden, D.J.
1979-01-01
The light source described comprises a radioisotope fuel source, thermal insulation against heat loss, a biological shield against the escape of ionizing radiation, and a material with a surface that attains incandescence when subjected to isotope decay heat, together with a means for transferring this heat to produce incandescence of the surface and thus emit light. A filter associated with the surface permits relatively high transmission of visible radiation but has relatively high reflectance in the infrared spectrum. Such light sources require a minimum of attention and servicing and are therefore suitable for use in navigational aids such as lighthouses and lighted buoys. The isotope fuel sources, and thus the insulation, shielding and incandescent material, can be chosen for the use required; several sources, materials and means of housing are detailed. Operation and efficiency are discussed. (U.K.)
Radio-isotopes in gastro-enterology
International Nuclear Information System (INIS)
Pettengell, K.E.; Houlder, A.
1988-01-01
Many recent advances in nuclear imaging have applications in gastro-enterology, and these have shown an increasing shift of emphasis away from the simple demonstration of anatomy towards methods suitable for the investigation of gastro-intestinal (GI) function and pathophysiology. Scintigraphic techniques are non-invasive, well tolerated by ill patients and, perhaps most importantly, permit quantitation of abnormal physiology. It is therefore not surprising that nuclear imaging is gaining an increasingly important place in routine patient management. This article discusses its value in the fields of GI bleeding, inflammatory bowel disease, tumour localisation and oesophageal motility.
DEFF Research Database (Denmark)
De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk
2013-01-01
We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low … to the initialization. Its asymptotic performance does not reach the DML performance though. The second strategy, called Pseudo-Quadratic ML (PQML), is naturally denoised. The denoising in PQML is furthermore more efficient than in DIQML: PQML yields the same asymptotic performance as DML, as opposed to DIQML …, but requires a consistent initialization. We furthermore compare DIQML and PQML to the strategy of alternating minimization w.r.t. symbols and channel for solving DML (AQML). An asymptotic performance analysis, a complexity evaluation and simulation results are also presented. The proposed DIQML and PQML …
González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio
2015-03-01
A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
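The final matching step of such a pipeline can be illustrated without the PCA/ICA machinery: score the query spectrum against each library reference by cosine similarity and report the top match together with a reliability margin. This sketch substitutes plain cosine matching for the paper's multivariate method; the pigment names and spectra below are made-up four-channel toys.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(spectrum, library):
    """Rank reference spectra against the query; return the best match and a
    reliability factor (margin between the top two similarity scores)."""
    scored = sorted(((cosine(spectrum, ref), name)
                     for name, ref in library.items()), reverse=True)
    (s1, name1), (s2, _) = scored[0], scored[1]
    return name1, s1 - s2

# toy 4-channel "spectra" for two hypothetical pigments:
library = {
    "ultramarine": [0.1, 0.9, 0.2, 0.05],
    "vermilion":   [0.8, 0.1, 0.1, 0.6],
}
query = [0.12, 0.85, 0.25, 0.1]  # noisy ultramarine-like measurement
name, reliability = identify(query, library)
```

A small margin would signal an unreliable identification, which is exactly the role the paper's reliability factor plays for the analyst; multicomponent spectra are where the ICA unmixing step becomes necessary.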
Lu, J; Celis, E
2000-09-15
Tumor cells can be effectively recognized and eliminated by CTLs. One approach to the development of CTL-based cancer immunotherapy for solid tumors requires the use of appropriate immunogenic peptide epitopes derived from defined tumor-associated antigens. Because CTL peptide epitopes are restricted to specific MHC alleles, designing immune therapies for the general population requires identifying epitopes for the most commonly found human MHC alleles. The identification of such epitopes has been based on MHC-peptide-binding assays that are costly and labor-intensive. We report here the use of two computer-based prediction algorithms, which are readily available in the public domain (Internet), to identify HLA-B7-restricted CTL epitopes for carcinoembryonic antigen (CEA). These algorithms identified three candidate peptides that we studied for their capacity to induce CTL responses in vitro using lymphocytes from HLA-B7+ normal blood donors. The results show that one of these peptides, CEA9(632) (IPQQHTQVL), was efficient in the induction of primary CTL responses when dendritic cells were used as antigen-presenting cells. These CTLs were efficient in killing tumor cells that express HLA-B7 and produce CEA. The identification of this HLA-B7-restricted CTL epitope will be useful for the design of ethnically unbiased, widely applicable immunotherapies for common solid epithelial tumors expressing CEA. Moreover, our strategy of identifying MHC class I-restricted CTL epitopes without the need for peptide/HLA-binding assays provides a convenient and cost-saving alternative to previous methods.
Wang, Jianren; Xu, Junkai; Shull, Peter B
2018-03-01
Vertical jump height is widely used for assessing motor development, functional ability, and motor capacity. Traditional methods for estimating vertical jump height rely on force plates or optical marker-based motion capture systems limiting assessment to people with access to specialized laboratories. Current wearable designs need to be attached to the skin or strapped to an appendage which can potentially be uncomfortable and inconvenient to use. This paper presents a novel algorithm for estimating vertical jump height based on foot-worn inertial sensors. Twenty healthy subjects performed countermovement jumping trials and maximum jump height was determined via inertial sensors located above the toe and under the heel and was compared with the gold standard maximum jump height estimation via optical marker-based motion capture. Average vertical jump height estimation errors from inertial sensing at the toe and heel were -2.2±2.1 cm and -0.4±3.8 cm, respectively. Vertical jump height estimation with the presented algorithm via inertial sensing showed excellent reliability at the toe (ICC(2,1)=0.98) and heel (ICC(2,1)=0.97). There was no significant bias in the inertial sensing at the toe, but proportional bias (b=1.22) and fixed bias (a=-10.23cm) were detected in inertial sensing at the heel. These results indicate that the presented algorithm could be applied to foot-worn inertial sensors to estimate maximum jump height enabling assessment outside of traditional laboratory settings, and to avoid bias errors, the toe may be a more suitable location for inertial sensor placement than the heel.
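One standard way to turn a foot-worn accelerometer trace into jump height is the flight-time method: during flight the sensor is in free fall and reads near 0 g, and height follows from h = g*T²/8. The paper's exact algorithm is not reproduced here; this is a generic sketch with a simulated 100 Hz trace and an assumed free-fall threshold.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight(accel_mag, fs, freefall_tol=2.0):
    """Estimate jump height (m) from the magnitude of proper acceleration
    (m/s^2): find the longest run of near-zero readings (free fall) and
    apply the flight-time formula h = g * T^2 / 8."""
    best = run = 0
    for a in accel_mag:
        run = run + 1 if a < freefall_tol else 0  # count consecutive free-fall samples
        best = max(best, run)
    t_flight = best / fs
    return G * t_flight ** 2 / 8.0

# simulated 100 Hz trace: standing (~9.8), 0.45 s of free fall, landing spike, standing
fs = 100
trace = [9.8] * 50 + [0.2] * 45 + [25.0] * 3 + [9.8] * 50
h = jump_height_from_flight(trace, fs)
```

The factor of 8 comes from symmetric ballistic flight: rise time is T/2, so h = g*(T/2)²/2. Integration-based methods (as the toe-mounted sensor results suggest) can reduce sensitivity to take-off and landing posture differences that bias pure flight-time estimates.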
International Nuclear Information System (INIS)
Portnoy, David; Fisher, Brian; Phifer, Daniel
2015-01-01
The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal
Energy Technology Data Exchange (ETDEWEB)
Zagrouba, M.; Sellami, A.; Bouaicha, M. [Laboratoire de Photovoltaique, des Semi-conducteurs et des Nanostructures, Centre de Recherches et des Technologies de l' Energie, Technopole de Borj-Cedria, Tunis, B.P. 95, 2050 Hammam-Lif (Tunisia); Ksouri, M. [Unite de Recherche RME-Groupe AIA, Institut National des Sciences Appliquees et de Technologie (Tunisia)
2010-05-15
In this paper, we propose a numerical technique based on genetic algorithms (GAs) to identify the electrical parameters (I{sub s}, I{sub ph}, R{sub s}, R{sub sh}, and n) of photovoltaic (PV) solar cells and modules. These parameters were used to determine the corresponding maximum power point (MPP) from the illuminated current-voltage (I-V) characteristic. A one-diode model is used to describe the AM1.5 I-V characteristic of the solar cell. To extract the electrical parameters, the approach is formulated as a non-convex optimization problem, and the GA is used as a numerical technique to overcome the problems of local minima that arise with non-convex optimization criteria. Compared to other methods, we find that the GA is a very efficient technique for estimating the electrical parameters of PV solar cells and modules; indeed, the run of the algorithm stopped after five generations in the case of PV solar cells and seven generations in the case of PV modules. The identified parameters are then used to extract the maximum-power working points for both cells and modules. (author)
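A real-coded GA for this kind of extraction can be sketched on a simplified one-diode model that neglects R{sub s} and R{sub sh}, fitting only (I{sub ph}, I{sub s}, n) to a synthetic I-V curve. This is an illustrative sketch under those stated simplifications, not the paper's five-parameter implementation; bounds, rates, and the "true" parameters are assumptions.

```python
import math, random

VT = 0.02585  # thermal voltage at ~300 K, volts

def diode_current(v, iph, isat, n):
    """Simplified one-diode model (series/shunt resistances neglected)."""
    return iph - isat * (math.exp(v / (n * VT)) - 1.0)

def ga_fit(vs, i_meas, pop_size=40, gens=150, seed=3):
    """Fit (Iph, Is, n) with a real-coded GA: tournament selection,
    blend crossover, Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    lo, hi = [0.0, 1e-9, 0.5], [5.0, 1e-5, 3.0]  # assumed parameter bounds
    fitness = lambda p: -sum((i - diode_current(v, *p)) ** 2
                             for v, i in zip(vs, i_meas))
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    for _ in range(gens):
        tournament = lambda: max(rng.sample(pop, 3), key=fitness)
        nxt = [max(pop, key=fitness)]                 # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = tournament(), tournament()
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < 0.3:                    # Gaussian mutation on one gene
                d = rng.randrange(3)
                child[d] = min(max(child[d] + rng.gauss(0, 0.1 * (hi[d] - lo[d])),
                                   lo[d]), hi[d])
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# synthetic I-V data from assumed true parameters (3.0 A, 1e-7 A, n = 1.3):
vs = [0.05 * k for k in range(12)]
i_meas = [diode_current(v, 3.0, 1e-7, 1.3) for v in vs]
best = ga_fit(vs, i_meas)
```

With the fitted parameters in hand, the MPP follows from maximizing P(V) = V * I(V) over the fitted curve.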
Chuang, Li-Yeh; Lane, Hsien-Yuan; Lin, Yu-Da; Lin, Ming-Teng; Yang, Cheng-Hong; Chang, Hsueh-Wei
2014-01-01
Facial emotion perception (FEP) can affect social function. We previously reported that some of five tested single-nucleotide polymorphisms (SNPs) in the MET and AKT1 genes may individually affect FEP performance. However, the effects of SNP-SNP interactions on FEP performance remain unclear. This study compared patients with high and low FEP performance (n = 89 and 93, respectively). A particle swarm optimization (PSO) algorithm was used to identify the best SNP barcodes (i.e., the SNP combinations and genotypes that revealed the largest differences between the high and low FEP groups). The analyses of individual SNPs showed no significant differences between the high and low FEP groups. However, comparisons of multiple SNP-SNP interactions involving different combinations of two to five SNPs showed that the best PSO-generated SNP barcodes were significantly associated with high FEP scores. The analyses of the joint effects of the best SNP barcodes for two to five interacting SNPs also showed that the best SNP barcodes had significantly higher odds ratios (2.119 to 3.138; P < 0.05) than other SNP barcodes. In conclusion, the proposed PSO algorithm effectively identifies the best SNP barcodes with the strongest associations with FEP performance. This study also proposes a computational methodology for analyzing complex SNP-SNP interactions in social cognition domains such as the recognition of facial emotion.
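The PSO core behind such a search is the standard velocity/position update pulled toward each particle's personal best and the swarm's global best. The sketch below shows that core on a toy continuous objective; the published method applies the same updates over discrete SNP-barcode encodings with a case/control association score as the objective, which is not reproduced here.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 span=5.0, seed=11):
    """Plain global-best particle swarm optimization over R^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-span, span) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal-best positions
    pcost = [f(x) for x in X]
    g = P[min(range(n_particles), key=pcost.__getitem__)][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]                              # inertia
                           + c1 * rng.random() * (P[i][d] - X[i][d])  # cognitive pull
                           + c2 * rng.random() * (g[d] - X[i][d]))    # social pull
                X[i][d] += V[i][d]
            cost = f(X[i])
            if cost < pcost[i]:
                P[i], pcost[i] = X[i][:], cost
                if cost < f(g):
                    g = X[i][:]
    return g, f(g)

# toy objective: recover a barcode-like target vector (1, 0, 1)
target = [1.0, 0.0, 1.0]
obj = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
best, cost = pso_minimize(obj, 3)
```

For binary barcodes, a common adaptation maps each velocity through a sigmoid to a probability of setting the corresponding bit.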
Directory of Open Access Journals (Sweden)
Carlos Andres Perez-Ramirez
2017-01-01
Full Text Available Nowadays, the accurate identification of natural frequencies and damping ratios plays an important role in smart civil engineering, since these parameters can be used for seismic design, vibration control, and condition assessment, among other tasks. Achieving this in practice requires instrumenting the structure and applying techniques that can deal with noise-corrupted and non-linear signals, as both are common features of real-life civil structures. In this article, a two-step strategy is proposed for performing accurate modal parameter identification in an automated manner. In the first step, the measured signals are obtained and decomposed using the natural excitation technique and the synchrosqueezed wavelet transform, respectively. The second step then estimates the modal parameters by solving an optimization problem with a genetic algorithm-based approach, where the micropopulation concept is used to improve both the convergence speed and the accuracy of the estimated values. The accuracy and effectiveness of the proposal are tested using both the simulated response of a benchmark structure and the measurements of a real eight-story building. The obtained results show that the proposed strategy can estimate the modal parameters accurately, indicating that the proposal can be considered an alternative for performing the aforementioned task.
Directory of Open Access Journals (Sweden)
Suyan Tian
Full Text Available The existence of fundamental differences between lung adenocarcinoma (AC) and squamous cell carcinoma (SCC) in their underlying mechanisms motivated us to postulate that specific genes might exist that are relevant to the prognosis of each histology subtype. To test this hypothesis, we previously proposed a simple Cox-regression-model-based feature selection algorithm and successfully identified some subtype-specific prognostic genes when applying this method to real-world data. In this article, we continue our effort on the identification of subtype-specific prognostic genes for AC and SCC, and propose a novel embedded feature selection method that extends the Threshold Gradient Descent Regularization (TGDR) algorithm and minimizes a corresponding negative partial likelihood function. Using real-world and simulated datasets, we show that these two proposed methods have comparable performance, whereas the new proposal is superior in terms of model parsimony. Our analysis provides some evidence for the existence of such subtype-specific prognostic genes; more investigation is warranted.
Moorthy, Arun S; Wallace, William E; Kearsley, Anthony J; Tchekhovskoi, Dmitrii V; Stein, Stephen E
2017-12-19
A mass spectral library search algorithm that identifies compounds that differ from library compounds by a single "inert" structural component is described. This algorithm, the Hybrid Similarity Search, generates a similarity score based on matching both fragment ions and neutral losses. It employs the parameter DeltaMass, defined as the mass difference between query and library compounds, to shift neutral loss peaks in the library spectrum to match corresponding neutral loss peaks in the query spectrum. When the spectra being compared differ by a single structural feature, these matching neutral loss peaks should contain that structural feature. This method extends the scope of the library to include spectra of "nearest-neighbor" compounds that differ from library compounds by a single chemical moiety. Additionally, determination of the structural origin of the shifted peaks can aid in the determination of the chemical structure and fragmentation mechanism of the query compound. A variety of examples are presented, including the identification of designer drugs and chemical derivatives not present in the library.
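The peak-shifting idea can be illustrated with a toy scorer: a library peak matches a query peak either at its own m/z (a shared fragment) or at its m/z shifted by DeltaMass (a shared neutral loss). This is a simplified sketch of the concept, not the published implementation; a real hybrid score weights peaks by m/z and intensity and tracks neutral losses relative to the precursor mass.

```python
import math

def hybrid_score(query, library, dm, tol=0.01):
    """query, library: lists of (m/z, intensity) peaks.
    dm: DeltaMass, the precursor mass difference (query minus library).
    Each library peak may match directly or after shifting by dm."""
    def best_match(mz):
        # highest query intensity within the m/z tolerance window
        hits = [qi for qmz, qi in query if abs(qmz - mz) <= tol]
        return max(hits) if hits else 0.0
    dot = 0.0
    for lmz, li in library:
        qi = max(best_match(lmz), best_match(lmz + dm))
        dot += math.sqrt(li * qi)          # geometric-mean peak agreement
    nq = sum(i for _, i in query)
    nl = sum(i for _, i in library)
    return dot / math.sqrt(nq * nl) if nq and nl else 0.0
```

With dm = 0 this reduces to an ordinary direct-match similarity; a nonzero dm lets a derivative's shifted fragments still score against the parent compound's spectrum.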
Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D
2016-05-01
An efficient inverse-problem approach for parameter estimation, state and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is shown to handle noisy datasets and to provide computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies a phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from the S-system power law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
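An S-system canonical model has the form dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij; the inverse problem is to recover alpha, beta and the kinetic orders g, h from time-series data. A minimal forward simulator (plain Euler stepping, with illustrative rate values) shows the model structure being fitted:

```python
def s_system_rhs(x, alpha, g, beta, h):
    """dx_i/dt = alpha_i * prod_j x_j**g[i][j] - beta_i * prod_j x_j**h[i][j]"""
    def prod_pow(exps):
        p = 1.0
        for xj, e in zip(x, exps):
            p *= xj ** e
        return p
    return [alpha[i] * prod_pow(g[i]) - beta[i] * prod_pow(h[i])
            for i in range(len(x))]

def euler(x0, alpha, g, beta, h, dt=0.01, steps=1000):
    """Forward Euler integration of the S-system trajectories."""
    x = list(x0)
    for _ in range(steps):
        dx = s_system_rhs(x, alpha, g, beta, h)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x
```

For example, the one-variable system with alpha = 2, g = 0, beta = 1, h = 1 is dx/dt = 2 - x, which relaxes to the steady state x = 2 from any positive start.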
Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L
2016-02-07
A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array readout by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5-82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial
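Once the crystal centroids have been fitted (by the Gaussian mixture with TPS prediction described above), the final lookup step reduces to assigning every flood-image bin to its nearest centroid. The sketch below shows only that last step, with hypothetical centroid coordinates; the hard part of the published algorithm is fitting and numbering the centroids themselves.

```python
def build_clut(centroids, nx, ny):
    """Crystal lookup table: map each (x, y) flood-image bin to the
    index of the nearest fitted crystal centroid (the Gaussian means
    in a full implementation). centroids: list of (x, y) positions."""
    clut = [[0] * nx for _ in range(ny)]
    for iy in range(ny):
        for ix in range(nx):
            clut[iy][ix] = min(
                range(len(centroids)),
                key=lambda k: (centroids[k][0] - ix) ** 2
                              + (centroids[k][1] - iy) ** 2)
    return clut
```

An event positioned by Anger logic into bin (ix, iy) is then attributed to crystal `clut[iy][ix]`, after which crystal-specific energy windows or timing offsets can be applied.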
Directory of Open Access Journals (Sweden)
Mitsuaki Takemi
2017-10-01
Full Text Available Cortical stimulation mapping is a valuable tool for testing the functional organization of the motor cortex, both in basic neurophysiology (e.g., elucidating the process of motor plasticity) and in clinical practice (e.g., before resecting brain tumors involving the motor cortex). However, compiling motor maps based on the motor threshold (MT) requires a large number of cortical stimulations and is therefore time consuming. Shortening the mapping time may reduce stress on the subjects and unveil short-term plasticity mechanisms. In this study, we aimed to establish a cortical stimulation mapping procedure in which the time needed to identify a motor area is reduced to the order of minutes without compromising reliability. We developed an automatic motor mapping system that applies epidural cortical surface stimulations (CSSs) one by one through 32 micro-electrocorticographic electrodes while examining the muscles represented in a cortical region. The next stimulus intensity was selected according to previously evoked electromyographic responses in a closed-loop fashion. CSS was repeated at 4 Hz, and the electromyographic responses were submitted to a newly proposed algorithm that estimates the MT with a smaller number of stimuli than traditional approaches. The results showed that in all tested rats (n = 12) the motor area maps identified by our novel mapping procedure (novel MT algorithm and 4-Hz CSS) significantly correlated with the maps obtained by the conventional MT algorithm with 1-Hz CSS. The reliability of both mapping methods was very high (intraclass correlation coefficients ≥ 0.8), while the time needed for mapping was reduced to one-twelfth with the novel method. Furthermore, the motor maps assessed by intracortical microstimulation and by the novel CSS mapping procedure were compared in two rats and were also significantly correlated. Our novel mapping procedure, which determines a cortical motor area within a few minutes, could help
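The closed-loop selection of the next stimulus intensity can be illustrated with a simple bisection staircase: each stimulus halves the bracket known to contain the motor threshold. This is a generic adaptive sketch, not the authors' MT estimation algorithm, and the simulated all-or-none response is an assumption.

```python
def estimate_motor_threshold(evokes_response, lo=0.0, hi=100.0, n_stim=12):
    """Closed-loop bisection: each stimulus intensity is chosen from
    the outcome of the previous one, narrowing the MT bracket.
    evokes_response(intensity) -> bool, e.g. whether an EMG response
    above criterion was observed."""
    for _ in range(n_stim):
        mid = (lo + hi) / 2.0
        if evokes_response(mid):   # response observed -> MT is at or below mid
            hi = mid
        else:                      # no response -> MT is above mid
            lo = mid
    return (lo + hi) / 2.0
```

After n stimuli the bracket width is (hi - lo) / 2**n, so 12 stimuli localize the threshold to about 0.02% of the initial range.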
Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations of multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity, and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. Highlights: A model is proposed to identify characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. The ICA is modified by incorporating a new knock-the-base method.
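The optimization layer is a variant of the Imperialist Competitive Algorithm. A minimal single-level ICA (without the provinces or knock-the-base extensions described above) conveys the basic assimilation and revolution mechanics; all rates and population sizes here are illustrative.

```python
import random

def ica_minimize(f, dim, bounds, n_countries=40, n_imp=4, iters=200, seed=0):
    """Minimal ICA sketch: the best n_imp countries are imperialists,
    the rest are colonies assimilated toward them; a small revolution
    rate re-randomizes one coordinate to keep diversity."""
    rng = random.Random(seed)
    lo, hi = bounds
    countries = [[rng.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_countries)]
    for _ in range(iters):
        countries.sort(key=f)
        imps = countries[:n_imp]
        new = [c[:] for c in imps]                # imperialists survive
        for col in countries[n_imp:]:
            imp = imps[rng.randrange(n_imp)]
            # assimilation: move the colony toward its imperialist
            moved = [x + rng.uniform(0.0, 2.0) * (ix - x)
                     for x, ix in zip(col, imp)]
            if rng.random() < 0.1:                # revolution
                d = rng.randrange(dim)
                moved[d] = rng.uniform(lo, hi)
            new.append([min(hi, max(lo, x)) for x in moved])
        countries = new
    return min(countries, key=f)
```

Because the imperialists are carried over unchanged, the best objective value found never worsens as the iterations proceed.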
Yazdanparast, R; Zadeh, S Abdolhossein; Dadras, D; Azadeh, A
2018-06-01
Healthcare quality is affected by various factors, including trust. Patients' trust in healthcare providers is one of the most important factors for treatment outcomes. The presented study identifies the optimum mixture of patient demographic features with respect to trust in three large and busy medical centers in Tehran, Iran. The presented algorithm combines an adaptive neuro-fuzzy inference system with statistical methods and is used to deal with data and environmental uncertainty. The required data were collected from three large hospitals using standard questionnaires. The reliability and validity of the collected data are evaluated using Cronbach's alpha, factor analysis, and statistical tests. The results of this study indicate that middle-aged patients with a low level of education and moderate illness severity, and young patients with a high level of education, moderate illness severity, and moderate to weak financial status, have the highest trust in the considered medical centers. To the best of our knowledge, this is the first study that investigates patient demographic features using an adaptive neuro-fuzzy inference system in the healthcare sector. Second, it is a practical approach for the continuous improvement of trust features in medical centers. Third, it deals with the existing uncertainty through the unique neuro-fuzzy approach. Copyright © 2018 Elsevier B.V. All rights reserved.
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can help illuminate complex issues, systems, or concepts. The DEMATEL method is a methodology that can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation, the impact-relations map, by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is the construction of interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to extract adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy (MMDE) algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in e-learning programs as examples, we compare the results obtained from the respondents and from our method, and discuss the differences between the two resulting impact-relations maps.
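Whatever rule picks the threshold, the final step is the same: keep only the relations in the total-relation matrix T that exceed it. The sketch below uses the mean entry as a simple stand-in default; the MMDE algorithm proposed in the paper instead derives the threshold from the entropy of the ordered influence values.

```python
def impact_relations(T, threshold=None):
    """Return the (i, j) links of the impact-relations map: entries of
    the total-relation matrix T above the threshold. Default threshold
    is the mean entry, a simple stand-in for the MMDE-derived value."""
    entries = [t for row in T for t in row]
    if threshold is None:
        threshold = sum(entries) / len(entries)
    return [(i, j) for i, row in enumerate(T)
            for j, t in enumerate(row) if i != j and t > threshold]
```

Raising the threshold sparsifies the map (fewer, stronger arrows); lowering it risks an unreadable graph, which is exactly why a principled threshold choice matters.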
Directory of Open Access Journals (Sweden)
Sanjiv Kumar
Full Text Available Pathogenic bacteria interacting with eukaryotic hosts express adhesins on their surface. These adhesins aid in bacterial attachment to host cell receptors during colonization. A few adhesins, such as the heparin-binding hemagglutinin adhesin (HBHA), Apa, and malate synthase of M. tuberculosis, have been identified using specific experimental interaction models based on biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach, using SPAAN for predicting adhesins; PSORTb, SubLoc, and LocTree for extracellular localization; and BLAST for verifying non-similarity to human proteins. These steps are among the first of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and the absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin, and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309, and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.
Lavine, B K; Ritter, J; Moores, A J; Wilson, M; Faruque, A; Mayfield, H T
2000-01-15
Solid-phase microextraction (SPME), capillary column gas chromatography, and pattern recognition methods were used to develop a potential method for typing jet fuels so that a spill sample in the environment can be traced to its source. The test data consisted of gas chromatograms from 180 neat jet fuel samples representing common aviation turbine fuels found in the United States (JP-4, Jet-A, JP-7, JPTS, JP-5, JP-8). SPME sampling of the fuel's headspace afforded well-resolved, reproducible profiles, which were standardized using special peak-matching software. The peak-matching procedure yielded 84 standardized retention time windows, though not all peaks were present in all gas chromatograms. A genetic algorithm (GA) was employed to identify features in the standardized chromatograms of the neat jet fuels suitable for pattern recognition analysis. The GA selected peaks whose two largest principal components showed clustering of the chromatograms on the basis of fuel type. The principal component analysis routine in the fitness function of the GA acted as an information filter, significantly reducing the size of the search space, since it restricted the search to feature subsets whose variance is primarily about differences between the various fuel types in the training set. In addition, the GA focused on those classes and/or samples that were difficult to classify as it trained using a form of boosting: samples that consistently classified correctly were not weighted as heavily as samples that were difficult to classify. Over time, the GA learned its optimal parameters in a manner similar to a perceptron. The pattern recognition GA integrated aspects of strong and weak learning to yield a "smart" one-pass procedure for feature selection.
Energy Technology Data Exchange (ETDEWEB)
Stavrov, Andrei; Yamamoto, Eugene [Rapiscan Systems, Inc., 14000 Mead Street, Longmont, CO, 80504 (United States)
2015-07-01
Radiation Portal Monitors (RPMs) with plastic detectors are the main instruments used for primary border (customs) radiation control. RPMs are widely used because they are simple, reliable, relatively inexpensive, and highly sensitive. However, experience using RPMs in various countries has revealed some grave shortcomings. There is a dramatic decrease in the probability of detecting radioactive sources under high suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) present in objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of a new variant of the algorithm ASIA-New (New Advanced Source Identification Algorithm), which was developed by the Rapiscan company. It also demonstrates results of different tests and the capability of the new system to overcome the shortcomings stated above. New electronics and ASIA-New enable RPMs to detect radioactive sources under high background suppression (tested at 15-30%) and to verify the detected NORM (KCl) and artificial isotopes (Co-57, Ba-133, and others). The new variant of ASIA is based on physical principles, a phenomenological approach, and analysis of changes in some important parameters during the vehicle's passage through the monitor control area. A main advantage of the new system is that it can be easily installed into any RPM with plastic detectors. Taking into account that more than 4000 RPMs have been installed worldwide, upgrading them with ASIA-New may significantly increase the probability of detection and verification of radioactive sources, even those masked by NORM. This algorithm was tested for 1,395 passages of
Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong
2016-01-01
Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. To date, effective treatments for this disease remain limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identification of effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. The chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and the K-means clustering algorithm were employed to exclude candidate drugs with a low possibility of treating lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them are structurally dissimilar to approved drugs for lung cancer.
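The screening logic can be sketched as a score plus an empirical permutation test: a candidate's mean interaction weight with the approved drugs is compared against randomly drawn compounds. The data structures and weights below are hypothetical illustrations; the study additionally applies K-means clustering when excluding weak candidates.

```python
import random

def association_score(candidate, approved, weight):
    """Mean interaction weight between a candidate compound and the
    approved drugs; weight maps (compound, drug) pairs to scores."""
    return sum(weight.get((candidate, a), 0.0) for a in approved) / len(approved)

def permutation_pvalue(candidate, approved, weight, pool, n_perm=500, seed=0):
    """Empirical p-value: how often a randomly drawn compound from the
    pool scores at least as high as the candidate (with the usual +1
    correction so the p-value is never exactly zero)."""
    rng = random.Random(seed)
    obs = association_score(candidate, approved, weight)
    hits = sum(1 for _ in range(n_perm)
               if association_score(rng.choice(pool), approved, weight) >= obs)
    return (hits + 1) / (n_perm + 1)
```

A candidate whose score is rarely matched by random compounds (small p-value) is retained as a plausible anti-lung-cancer lead.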
Directory of Open Access Journals (Sweden)
Ángel Cobo
2011-12-01
Full Text Available This paper presents a document representation strategy and a bio-inspired algorithm to cluster multilingual collections of documents in the field of economics and business. The proposed approach allows the user to identify groups of related economics documents written in Spanish and English using techniques inspired on clustering and sorting behaviours observed in some types of ants. In order to obtain a language independent vector representation of each document two multilingual resources are used: an economic glossary and a thesaurus. Each document is represented using four feature vectors: words, proper names, economic terms in the glossary and thesaurus descriptors. The proper name identification, word extraction and lemmatization are performed using specific tools. The tf-idf scheme is used to measure the importance of each feature in the document, and a convex linear combination of angular separations between feature vectors is used as similarity measure of documents. The paper shows experimental results of the application of the proposed algorithm in a Spanish-English corpus of research papers in economics and management areas. The results demonstrate the usefulness and effectiveness of the ant clustering algorithm and the proposed representation scheme.Este artículo presenta una estrategia de representación documental y un algoritmo bioinspirado para realizar procesos de agrupamiento en colecciones multilingües de documentos en las áreas de la economía y la empresa. El enfoque propuesto permite al usuario identificar grupos de documentos económicos relacionados escritos en español o inglés usando técnicas inspiradas en comportamientos de organización y agrupamiento de objetos observados en algunos tipos de hormigas. Para conseguir una representación vectorial de cada documento independiente del idioma, se han utilizado dos recursos lingüísticos: un glosario económico y un tesauro. Cada documento es representado usando
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high-resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any device based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold-standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable deviation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
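A minimal DBSCAN over one-dimensional sample times (e.g., the instants whose vibration energy exceeds a threshold) shows how contiguous activity groups into clusters while isolated samples are labeled noise. The eps and min_pts values here are illustrative, not the parameters used in the study.

```python
def dbscan_1d(points, eps, min_pts):
    """Minimal DBSCAN for scalar samples; returns one cluster label per
    point, with -1 marking noise. A core point has at least min_pts
    neighbors (itself included) within eps."""
    labels = [None] * len(points)
    cluster = -1
    def neighbors(i):
        return [j for j, q in enumerate(points) if abs(q - points[i]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1                 # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nb)
        while seeds:                       # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:       # only core points keep expanding
                seeds.extend(nb_j)
    return labels
```

Each resulting cluster of time points is a candidate swallowing segment; the -1 points are discarded as background.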
International Nuclear Information System (INIS)
Auroux, Didier; Bansart, Patrick; Blum, Jacques
2008-01-01
This paper deals with a new data assimilation algorithm called the Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the model equations a relaxation term, which is supposed to force the model toward the observations. The BFN algorithm consists of repeating forward and backward resolutions of the model with relaxation (or nudging) terms that have opposite signs in the direct and inverse resolutions, so as to make the backward evolution numerically stable. We then applied the Back and Forth Nudging algorithm to a simple non-linear model: the 1D viscous Burgers' equation. The tests were carried out through several cases relating to the precision and density of the observations. These simulations were then compared with both the variational assimilation (VAR) and quasi-inverse (QIL) algorithms. The comparisons address the programming, the convergence, and the computing time for each of these three algorithms.
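The back-and-forth iteration can be demonstrated on an even simpler model than Burgers' equation: the scalar decay law dx/dt = -x. The opposite-sign nudging terms make both the forward sweep and the (otherwise unstable) backward sweep stable, and repeated sweeps recover the initial condition. The gain K, the step size, and the cycle count below are illustrative choices, not values from the paper.

```python
import math

def bfn_initial_condition(obs, dt, K=5.0, cycles=10, x0_guess=0.0):
    """Back and Forth Nudging for dx/dt = -x: alternating forward and
    backward Euler sweeps, each nudged toward the observations
    obs[k] ~ x(k*dt), returning the estimated initial state x(0)."""
    n = len(obs) - 1
    x0 = x0_guess
    for _ in range(cycles):
        # forward pass: dx/dt = -x + K*(y - x)
        x = x0
        for k in range(n):
            x += dt * (-x + K * (obs[k] - x))
        # backward pass in reversed time s = T - t: dx/ds = x - K*(x - y);
        # the sign flip on the nudging term keeps this sweep stable
        for k in range(n, 0, -1):
            x += dt * (x - K * (x - obs[k]))
        x0 = x
    return x0
```

With noise-free observations the true trajectory is an exact solution of both nudged equations, so the iterates converge to it up to Euler discretization error.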
Energy Technology Data Exchange (ETDEWEB)
Auroux, Didier [Institut de Mathematiques, Universite Paul Sabatier Toulouse 3, 31062 Toulouse cedex 9 (France); Bansart, Patrick; Blum, Jacques [Laboratoire J. A. Dieudonne, Universite de Nice Sophia-Antipolis, Parc Valrose, 06108 Nice cedex 2 (France)], E-mail: didier.auroux@math.univ-toulouse.fr
2008-11-01
This paper deals with a new data assimilation algorithm called the Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the model equations a relaxation term, which is supposed to force the model toward the observations. The BFN algorithm consists of repeating forward and backward resolutions of the model with relaxation (or nudging) terms that have opposite signs in the direct and inverse resolutions, so as to make the backward evolution numerically stable. We then applied the Back and Forth Nudging algorithm to a simple non-linear model: the 1D viscous Burgers' equation. The tests were carried out through several cases relating to the precision and density of the observations. These simulations were then compared with both the variational assimilation (VAR) and quasi-inverse (QIL) algorithms. The comparisons address the programming, the convergence, and the computing time for each of these three algorithms.
Bousquet, P-J; Caillet, P; Coeuret-Pellicer, M; Goulard, H; Kudjawu, Y C; Le Bihan, C; Lecuyer, A I; Séguret, F
2017-10-01
The development and use of healthcare databases accentuate the need for dedicated tools, including validated algorithms for selecting patients with cancer. As part of the development of the French National Health Insurance System data network REDSIAM, the tumor taskforce established an inventory of national and international published algorithms in the field of cancer. This work aims to facilitate the choice of the best-suited algorithm. A non-systematic literature search was conducted for various cancers. Results are presented for lung, breast, colon, and rectum. Medline, Scopus, the French Database in Public Health, Google Scholar, and the summaries of the main French journals in oncology and public health were searched for publications until August 2016. An extraction grid adapted to oncology was constructed and used for the extraction process. A total of 18 publications were selected for lung cancer, 18 for breast cancer, and 12 for colorectal cancer. Validation studies of algorithms are scarce. When information is available, the performance and choice of an algorithm depend on the context, purpose, and location of the planned study. To account for cancer disease specificity, the proposed extraction chart is more detailed than the generic chart developed for other REDSIAM taskforces, but it remains easily usable in practice. This study illustrates the complexity of cancer detection through sole reliance on healthcare databases and the lack of validated algorithms specifically designed for this purpose. Studies that standardize and facilitate validation of these algorithms should be developed and promoted. Copyright © 2017. Published by Elsevier Masson SAS.
Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.
2016-01-01
Purpose To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses: alveolar hemorrhage, interstitial lung disease, glomerulonephritis, or acute or chronic kidney disease, combined with the encounter type, physician specialty, and immunosuppressive medications, had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA, respectively. Conclusion Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
José Antonio Agüero-Fernández; Lisandra Aguilar-Bultet; Yandy Abreu-Jorge; Agustín Lage-Castellanos; Yannier Estévez-Dieppa
2015-01-01
Beta-barrel proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in the virulence mechanisms of pathogens have turned them into an interesting target in studies searching for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-...
Energy Technology Data Exchange (ETDEWEB)
Victoria R, M.A.; Morales S, J.B. [UNAM, DEPFI, Campus Morelos, en IMTA Jiutepec, Morelos (Mexico)]. e-mail: angelvr@gmail.com
2005-07-01
In the present work, the modified optimal volume ellipsoid (MOVE) algorithm is applied to a reduced-order model, consisting of five differential equations, of the core of a boiling water reactor (BWR) in order to estimate the parameters that govern its dynamics. The feasibility of an analysis that computes the global dynamic parameters determining the stability of the system, together with the uncertainty of the estimate, is examined. The MOVE algorithm is a method for the parametric identification of systems, in particular for parameter set estimation (PSE): it seeks the ellipsoid of smallest volume guaranteed to contain the true values of the model parameters. PSE-MOVE is a recursive identification method that can handle bounded noise and weight it; the ellipsoid representation is advantageous because it is easy to manipulate numerically, and the results are very useful for robust control design since, in general, the smaller the volume of the ellipsoid, the better the performance of the controlled system. A comparison with other methods reported in the literature for estimating the decay ratio (DR) of a BWR is presented. (Author)
DEFF Research Database (Denmark)
Poulsen, Mikael Kjær; Henriksen, Jan Erik; Vach, W
2010-01-01
, the algorithm had low sensitivity and specificity, combined with high cost and time requirements. Trial registration: clinicaltrials.gov NCT00298844. Funding: The study was funded by the Danish Cardiovascular Research Academy (DaCRA), the Danish Diabetes Association and the Danish Heart Foundation.
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
International audience; We use the term sound algorithms for the category of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
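The basic loop of a genetic algorithm (selection, crossover, mutation, elitism) can be sketched as follows; the fitness function, real-valued encoding, and all parameter values are illustrative choices, not from the report.

```python
import random

# Minimal genetic algorithm maximizing f(x) = -(x - 3)**2 on [0, 10]:
# tournament selection, blend crossover, Gaussian mutation, and elitism.

def fitness(x):
    return -(x - 3.0) ** 2

def evolve(pop_size=60, generations=120, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                     # elitism: keep the two fittest
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            a = rng.random()
            child = a * p1 + (1 - a) * p2      # blend crossover
            if rng.random() < 0.1:             # mutation, clipped to the domain
                child = min(10.0, max(0.0, child + rng.gauss(0, 0.5)))
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
```

With elitism the best individual never worsens, so the population settles near the optimum at x = 3.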
Directory of Open Access Journals (Sweden)
Sharma M
2016-10-01
Manuj Sharma,1 Irene Petersen,1,2 Irwin Nazareth,1 Sonia J Coton1 (1Department of Primary Care and Population Health, University College London, London, UK; 2Department of Clinical Epidemiology, Aarhus University, Aarhus, Denmark) Background: Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). Objectives: To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Methods: Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM, one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent was used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Results: Out of 9,161,866 individuals aged 0–99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification
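The two-step structure can be sketched as follows; the record kinds, age threshold, and medication rules below are hypothetical stand-ins for illustration only, not the published algorithm's actual criteria.

```python
# Illustrative two-step structure only: the record kinds, thresholds, and
# rules are hypothetical, not the algorithm published in the paper.

def step1_is_diabetic(records):
    """Require at least two DM records, one of which is a diagnosis."""
    dm = [r for r in records if r["kind"] in ("diagnosis", "treatment", "test")]
    return len(dm) >= 2 and any(r["kind"] == "diagnosis" for r in dm)

def step2_classify(age_at_diagnosis, on_insulin_only, oral_agents):
    """Assign a DM type from medication pattern and age (illustrative rules)."""
    if on_insulin_only and age_at_diagnosis < 35 and not oral_agents:
        return "T1DM"
    return "T2DM"

records = [{"kind": "diagnosis"}, {"kind": "treatment"}]
eligible = step1_is_diabetic(records)
subtype_young = step2_classify(22, True, False)
subtype_older = step2_classify(58, False, True)
```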
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment. (paper)
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
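One of the fundamental algorithms the book names, the sieve of Eratosthenes, can be sketched as follows (rendered here in Python, whereas the book implements its algorithms in C++).

```python
def sieve_of_eratosthenes(n):
    """Return all primes <= n by crossing out multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # multiples below p*p were already crossed out by smaller primes
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

primes = sieve_of_eratosthenes(30)
```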
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Turan, Muhammed K.; Sehirli, Eftal; Elen, Abdullah; Karas, Ismail R.
2015-07-01
Gel electrophoresis (GE) is one of the most widely used methods to separate DNA, RNA, and protein molecules according to size, weight, and quantity in areas such as genetics, molecular biology, biochemistry, and microbiology. The main way to separate the molecules is to find the borders of each molecule fragment. This paper presents a software application that shows the column edges of DNA fragments in three steps. In the first step, the application obtains lane histograms of agarose gel electrophoresis images by projection onto the x-axis. In the second step, it utilizes the k-means clustering algorithm to classify the point values of the lane histogram into left-side values, right-side values, and undesired values. In the third step, the column edges of the DNA fragments are shown by using a mean algorithm and mathematical operations to separate the DNA fragments from the background in a fully automated way. In addition, the application presents the locations of the DNA fragments and how many DNA fragments exist on images captured by a scientific camera.
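The histogram-clustering step can be sketched with a toy one-dimensional k-means; the data values and the number of clusters below are illustrative, not from the paper.

```python
import random

# Toy 1D k-means in the spirit of step two: cluster histogram point values
# into groups (e.g., left-side vs right-side values). Data are synthetic.

def kmeans_1d(points, k=2, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[idx].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

left_side = [10.0, 11.0, 12.0, 9.5]
right_side = [40.0, 41.5, 39.0, 42.0]
centers = kmeans_1d(left_side + right_side)
```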
Energy Technology Data Exchange (ETDEWEB)
Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)
2017-06-15
A gamma energy identification algorithm using spectral decomposition combined with a smoothing method is suggested to confirm the existence of artificial radioisotopes. The algorithm combines an original pattern-recognition method with a smoothing method to enhance the gamma energy identification performance of radiation sensors that have low energy resolution. The gamma energy identification algorithm for the compact radiation sensor is a three-step refinement process. First, the magnitude set is calculated by the original spectral decomposition. Second, the modeling error in the magnitude set is reduced by the smoothing method. Third, the expected gamma energy is decided based on the enhanced magnitude set resulting from the spectral decomposition with the smoothing method. The algorithm was optimized for the designed radiation sensor, composed of a CsI(Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. By adopting this modeling-error reduction method, the gamma energy was accurately identified for a single gamma energy, and the average error decreased by half for multiple gamma energies in comparison to the original spectral decomposition. In addition, the number of repeated calculations also decreased by half, even in low-fluence conditions under 10⁴ (per 0.09 cm² of the scintillator surface). Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. It can therefore help reduce public anxiety about radiation exposure by determining the presence of artificial radionuclides in the vicinity.
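The smoothing idea can be sketched as a simple moving average applied to a synthetic low-resolution spectrum before picking the peak channel; the spectrum shape, window width, and channel numbers are illustrative stand-ins, not the paper's actual decomposition.

```python
# Sketch of the smoothing step: a moving average suppresses ripple in a
# low-resolution spectrum before the peak channel is identified.

def moving_average(spectrum, window=5):
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out

# Gaussian-like peak at channel 60 with a deterministic ripple on top.
spectrum = [
    100 * 2.718281828 ** (-((ch - 60) ** 2) / 50.0) + (8 if ch % 2 else 0)
    for ch in range(128)
]
smoothed = moving_average(spectrum)
peak_channel = max(range(len(smoothed)), key=smoothed.__getitem__)
```

On the raw spectrum the ripple shifts the argmax off the true peak; after smoothing, the peak channel is recovered.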
Deb, Anish; Sarkar, Gautam
2016-01-01
This book introduces a new set of orthogonal hybrid functions (HF) which approximates time functions in a piecewise linear manner which is very suitable for practical applications. The book presents an analysis of different systems namely, time-invariant system, time-varying system, multi-delay systems---both homogeneous and non-homogeneous type- and the solutions are obtained in the form of discrete samples. The book also investigates system identification problems for many of the above systems. The book is spread over 15 chapters and contains 180 black and white figures, 18 colour figures, 85 tables and 56 illustrative examples. MATLAB codes for many such examples are included at the end of the book.
Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong
2018-06-01
The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving-condition recognition technology, which contains a pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving-pattern recognizer from a vehicle's driving information. The multi-mode strategy can automatically switch to a genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
Energy Technology Data Exchange (ETDEWEB)
Picon-Ruiz, A.; Echazarra-Higuet, J.; Bereciartua-Perez, A.
2010-07-01
Waste Electrical and Electronic Equipment (WEEE) constitutes 4% of the municipal waste in Europe and increases by 16-28% every five years. Nowadays, Europe produces 6.5 million tonnes of WEEE per year, and currently 90% goes to landfill. WEEE is growing 3 times faster than municipal waste, and this figure is expected to increase to 12 million tonnes by 2015. The aim of this paper is to apply a new technology for separating non-ferrous metal waste from WEEE, by identifying materials through multi- and hyperspectral imaging and integrating the technique into a recycling plant. This technology will overcome the shortcomings of current methods, which are unable to separate valuable materials that are very similar in colour, size or shape. For this reason, it is necessary to develop new algorithms able to distinguish among these materials and to meet the timing requirements. (Author). 22 refs.
Wang, Nanyi; Wang, Lirong; Xie, Xiang-Qun
2017-11-27
Molecular docking is widely applied to computer-aided drug design and has become relatively mature in recent decades. Application of docking in modeling varies from single lead compound optimization to large-scale virtual screening. The performance of molecular docking is highly dependent on the protein structures selected. It is especially challenging for large-scale target prediction research when multiple structures are available for a single target. Therefore, we have established ProSelection, a docking preferred-protein selection algorithm, in order to generate the proper structure subset(s). By the ProSelection algorithm, protein structures of "weak selectors" are filtered out whereas structures of "strong selectors" are kept. Specifically, a structure which has a good statistical performance in distinguishing active from inactive ligands is defined as a strong selector. In this study, 249 protein structures of 14 autophagy-related targets were investigated. Surflex-dock was used as the docking engine to distinguish active and inactive compounds against these protein structures. Both the t test and the Mann-Whitney U test were used to distinguish the strong from the weak selectors, based on the normality of the docking score distribution. The suggested docking score threshold for active ligands (SDA) was generated for each strong-selector structure according to the receiver operating characteristic (ROC) curve. The performance of ProSelection was further validated by predicting the potential off-targets of 43 U.S. Food and Drug Administration approved small-molecule antineoplastic drugs. Overall, ProSelection will accelerate the computational work in protein structure selection and could be a useful tool for molecular docking, target prediction, and protein-chemical database establishment research.
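The strong/weak-selector decision can be sketched with a directly computed Mann-Whitney U statistic (tie handling simplified to half-counts); the docking scores below are hypothetical.

```python
# Sketch of the strong/weak selector test: compare docking scores of active
# vs inactive ligands with a Mann-Whitney U statistic. Scores are made up.

def mann_whitney_u(actives, inactives):
    """U = number of (active, inactive) pairs where the active scores higher."""
    u = 0.0
    for a in actives:
        for b in inactives:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

active_scores = [9.1, 8.7, 8.2, 7.9, 7.5]
inactive_scores = [6.4, 6.1, 5.8, 5.2, 4.9]
u = mann_whitney_u(active_scores, inactive_scores)
max_u = len(active_scores) * len(inactive_scores)
# A U close to its maximum suggests the structure separates actives well,
# i.e., a "strong selector" in the sense of the paper.
```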
Directory of Open Access Journals (Sweden)
José Antonio Agüero-Fernández
2015-11-01
Beta-barrel proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in the virulence mechanisms of pathogens have turned them into an interesting target in studies searching for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-based application was implemented. This system, named Beta Predictor, is capable of processing from one protein sequence up to complete predicted proteomes of 10,000 proteins, with a runtime of about 0.019 seconds per 500-residue protein, and it allows graphical analyses for each protein. The application was evaluated with a validation set of 535 non-redundant proteins: 102 TMBBs and 433 non-TMBBs. The sensitivity, specificity, Matthews correlation coefficient, positive predictive value and accuracy were calculated as 85.29%, 95.15%, 78.72%, 80.56% and 93.27%, respectively. The performance of this system was compared with the TMBB predictors BOMP and TMBHunt on the same validation set. In the order given above, the results were 76.47%, 99.31%, 83.05%, 96.30% and 94.95% for BOMP, and 78.43%, 92.38%, 67.90%, 70.17% and 89.78% for TMBHunt. Beta Predictor was outperformed by BOMP but showed better behavior than TMBHunt.
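The reported rates can be reproduced from a confusion matrix. The counts below are back-derived from the published percentages and cohort sizes (102 TMBBs, 433 non-TMBBs), so they are an inference, not figures taken from the paper.

```python
import math

# Confusion-matrix metrics for a 535-protein validation set.
# TP/FN/TN/FP are back-derived from the reported rates (an assumption).
TP, FN = 87, 15    # of the 102 TMBBs
TN, FP = 412, 21   # of the 433 non-TMBBs

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
ppv = TP / (TP + FP)                      # positive predictive value
accuracy = (TP + TN) / (TP + TN + FP + FN)
mcc = (TP * TN - FP * FN) / math.sqrt(    # Matthews correlation coefficient
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
```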
New applications of radio-isotopes in France
International Nuclear Information System (INIS)
Leveque, P.; Hours, R.; Martinelli, P.; May, S.; Sandier, J.
1958-01-01
By measuring the transmission of a flat beam of thermal neutrons, the moisture content of a parallelepiped-shaped soil sample can be measured to ±4 per cent and the moisture gradient along the longitudinal axis determined. The method permits the determination of chemically bound water and the measurement of the diffusion coefficients of water into weakly hydrogenated materials. By measuring the intensity of fluorescence excited by β radiation it is possible to determine the thickness of metal coatings of less than 20 μ for metals of atomic number less than 40. This method has been applied to chromium, manganese, iron, cobalt, nickel, copper and zinc. By using a suitable metal filter it is possible to measure coating thicknesses of metals differing by only one atomic number from the supporting material. By employing labelled cement it is possible to determine the extent of movement of cement grout used for soil stabilization and waterproofing. The kinetics of ion exchange of different ultramarines in aqueous solutions were studied by tracing the movement of labelled ions in the solution or in the exchanger. Values of the diffusion coefficients and activation energies were determined from the exchange studies. (author) [fr
International Nuclear Information System (INIS)
Abdallh, A.; Crevecoeur, G.; Dupré, L.
2012-01-01
The magnetic characteristics of the core materials of electromagnetic devices can be recovered by solving an inverse problem, where sets of measurements need to be properly interpreted using a forward numerical model of the device. However, uncertainties in the geometrical parameter values of the forward model lead to appreciable errors in the recovered values of the material parameters. In this paper, we propose an effective inverse approach in which the influence of the uncertainties in the geometrical model parameters is minimized. In this approach, the cost function that needs to be minimized is adapted with respect to the uncertain geometrical model parameters. The proposed methodology is applied to the identification of the magnetizing B–H curve of the magnetic material of an EI core inductor. The numerical results show a significant reduction of the recovery errors in the identified magnetic material parameter values. Moreover, the proposed methodology is validated by solving an inverse problem starting from real magnetic measurements. - Highlights: ► A new method to minimize the influence of the uncertain parameters in inverse problems is proposed. ► The technique is based on adapting iteratively the objective function that needs to be minimized. ► The objective function is adapted by the model response sensitivity to the uncertain parameters. ► The proposed technique is applied for recovering the B–H curve of an EI core inductor material. ► The error in the inverse problem solution is dramatically reduced using the proposed methodology.
Sadeghieh, Ali; Sazgar, Hadi; Goodarzi, Kamyar; Lucas, Caro
2012-01-01
This paper presents a new intelligent approach for adaptive control of a nonlinear dynamic system. A modified version of the brain emotional learning based intelligent controller (BELBIC), a bio-inspired algorithm based upon a computational model of emotional learning which occurs in the amygdala, is utilized for position control of a real laboratory rotary electro-hydraulic servo (EHS) system. EHS systems are known to be nonlinear and non-smooth due to many factors such as leakage, friction, hysteresis, null shift, saturation, dead zone, and especially the fluid flow expression through the servo valve. Large values of these factors can easily degrade control performance in the presence of a poor design. In this paper, a mathematical model of the EHS system is derived, and then the parameters of the model are identified using the recursive least squares method. In the next step, a BELBIC is designed based on this dynamic model and utilized to control the real laboratory EHS system. To prove the effectiveness of the modified BELBIC's online learning ability in reducing the overall tracking error, results have been compared to those obtained from an optimal PID controller, an auto-tuned fuzzy PI controller (ATFPIC), and a neural network predictive controller (NNPC) under similar circumstances. The results demonstrate not only excellent improvement in control action, but also less energy consumption. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
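The recursive least squares identification step mentioned above can be sketched as follows; the two-parameter linear model and the synthetic, noise-free data are illustrative, not the EHS model itself.

```python
# Recursive least squares (RLS) sketch for a linear-in-parameters model
# y = theta1*u1 + theta2*u2. P is the inverse-correlation matrix,
# initialized large to encode a flat prior; lam is the forgetting factor.

def rls_identify(samples, n_params=2, lam=1.0, p0=1000.0):
    theta = [0.0] * n_params
    P = [[p0 if i == j else 0.0 for j in range(n_params)]
         for i in range(n_params)]
    for phi, y in samples:
        Pphi = [sum(P[i][j] * phi[j] for j in range(n_params))
                for i in range(n_params)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n_params))
        k = [v / denom for v in Pphi]                   # gain vector
        err = y - sum(theta[i] * phi[i] for i in range(n_params))
        theta = [theta[i] + k[i] * err for i in range(n_params)]
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(n_params)]
             for i in range(n_params)]
    return theta

# Noise-free data generated from true parameters (2, 3).
data = [((u1, u2), 2.0 * u1 + 3.0 * u2)
        for u1, u2 in [(1, 0), (0, 1), (1, 1), (2, 1), (1, 3), (4, 2)]]
theta = rls_identify(data)
```

With noiseless, persistently exciting data the estimates converge to the true parameters after a handful of updates.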
Directory of Open Access Journals (Sweden)
S. Salehi
2016-06-01
Lichens are the dominant autotrophs of polar and subpolar ecosystems and commonly encrust rock outcrops. Spectral mixing of lichens and bare rock can shift the diagnostic spectral features of materials of interest, leading to misinterpretation and false positives if mapping is based on perfect spectral-matching methodologies. Therefore, the ability to distinguish lichen coverage from rock, and to decompose a mixed pixel into a collection of pure reflectance spectra, can improve the applicability of hyperspectral methods for mineral exploration. The objective of this study is to propose a robust lichen index that can be used to estimate lichen coverage, regardless of the mineral composition of the underlying rocks. The performance of three index structures (ratio, normalized ratio and subtraction) has been investigated using synthetic linear mixtures of pure rock and lichen spectra with prescribed mixing ratios. Laboratory spectroscopic data were obtained from lichen-covered samples collected from the Karrat, Liverpool Land, and Sisimiut regions in Greenland. The spectra were then resampled to Hyperspectral Mapper (HyMAP) resolution in order to further investigate the functionality of the indices for the airborne platform. In both resolutions, a Pattern Search (PS) algorithm was used to identify the optimal band wavelengths and bandwidths for the lichen index. The results of our band optimization procedure revealed that the ratio between R894-1246 and R1110 explains most of the variability in the hyperspectral data at the original laboratory resolution (R² = 0.769). However, the normalized index incorporating R1106-1121 and R904-1251 yields the best results for the HyMAP resolution (R² = 0.765).
AUTHOR|(CDS)2071660; Schael, Stefan; Rohlf, James W
2007-01-01
In the past thirty years particle physics has developed rapidly, resulting in the formulation of the Standard Model, which seems to provide, at least in principle, a microscopic description for all known physical phenomena except gravity. The Standard Model is not complete, e.g. it lacks any explanation for the pattern of particle masses. The Higgs mechanism provides a solution to the problem of how particles acquire their masses. It implies the existence of at least one new particle, the Higgs boson H0, which has not yet been observed. The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will be switched on in winter 2007. If the Higgs boson exists, the LHC will be able to detect it. Depending on the mass of the Higgs boson, physicists have a clear idea regarding its experimental signature. For quite low masses (50 < mH0 < 130 GeV) the Higgs will predominantly decay into two b-quarks. The present study describes the investigation of the identification capabi...
Advances of operational modal identification
International Nuclear Information System (INIS)
Zhang, L.
2001-01-01
Operational modal analysis has shown many advantages compared to the traditional approach. In this paper, the development of ambient modal identification in the time domain is summarized. Mathematical models for modal identification are presented as a unified framework for time-domain (TD) system realization algorithms, such as polyreference complex exponential (PRCE), extended Ibrahim time domain (EITD) and the eigensystem realization algorithm (ERA), as well as the recently developed stochastic subspace technique (SST). The latest technique, frequency domain decomposition (FDD), is introduced for operational modal identification; as a frequency-domain (FD) technique it has many advantages. Applications of operational modal analysis in civil and mechanical engineering have shown the success and accuracy of the advanced operational modal identification algorithms, the FDD and SST techniques. The major issues of TD and FD operational modal identification are also discussed. (author)
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updates promise to reduce this growth to V^(4/3)
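A global accept/reject (Metropolis) stage can be sketched on a single degree of freedom; the quadratic action below is a toy stand-in for a lattice gauge action, and all parameters are illustrative.

```python
import math
import random

# Metropolis accept/reject sketch: propose an update and accept it with
# probability min(1, exp(-dS)), here for one site with action S(x) = x**2/2.

def metropolis_samples(n_steps, step=1.0, seed=7):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        d_action = (proposal ** 2 - x ** 2) / 2.0
        if d_action <= 0 or rng.random() < math.exp(-d_action):
            x = proposal                       # accept; otherwise keep x
        samples.append(x)
    return samples

samples = metropolis_samples(20000)
mean = sum(samples) / len(samples)
```

The chain samples the Gaussian exp(-x²/2), so the running mean stays near zero and the variance near one.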
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Dynamic training algorithm for dynamic neural networks
International Nuclear Information System (INIS)
Tan, Y.; Van Cauwenberghe, A.; Liu, Z.
1996-01-01
The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS-type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the vehicle's actual center of rotation coincide with the road's center of curvature, by adjusting the kinematic center of rotation. The road's center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road's center of curvature, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. Application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
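The kinematic condition can be sketched for the 4WS case: choose the steering angles so that the perpendiculars to both wheels pass through a prescribed center of rotation. The geometry (axle distances a, b and radius R) and all numbers are illustrative, not from the paper.

```python
import math

def four_ws_angles(a, b, R):
    """Vehicle at origin heading +x; desired center of rotation at (0, R).

    Front axle sits at (+a, 0), rear axle at (-b, 0). Each wheel heading
    must be perpendicular to the line joining the wheel to the center.
    """
    delta_front = math.atan2(a, R)      # front wheels steer toward the center
    delta_rear = -math.atan2(b, R)      # rear wheels steer in opposite phase
    return delta_front, delta_rear

a, b, R = 1.2, 1.4, 20.0
df, dr = four_ws_angles(a, b, R)

# Perpendicularity residuals: heading vector dotted with the radius vector.
front_residual = -a * math.cos(df) + R * math.sin(df)
rear_residual = b * math.cos(dr) + R * math.sin(dr)
```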
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
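The packed-by-columns layout the subroutines rearrange from can be illustrated with the standard indexing formula for a lower-triangular matrix; this sketch shows plain LAPACK-style packing into n(n+1)/2 slots, not the authors' block hybrid format.

```python
def packed_index(i, j, n):
    """0-based position of element A[i, j] (with i >= j) of an n x n
    lower-triangular matrix whose columns are packed consecutively
    into a flat array of n*(n+1)//2 entries."""
    assert i >= j, "only the lower triangle is stored"
    return i + j * n - j * (j + 1) // 2

n = 4
# Enumerating the triangle column by column visits every packed slot
# exactly once, in order, with no gaps.
positions = [packed_index(i, j, n) for j in range(n) for i in range(j, n)]
```

Packing halves the memory of full storage, which is why the abstract contrasts n(n+1)/2 variables against the n² of the full-storage LAPACK routine.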
Estimating spatial travel times using automatic vehicle identification data
2001-01-01
Prepared ca. 2001. The paper describes an algorithm that was developed for estimating reliable and accurate average roadway link travel times using Automatic Vehicle Identification (AVI) data. The algorithm presented is unique in two aspects. First, ...
Backtracking algorithm for lepton reconstruction with HADES
International Nuclear Information System (INIS)
Sellheim, P
2015-01-01
The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April - May 2012 HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u with the highest multiplicities measured so far. The track reconstruction and particle identification in this high track density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector. Its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. In order to improve lepton identification, the backtracking algorithm presented here was developed. In this contribution we will show the results of the algorithm compared to the currently applied method for e± identification. Efficiency and purity of a reconstructed e± sample will be discussed as well. (paper)
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Electron identification capabilities of CBM
Energy Technology Data Exchange (ETDEWEB)
Lebedev, Semen [GSI, Darmstadt (Germany)]|[JINR, Dubna (Russian Federation)
2008-07-01
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In the case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by RICH and TRD detectors. In this contribution we will present routines which have been developed for electron identification in CBM. A RICH ring recognition algorithm based on the Hough Transform has been implemented. An ellipse fitting algorithm has been elaborated because most of the CBM RICH rings have elliptic shapes; moreover, it helps to improve the ring-track matching and electron identification procedures. An Artificial Neural Network can be used in order to suppress fake rings. The electron identification in RICH is substantially improved by the use of TRD information, for which 3 different algorithms are implemented. Results of primary electron identification are presented. All developed algorithms were tested on large statistics of simulated events and are included in the CBM software framework for common use.
Identification of physical models
DEFF Research Database (Denmark)
Melgaard, Henrik
1994-01-01
of the model with the available prior knowledge. The methods for identification of physical models have been applied in two different case studies. One case is the identification of thermal dynamics of building components. The work is related to a CEC research project called PASSYS (Passive Solar Components......The problem of identification of physical models is considered within the frame of stochastic differential equations. Methods for estimation of parameters of these continuous time models based on discrete time measurements are discussed. The important algorithms of a computer program for ML or MAP...... design of experiments, which is for instance the design of an input signal that is optimal according to a criterion based on the information provided by the experiment. Also model validation is discussed. An important verification of a physical model is to compare the physical characteristics...
Optimized Bayesian dynamic advising theory and algorithms
Karny, Miroslav
2006-01-01
Written by one of the world's leading groups in the area of Bayesian identification, control, and decision making, this book provides the theoretical and algorithmic basis of optimized probabilistic advising. Starting from abstract ideas and formulations, and culminating in detailed algorithms, the book comprises a unified treatment of an important problem of the design of advisory systems supporting supervisors of complex processes. It introduces the theoretical and algorithmic basis of the developed advising, relying on a novel and powerful combination of black-box modelling by dynamic mixture models
A speedup technique for (l, d)-motif finding algorithms
Directory of Open Access Journals (Sweden)
Dinh Hieu
2011-03-01
Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS), (l, d)-motif search (or Planted Motif Search, PMS), and Edit-distance-based Motif Search (EMS). In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm always identifies the motifs, while an approximate algorithm may fail to identify some or all of the motifs. The exact version of the PMS problem has been shown to be NP-hard. Exact algorithms proposed in the literature for PMS take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. These experimental results show that our speedup technique is indeed very
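The core condition that defines the (l, d) planted motif problem — a length-l string occurring in every input sequence with at most d mismatches — can be sketched as a brute-force check. This is only the feasibility test an exact PMS algorithm must perform per candidate, not the speedup technique the paper proposes.

```python
def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def occurs_with_mismatches(motif, seq, d):
    """True if some length-l window of seq is within d mismatches of motif."""
    l = len(motif)
    return any(hamming(motif, seq[i:i + l]) <= d
               for i in range(len(seq) - l + 1))

def is_planted_motif(motif, seqs, d):
    """The (l, d)-motif condition: the motif must occur in EVERY
    sequence with at most d mismatches."""
    return all(occurs_with_mismatches(motif, s, d) for s in seqs)

# Toy instance: "ACGT" is planted with at most 1 mutation per sequence.
seqs = ["ACGTACGT", "AAGTACCT", "TTACGTAG"]
found = is_planted_motif("ACGT", seqs, 1)
```

Exhaustively testing all 4^l candidate motifs this way is what makes exact PMS exponential in l, which is the cost the paper's generic speedup targets.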
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Directory of Open Access Journals (Sweden)
Bocquet V
2017-01-01
Full Text Available Valéry Bocquet Competence Center for Methodology and Statistics, Luxembourg Institute of Health, Luxembourg. Diabetes is a disease whose global prevalence has been rising year after year, and by 2014 more than 400 million individuals were diagnosed with diabetes.1 As a consequence, screening of patients with type 1 or type 2 diabetes has become important, both to estimate the prevalence of diabetes and to treat affected individuals. For that purpose, a two-step algorithm suggested by Sharma et al2 was recently published, whose aim was to identify type 1 or type 2 individuals from a primary care database. The first step of the algorithm was based on the diagnostic records, treatment given, and results obtained from clinical tests. The second part was based on the combination of diagnostic codes, prescribed medications, age at the time of diagnosis, and finally whether the case was prevalent or incident. View original paper by Sharma et al
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Basics of identification measurement technology
Klikushin, Yu N.; Kobenko, V. Yu; Stepanov, P. P.
2018-01-01
None of the available pattern recognition algorithms gives a 100% guarantee, so this direction remains a field of active scientific work and such studies are relevant. It is proposed to develop existing technologies for pattern recognition through the application of identification measurements. The purpose of the study is to identify the possibility of recognizing images using identification measurement technologies. In solving problems of pattern recognition, neural networks and hidden Markov models are mainly used. A fundamentally new approach to the solution of problems of pattern recognition, based on the technology of identification signal measurements (IIS), is proposed. The essence of the IIS technology is the quantitative evaluation of the shape of images using special tools and algorithms.
Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N
2009-12-15
The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. This algorithm was implemented in FIRAN, a software package designed for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used for processing mass spectra of sodium polystyrene sulfonate (PSS, M(w) = 2200 Da) and polymethacrylate (PMA, M(w) = 3290 Da) which produce heavy multielement and multiply-charged ions. Application of TMDS identified unambiguously the monomers present in the polymers, consistent with their structure: C(8)H(7)SO(3)Na for PSS and C(4)H(6)O(2) for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS-algorithm to processing data on the Suwannee River FA has proven its unique capacities in analysis of spectra with high peak density: it has not only identified the known small building blocks in the structure of FA such as CH(2), H(2), C(2)H(2)O, O but the heavier unit at 154.027 amu. The latter was
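The core idea behind mass difference statistics can be illustrated in miniature: histogram all pairwise peak-mass differences and look for the most frequent one, which hints at a repeating building block. This toy sketch omits the real TMDS machinery (noise thresholds, virtual elements, formula assignment) and uses synthetic peak masses.

```python
from collections import Counter
from itertools import combinations

def most_frequent_difference(masses, decimals=3):
    """Histogram all pairwise peak-mass differences; the most frequent
    difference suggests a repeating building block (e.g. a CH2 unit)."""
    diffs = Counter(round(abs(a - b), decimals)
                    for a, b in combinations(masses, 2))
    return diffs.most_common(1)[0][0]

# Synthetic 'spectrum': an oligomer series spaced by 14.016 Da (a CH2
# unit) plus two unrelated peaks that contribute only unique differences.
peaks = [100.0, 114.016, 128.032, 142.048, 156.064, 201.7, 333.3]
unit = most_frequent_difference(peaks)
```

Because the series contributes the same difference many times while unrelated peaks contribute each difference only once, the repeating unit dominates the histogram — the "blind search for unknown building blocks" described above.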
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30 to 60 in computing time and a factor of over 100 in matrix storage space.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
[A new peak detection algorithm of Raman spectra].
Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing
2014-01-01
The authors proposed a new Raman peak recognition method named the bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method through MATLAB, and then tested the algorithm with real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy with the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviation of the peak position identification error of the algorithm are both less than those of the continuous wavelet transform method. Simulation analysis and experimental verification prove that the new algorithm possesses the following advantages: no need for human intervention, no need for de-noising or background removal, higher recognition speed and higher recognition accuracy. The proposed algorithm is operable in Raman peak identification.
Structural Identification Problem
Directory of Open Access Journals (Sweden)
Suvorov Aleksei
2016-01-01
Full Text Available The identification problem of existing structures through the Quasi-Newton algorithm and its modification, the trust-region algorithm, is discussed. For structural problems that can be represented by means of mathematical modelling in a finite element code, the discussed method is extremely useful. The nonlinear minimization problem of the L2 norm for structures with linear elastic behaviour is solved by using the Optimization Toolbox of Matlab. The direct and inverse procedures for the composition of the desired function to minimize are illustrated for a spatial 3D truss structure as well as for a problem of plane finite elements. The truss identification problem is solved with 2 and 3 unknown parameters in order to compare the computational efforts and for graphical purposes. The particular commands of the Matlab codes are presented in this paper.
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
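The recursion equations for the amplitudes in Grover's algorithm mentioned above can be written down exactly for one marked element out of N: the oracle flips the sign of the marked amplitude and the diffusion step inverts every amplitude about the mean. The sketch below iterates them; N and the iteration count are illustrative.

```python
import math

def grover_amplitudes(N, steps):
    """Exact recursion for the marked (k) and unmarked (l) amplitudes
    in Grover's search over N items with a single marked element.
    Each step combines the oracle sign flip with inversion about the mean."""
    k = l = 1.0 / math.sqrt(N)   # uniform initial superposition
    for _ in range(steps):
        k, l = ((N - 2) * k + 2 * (N - 1) * l) / N, ((N - 2) * l - 2 * k) / N
    return k, l

N = 16
optimal = int(round(math.pi / 4 * math.sqrt(N)))  # ~(pi/4)*sqrt(N) steps
k, l = grover_amplitudes(N, optimal)
success = k * k   # probability of measuring the marked item
```

After roughly (pi/4)*sqrt(N) iterations the marked amplitude is near 1 — the quadratic speedup over the ~N/2 classical queries; iterating further makes the success probability oscillate back down.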
Sequential blind identification of underdetermined mixtures using a novel deflation scheme.
Zhang, Mingjian; Yu, Simin; Wei, Gang
2013-09-01
In this brief, we consider the problem of blind identification in underdetermined instantaneous mixture cases, where there are more sources than sensors. A new blind identification algorithm, which estimates the mixing matrix in a sequential fashion, is proposed. By using the rank-1 detecting device, blind identification is reformulated as a constrained optimization problem. The identification of one column of the mixing matrix hence reduces to an optimization task for which an efficient iterative algorithm is proposed. The identification of the other columns of the mixing matrix is then carried out by a generalized eigenvalue decomposition-based deflation method. The key merit of the proposed deflation method is that it does not suffer from error accumulation. The proposed sequential blind identification algorithm provides more flexibility and better robustness than its simultaneous counterpart. Comparative simulation results demonstrate the superior performance of the proposed algorithm over the simultaneous blind identification algorithm.
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have been recently used to eliminate sign problems that plague Monte Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm.
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
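The two-dimensional special case that LLL generalizes — classical Lagrange-Gauss reduction — can be sketched compactly: repeatedly subtract the best integer multiple of the shorter vector from the longer one until no improvement is possible. This is an illustration of lattice basis reduction, not the formally verified LLL implementation of the paper.

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D integer lattice basis (u, v):
    returns a basis of the same lattice whose first vector is a
    shortest nonzero lattice vector (the 2D analogue of LLL's goal)."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract the integer multiple of u closest to v's projection on u.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

# The skewed basis (1, 0), (7, 1) reduces to the orthogonal unit basis.
u, v = gauss_reduce((1, 0), (7, 1))
```

In two dimensions this procedure solves SVP exactly; LLL relaxes the size condition (via the Lovász condition) so that an analogous sweep over n vectors still runs in polynomial time while only approximating the shortest vector.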
FORENSIC LINGUISTICS: AUTOMATIC WEB AUTHOR IDENTIFICATION
Directory of Open Access Journals (Sweden)
A. A. Vorobeva
2016-03-01
Full Text Available Internet is anonymous; this allows posting under a false name, on behalf of others, or simply anonymously. Thus, individuals and criminal or terrorist organizations can use the Internet for criminal purposes, hiding their identity to avoid prosecution. Existing approaches and algorithms for author identification of web posts in Russian are not effective. The development of proven methods, techniques and tools for author identification is an extremely important and challenging task. In this work an algorithm and software for authorship identification of web posts were developed. During the study the effectiveness of several classification and feature selection algorithms was tested. The algorithm includes some important steps: 1) feature extraction; 2) feature discretization; 3) feature selection with the most effective Relief-f algorithm (to find the feature set with the most discriminating power for each set of candidate authors and maximize the accuracy of author identification); 4) author identification with a model based on the Random Forest algorithm. The Random Forest and Relief-f algorithms are used to identify the author of a short text in Russian for the first time. An important step of author attribution is data preprocessing - discretization of continuous features; earlier this was not applied to improve the efficiency of author identification. The software outputs the top q authors with maximum probabilities of authorship. This approach is helpful for manual analysis in forensic linguistics, when the developed tool is used to narrow the set of candidate authors. In experiments on 10 candidate authors, the real author appeared in the top 3 in 90.02% of cases and in first place in 70.5% of cases.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
Energy Technology Data Exchange (ETDEWEB)
Karpius, Peter Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-18
The objective of this training module is to examine the process of using gamma spectroscopy for radionuclide identification; apply pattern recognition to gamma spectra; identify methods of verifying energy calibration; and discuss potential causes of isotope misidentification.
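One of the verification steps named above — checking energy calibration — often amounts to a linear fit through peaks of known energy. The sketch below uses the two Co-60 lines (1173.2 and 1332.5 keV are their well-known energies); the channel numbers are hypothetical, and real calibrations may use more points or a quadratic term.

```python
def linear_calibration(ch1, e1, ch2, e2):
    """Two-point linear energy calibration: returns a function mapping
    an ADC channel number to energy in keV."""
    gain = (e2 - e1) / (ch2 - ch1)   # keV per channel
    offset = e1 - gain * ch1         # keV at channel 0
    return lambda ch: gain * ch + offset

# Hypothetical channels at which the two Co-60 peaks were observed.
cal = linear_calibration(1170.0, 1173.2, 1332.0, 1332.5)
energy = cal(661.0)   # energy assigned to a peak seen at channel 661
```

If a known line then lands at the wrong energy, the calibration has drifted — one of the misidentification causes the module discusses, since pattern matching against a library assumes correct energies.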
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
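The iteration just described — recompute PageRank values from the link list and repeat until the change falls below a threshold — can be sketched directly; the damping factor 0.85, the dangling-page handling, and the three-page example are common illustrative choices, not necessarily the thesis's exact formulation.

```python
def pagerank(links, d=0.85, tol=1e-10):
    """Iterative PageRank. `links[p]` lists the pages p links to.
    Repeats the update until no value changes by more than `tol`.
    Pages with no outgoing links spread their rank uniformly."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1 - d) / n for p in pages}      # teleportation share
        for p, outs in links.items():
            if outs:
                share = pr[p] / len(outs)
                for q in outs:
                    new[q] += d * share            # rank flows along links
            else:                                  # dangling page
                for q in pages:
                    new[q] += d * pr[p] / n
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new

ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Page C is linked to by both A and B, so it ends with a higher value than B, which only A links to — the kind of outcome the web application is meant to visualize.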
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
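A threshold-clipped three-level update of the kind described can be sketched as follows: the input in the weight update is replaced by its quantized version (+1, 0, or -1), so each tap update needs no multiplication by the input. The filter length, step size, threshold, and test channel below are illustrative, not the paper's parameters.

```python
import numpy as np

def quantize(x, t):
    """Three-level clipping: +1 above threshold t, -1 below -t, else 0."""
    return np.where(x > t, 1.0, np.where(x < -t, -1.0, 0.0))

def clipped_lms(x, d, n_taps=4, mu=0.02, t=0.5):
    """LMS where the input vector in the weight update is replaced by
    its clipped version -- the clipped-LMS family the abstract modifies."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap-delay line [x_n, x_{n-1}, ...]
        e = d[n] - w @ u                    # output error
        w = w + mu * e * quantize(u, t)     # clipped weight update
    return w

# Identify a known FIR channel from its noiseless input/output record.
rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
w = clipped_lms(x, d)
```

Raising the threshold zeroes out more updates, which is the trade-off the abstract notes: better tracking and lower cost, at the price of slower convergence.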
Takahashi, Hiro; Sai, Kimie; Saito, Yoshiro; Kaniwa, Nahoko; Matsumura, Yasuhiro; Hamaguchi, Tetsuya; Shimada, Yasuhiro; Ohtsu, Atsushi; Yoshino, Takayuki; Doi, Toshihiko; Okuda, Haruhiro; Ichinohe, Risa; Takahashi, Anna; Doi, Ayano; Odaka, Yoko; Okuyama, Misuzu; Saijo, Nagahiro; Sawada, Jun-ichi; Sakamoto, Hiromi; Yoshida, Teruhiko
2014-01-01
Interindividual variation in a drug response among patients is known to cause serious problems in medicine. Genomic information has been proposed as the basis for “personalized” health care. The genome-wide association study (GWAS) is a powerful technique for examining single nucleotide polymorphisms (SNPs) and their relationship with drug response variation; however, when using only GWAS, it often happens that no useful SNPs are identified due to multiple testing problems. Therefore, in a previous study, we proposed a combined method consisting of a knowledge-based algorithm, 2 stages of screening, and a permutation test for identifying SNPs. In the present study, we applied this method to a pharmacogenomics study where 109,365 SNPs were genotyped using Illumina Human-1 BeadChip in 168 cancer patients treated with irinotecan chemotherapy. We identified the SNP rs9351963 in potassium voltage-gated channel subfamily KQT member 5 (KCNQ5) as a candidate factor related to incidence of irinotecan-induced diarrhea. The p value for rs9351963 was 3.31×10⁻⁵ in Fisher's exact test and 0.0289 in the permutation test (when multiple testing problems were corrected). Additionally, rs9351963 was clearly superior to the clinical parameters and the model involving rs9351963 showed sensitivity of 77.8% and specificity of 57.6% in the evaluation by means of logistic regression. Recent studies showed that KCNQ4 and KCNQ5 genes encode members of the M channel expressed in gastrointestinal smooth muscle and suggested that these genes are associated with irritable bowel syndrome and similar peristalsis diseases. These results suggest that rs9351963 in KCNQ5 is a possible predictive factor of incidence of diarrhea in cancer patients treated with irinotecan chemotherapy and for selecting chemotherapy regimens, such as irinotecan alone or a combination of irinotecan with a KCNQ5 opener. Nonetheless, clinical importance of rs9351963 should be further elucidated. PMID:25127363
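A permutation test of the kind used above to address the multiple-testing problem can be sketched generically: shuffle the sample labels many times and count how often a random relabelling yields a statistic at least as extreme as the observed one. The statistic, group sizes, and data below are illustrative, not the study's genotype data.

```python
import random

def permutation_pvalue(stat, group_a, group_b, n_perm=2000, seed=0):
    """Generic permutation test: the p-value is the fraction of random
    relabellings whose statistic is at least as extreme as the observed
    one (with the +1 correction so the p-value is never exactly zero)."""
    observed = stat(group_a, group_b)
    pooled = group_a + group_b
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabelling
        if stat(pooled[:len(group_a)], pooled[len(group_a):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Two clearly separated toy groups: the permutation p-value is small.
mean_diff = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
p = permutation_pvalue(mean_diff, [5.1, 4.9, 5.3, 5.2], [3.0, 3.2, 2.9, 3.1])
```

Because the null distribution is built from the data itself, the test needs no distributional assumptions, which is why it pairs well with screening pipelines like the one described.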
Directory of Open Access Journals (Sweden)
Xin Li
2008-06-01
Full Text Available Abstract Background Target genes of a transcription factor (TF) Pou5f1 (Oct3/4 or Oct4), which is essential for pluripotency maintenance and self-renewal of embryonic stem (ES) cells, have previously been identified based on their response to Pou5f1 manipulation and occurrence of chromatin-immunoprecipitation (ChIP) binding sites in promoters. However, many responding genes with binding sites may not be direct targets because response may be mediated by other genes and a ChIP-binding site may not be functional in terms of transcription regulation. Results To reduce the number of false positives, we propose to separate responding genes into groups according to direction, magnitude, and time of response, and to apply the false discovery rate (FDR) criterion to each group individually. Using this novel algorithm with stringent statistical criteria (FDR) on Pou5f1 suppression data and published ChIP data, we identified 420 tentative target genes (TTGs) for Pou5f1. The majority of TTGs (372) were down-regulated after Pou5f1 suppression, indicating that Pou5f1 functions as an activator of gene expression when it binds to promoters. Interestingly, many activated genes are potent suppressors of transcription, which include polycomb genes, zinc finger TFs, chromatin remodeling factors, and suppressors of signaling. Similar analysis showed that Sox2 and Nanog also function mostly as transcription activators in cooperation with Pou5f1. Conclusion We have identified the most reliable sets of direct target genes for key pluripotency genes – Pou5f1, Sox2, and Nanog – and found that they predominantly function as activators of downstream gene expression. Thus, most genes related to cell differentiation are suppressed indirectly.
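Applying the FDR criterion to each response group individually, as proposed above, is commonly done with the Benjamini-Hochberg step-up procedure; the sketch below assumes that procedure (the abstract does not specify which FDR method was used) and runs on made-up p-values.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of tests rejected at false-discovery rate q. Applied per response
    group, this is one way to realize the FDR criterion described above."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    cutoff = -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:   # step-up condition p_(r) <= q*r/m
            cutoff = rank
    return sorted(order[:cutoff]) if cutoff > 0 else []

# Illustrative p-values for one response group.
rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6, 0.9])
```

Splitting genes into homogeneous groups before applying the criterion reduces the multiplicity burden within each group, which is the false-positive reduction the Results section claims.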
Directory of Open Access Journals (Sweden)
Taimoor Zahid
2016-09-01
Full Text Available Battery energy storage management for electric vehicles (EVs) and hybrid EVs is the most critical enabling technology since the dawn of electric vehicle commercialization. A battery is a complex electrochemical system whose performance degrades with age and varies with material design. Moreover, monitoring and controlling the internal state of a battery's electrochemical system is tedious and computationally complex. For the Thevenin battery model we established a state-space model, which has the advantage of simplicity and can be easily implemented, and then applied the least-squares method to identify the battery model parameters. However, accurate state of charge (SoC) estimation of a battery, which depends not only on the battery model but also on highly accurate and efficient algorithms, is considered one of the most vital and critical issues for the energy management and power distribution control of EVs. In this paper three different estimation methods, i.e., the extended Kalman filter (EKF), particle filter (PF), and unscented Kalman filter (UKF), are presented to estimate the SoC of LiFePO4 batteries for an electric vehicle. Experimental battery data (current and voltage) are analyzed to identify the Thevenin equivalent model parameters. The SoC is estimated using different open-circuit voltages and compared with respect to estimation accuracy and initialization error recovery. The experimental results showed that these online SoC estimation methods, in combination with different open circuit voltage-state of charge (OCV-SoC) curves, can effectively limit the error, thus guaranteeing accuracy and robustness.
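The least-squares identification step can be sketched as follows. This assumes a first-order (single-RC) Thevenin model rewritten in ARX form, y[k] = a·y[k−1] + b0·i[k] + b1·i[k−1], where y is the overpotential (OCV minus terminal voltage) and i the current; the coefficients and excitation data are synthetic, invented for illustration, not the paper's measurements.

```python
import numpy as np

# Invented "true" discrete-time parameters of the assumed 1-RC Thevenin model
a_true, b0_true, b1_true = 0.95, 0.08, -0.07

rng = np.random.default_rng(1)
i = rng.standard_normal(500)              # synthetic excitation current
y = np.zeros(500)                         # overpotential response
for k in range(1, 500):
    y[k] = a_true * y[k - 1] + b0_true * i[k] + b1_true * i[k - 1]

# Least-squares identification: stack regressors [y[k-1], i[k], i[k-1]]
Phi = np.column_stack([y[:-1], i[1:], i[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b0_hat, b1_hat = theta
```

On noise-free data the recovery is exact; with measurement noise the same regression gives the best linear fit, which the EKF/PF/UKF estimators then use as their process model.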
Optimizations for the EcoPod field identification tool
Directory of Open Access Journals (Sweden)
Yu YuanYuan
2008-03-01
Full Text Available Abstract Background We sketch our species identification tool for palm-sized computers that helps knowledgeable observers with census activities. An algorithm turns an identification matrix into a minimal-length series of questions that guide the operator towards identification. Historic observation data from the census geographic area helps minimize question volume. We explore how much historic data is required to boost performance, and whether the use of history negatively impacts identification of rare species. We also explore how characteristics of the matrix interact with the algorithm, and how best to predict the probability of observing a previously unseen species. Results Point counts of birds taken at Stanford University's Jasper Ridge Biological Preserve between 2000 and 2005 were used to examine the algorithm. A computer identified species by correctly answering the algorithm's questions and counting them. We also explored how the character density of the key matrix and the theoretical minimum number of questions for each bird in the matrix influenced the algorithm. Our investigation of the required probability smoothing determined whether Laplace smoothing of observation probabilities was sufficient, or whether the more complex Good-Turing technique was required. Conclusion Historic data improved identification speed, but only impacted the top 25% most frequently observed birds. For rare birds the history-based algorithms did not impose a noticeable penalty in the number of questions required for identification. For our dataset neither the age of the historic data nor the number of observation years impacted the algorithm. Density of characters for different taxa in the identification matrix did not impact the algorithms. Intrinsic differences in identifying different birds did affect the algorithm, but the differences affected the baseline method of not using historic data to exactly the same degree. We found that Laplace smoothing
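The Laplace smoothing evaluated above is the standard add-k estimator; a minimal sketch (with an invented species list) shows how species absent from the historic counts still receive non-zero probability:

```python
def laplace_smoothed_probs(counts, vocabulary, k=1.0):
    """Add-k (Laplace) smoothing of historic observation frequencies, so that
    species never seen in the census history still get non-zero probability."""
    total = sum(counts.get(s, 0) for s in vocabulary)
    denom = total + k * len(vocabulary)
    return {s: (counts.get(s, 0) + k) / denom for s in vocabulary}

# Invented example: 'warbler' was never observed but must remain identifiable
probs = laplace_smoothed_probs({'towhee': 3, 'junco': 1},
                               ['towhee', 'junco', 'warbler'])
```

Good–Turing smoothing instead reallocates mass based on how many species were seen exactly once; the abstract's question is whether that extra machinery pays off over this simpler estimator.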
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
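The basic compression step underlying these algorithms can be illustrated with a toy majority-vote model: three spins of equal bias are compressed so that one target spin ends up colder. This sketch is an idealized illustration, not the SOPAC procedure itself.

```python
def majority_boost(eps):
    """Polarization of the target spin after one idealized 3-spin compression
    step (majority vote), assuming three spins with equal initial bias eps."""
    p = (1 + eps) / 2                      # probability a single spin is 'up'
    p_major = 3 * p**2 * (1 - p) + p**3    # probability the majority is 'up'
    return 2 * p_major - 1

# Agrees with the closed form often quoted for the basic compression step:
# eps -> (3*eps - eps**3) / 2, i.e. roughly a 3/2 gain for small bias.
```

Iterating such steps recursively, as PAC and SOPAC do, is what turns a per-step factor of about 3/2 into the large purification gains quoted in the abstract.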
Identification of computer graphics objects
Directory of Open Access Journals (Sweden)
Rossinskyi Yu.M.
2016-04-01
Full Text Available The article is devoted to the use of computer graphics methods in problems of creating drawings, charts, drafting, etc. The widespread use of these methods requires the development of efficient algorithms for the identification of objects in drawings. The article analyzes existing algorithms for this problem and considers the possibility of reducing identification time by using graphics editing operations. Editing involves operations such as copying, moving, and deleting specified image objects, and these operations permit reliable methods for identifying the images of objects. Information on the composition of an object's image should include, along with its identity and colour, its spatial location and other characteristics (thickness and style of contour lines, fill style, and so on). To let pixel-level image analysis structure this information, object identifiers must initially be encoded in the objects' colours. The article shows the results of implementing the algorithm for encoding object identifiers. To simplify the construction of drawings of any kind, and to reduce the labour involved, a method of drawing-object identification is proposed that uses the object's colour as its identifier.
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to the NP-complete problems such as 3-satisfiability.
On multiple crack identification by ultrasonic scanning
Brigante, M.; Sumbatyan, M. A.
2018-04-01
The present work develops an approach which reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search, allied to some genetic algorithms, is proposed. The efficiency of the method is demonstrated by solving the problem of simultaneously identifying several linear cracks forming an array in an elastic medium by circular ultrasonic scanning.
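The reduction to functional minimization can be sketched generically. Here a simple pure random search samples a box and keeps the best point; the quadratic "discrepancy" standing in for the ultrasonic misfit, the crack positions, and the bounds are all invented for illustration (the paper's algorithm adds genetic-style refinements on top of this idea).

```python
import random

def random_global_search(f, bounds, n_samples=20000, seed=0):
    """Minimal random-search sketch for minimizing a discrepancy functional f:
    sample uniformly inside the box 'bounds' and keep the best point seen."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Invented 'discrepancy': recover two crack coordinates (1.0, 3.0) from a misfit
target = (1.0, 3.0)
f = lambda x: (x[0] - target[0])**2 + (x[1] - target[1])**2
x_best, v_best = random_global_search(f, [(0.0, 5.0), (0.0, 5.0)])
```

Because the discrepancy functional for multiple cracks is typically multimodal, global sampling of this kind (rather than local gradient descent) is what makes simultaneous identification of several cracks feasible.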
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available In this study, parameter identification of a damped compound pendulum system is performed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to identify the parameters of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses. Finally, a comparative study is conducted between BA and a conventional estimation method (least squares). Based on the results obtained, the MSE produced by the Bat Algorithm (BA) outperforms that of the least-squares (LS) method.
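A heavily simplified bat-algorithm sketch on a one-dimensional objective is shown below. This is not the published BA (loudness and pulse rate are fixed, and the velocity sign convention is chosen here to pull bats toward the current best); all parameters are invented for illustration.

```python
import random

def bat_algorithm(f, bounds, n_bats=20, n_iter=200, seed=3):
    """Very simplified Bat Algorithm sketch: frequency-weighted pull toward the
    best bat, an occasional local random walk, and greedy acceptance."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_bats)]
    vs = [0.0] * n_bats
    best = min(xs, key=f)
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 1.0)            # random frequency per bat
            vs[i] += (best - xs[i]) * freq          # pull velocity toward best
            cand = xs[i] + vs[i]
            if rng.random() < 0.5:                  # local walk around the best bat
                cand = best + 0.01 * rng.gauss(0, 1)
            cand = min(max(cand, lo), hi)           # keep inside the bounds
            if f(cand) < f(xs[i]):                  # greedy acceptance
                xs[i] = cand
        best = min(xs + [best], key=f)
    return best

# Invented objective standing in for the ARX prediction-error (MSE) surface
best = bat_algorithm(lambda x: (x - 2.0) ** 2, (0.0, 10.0))
```

In the paper's setting, f would be the MSE between measured and ARX-predicted pendulum responses as a function of the model coefficients, rather than this toy quadratic.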
The secondary vertex finding algorithm with the ATLAS detector
Heer, Sebastian; The ATLAS collaboration
2017-01-01
A high performance identification of jets, produced via fragmentation of bottom quarks, is crucial for the ATLAS physics program. These jets can be identified by exploiting the presence of cascade decay vertices from bottom hadrons. A general vertex-finding algorithm is introduced and its application to the search for secondary vertices inside jets is described. Kinematic properties of the reconstructed vertices are used to construct several b-jet identification algorithms. The features and performance of the secondary vertex finding algorithm in a jet, as well as the performance of the jet tagging algorithms, are studied using simulated $pp \to t\bar{t}$ events at a centre-of-mass energy of 13 TeV.
Algorithm 426: Merge sort algorithm [M1]
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
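The recursive two-way merge the abstract describes translates naturally out of ALGOL 60. A Python transcription of the same idea (a generic textbook version, not Bron's published procedure) is:

```python
def merge_sort(xs):
    """Recursive two-way merge sort: split, sort each half, merge in order."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:           # '<=' keeps equal keys stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append whichever half remains
```

The correctness argument is exactly the one the abstract alludes to: each recursive call returns a sorted list by induction, and the merge of two sorted lists is sorted.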
Electron identification capabilities of the CBM experiment at FAIR
Energy Technology Data Exchange (ETDEWEB)
Hoehne, Claudia; Kisel, Ivan [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Lebedev, Semen [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna (Russian Federation); Ososkov, Gennady [Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna (Russian Federation)
2010-07-01
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility in Darmstadt will measure dileptons emitted from the hot and dense phase of heavy-ion collisions. For electron measurements, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by the RICH and TRD detectors. In this contribution, methods developed for electron identification in CBM are presented. A fast and efficient RICH ring recognition algorithm based on the Hough transform has been implemented. An ellipse fitting algorithm has been elaborated, because most CBM RICH rings have elliptic shapes. An artificial neural network can be used to suppress fake rings. Electron identification in the RICH is substantially improved by the use of the TRD detectors, for which several different electron identification algorithms are implemented. Results on electron identification and pion suppression are presented.
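The Hough-transform idea behind the ring finder can be sketched in its simplest, fixed-radius form: every hit votes for all circle centres at the known radius, and the accumulator cell with the most votes is taken as the ring centre. The geometry, radius, and grid below are invented; the CBM algorithm additionally searches over radii and fits ellipses.

```python
import math
from collections import Counter

def hough_ring_centre(points, radius, n_angles=360, grid=0.05):
    """Fixed-radius Hough vote: each hit votes for every centre at distance
    'radius' from it; the accumulator cell with the most votes wins."""
    acc = Counter()
    for (x, y) in points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            cx = round((x + radius * math.cos(t)) / grid) * grid
            cy = round((y + radius * math.sin(t)) / grid) * grid
            acc[(cx, cy)] += 1
    return acc.most_common(1)[0][0]

# Synthetic ring of radius 1 centred at (2, 3), standing in for RICH hits
pts = [(2 + math.cos(a / 10.0), 3 + math.sin(a / 10.0)) for a in range(63)]
centre = hough_ring_centre(pts, radius=1.0)
```

Because every true hit's vote circle passes through the real centre, votes pile up there even in the presence of noise hits, which is what makes the method fast and robust enough for online ring recognition.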
International Nuclear Information System (INIS)
Hevener, Ryne; Yim, Man-Sung; Baird, Ken
2013-01-01
Radiation portal monitors (RPMs) are distributed across the globe in an effort to decrease the illicit trafficking of nuclear materials. Many current-generation RPMs utilize large polyvinyltoluene (PVT) plastic scintillators. These detectors are low cost and reliable but have very poor energy resolution. The lack of spectroscopic detail available from PVT spectra has in the past restricted these systems primarily to simple gross counting measurements. A common approach to extending the capability of PVT detectors beyond simple “gross-gamma” use is to apply a technique known as energy windowing (EW) to perform rough nuclide identification with limited spectral information. An approach to creating EW algorithms was developed in this work utilizing a specific set of calibration sources and modified EW equations; this algorithm provided a degree of increased identification capability. A simulated real-time emulation of the algorithm utilizing actual port-of-entry RPM data supplied by ORNL provided an extensive proving ground for the algorithm. The algorithm is able to identify four potential threat nuclides and the major NORM source with a high degree of accuracy. High-energy masking, a major detriment of EW algorithms, is reduced by the algorithm's design. - Highlights: • Gross counting algorithms do not produce detailed screenings. • Energy windowing algorithms enhance nuclide identification capability. • Proper use of an EW algorithm can identify multiple threat nuclides. • Utilizing a specific set of calibration sources is important for nuclide identification.
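The core EW idea can be sketched generically: sum counts inside a few coarse energy windows, normalize, and compare the resulting ratios against calibrated library signatures. The windows, signatures, and spectrum below are invented toy values, not the authors' calibrated equations.

```python
def window_ratios(spectrum, windows):
    """Fraction of total counts falling inside each energy window, where a
    window is a (lo, hi) range of spectrum channels."""
    grand = float(sum(spectrum)) or 1.0
    return {name: sum(spectrum[lo:hi]) / grand for name, (lo, hi) in windows.items()}

def identify(spectrum, windows, signatures):
    """Return the library entry whose windowed signature is closest (least
    squares) to the observed ratios -- the essence of an EW lookup."""
    obs = window_ratios(spectrum, windows)
    score = lambda name: sum((obs[w] - signatures[name][w]) ** 2 for w in windows)
    return min(signatures, key=score)

# Invented two-window library: a low-energy-heavy signature vs a NORM-like one
windows = {"low": (0, 50), "high": (50, 100)}
signatures = {"threat-like": {"low": 0.9, "high": 0.1},
              "NORM-like":   {"low": 0.2, "high": 0.8}}
spectrum = [10] * 50 + [1] * 50          # synthetic per-channel PVT counts
label = identify(spectrum, windows, signatures)
```

Real EW algorithms of the kind described above use carefully chosen window boundaries and calibration-source ratios precisely so that a high-energy emitter cannot mask a low-energy threat signature.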
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to explore the search space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
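The "DS/rand/1" naming mirrors the trial-vector notation familiar from differential evolution. As an illustration only (not the paper's exact operator), a DE-style "rand/1" donor-generation sketch, with an invented scale factor F, looks like this:

```python
import random

def ds_rand_1(pop, F=0.8, seed=None):
    """Donor generation in the spirit of a 'rand/1' scheme: perturb one random
    individual with the scaled difference of two other random individuals."""
    rng = random.Random(seed)
    donors = []
    for i in range(len(pop)):
        r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
        donors.append([pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                       for d in range(len(pop[i]))])
    return donors
```

A composite strategy in the spirit of CDS would draw each offspring's scheme (and its control parameters) at random from a small pool of such generators, trading convergence speed against population diversity.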
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
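The Bulirsch–Stoer scheme builds a full extrapolation table and is more elaborate than fits here; as a simpler illustration of what sequence extrapolation buys, Aitken's Δ² acceleration (a related but more basic method, with an invented geometric test sequence) already converts slow geometric convergence into an exact limit:

```python
def aitken(seq):
    """One pass of Aitken's delta-squared acceleration over a sequence of
    finite-lattice estimates; returns the accelerated (shorter) sequence."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2 * s1 + s0
        out.append(s2 - (s2 - s1) ** 2 / denom if denom else s2)
    return out

# Geometric approach to the limit 1: a_n = 1 + 0.5**n is accelerated exactly
seq = [1 + 0.5 ** n for n in range(1, 8)]
acc = aitken(seq)
```

Extrapolation methods of this family are exactly what the abstract's comparison is about: with only a handful of finite-lattice values, the quality of the accelerator decides how well the infinite-lattice limit can be estimated.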
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
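As a concrete baseline for the worst-case guarantees such chapters discuss, here is the classic first-fit greedy colouring in Python (a generic textbook method, not a specific algorithm from the chapter); it uses at most Δ+1 colours for a graph of maximum degree Δ:

```python
def greedy_colour(adj):
    """First-fit greedy colouring: visit vertices in the given order and assign
    each one the smallest colour not used by an already-coloured neighbour.
    Never needs more than (max degree + 1) colours."""
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Invented examples: a triangle needs 3 colours, a 3-vertex path only 2
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
```

The sequential model matters here: the vertex ordering alone decides whether greedy colouring is near-optimal or wasteful, which is one of the trade-offs such worst-case analyses make precise.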
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which are referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
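The Additive Search Problem makes fitness a sum of independent per-bit contributions, and Culling is selective breeding with a hard truncation threshold. A toy mutation-only GA with truncation selection on a OneMax-style additive problem (all parameters invented; not the authors' analyzed variant) can be sketched as:

```python
import random

def culling_ga(target, pop_size=60, keep=0.25, mut=0.02, gens=200, seed=7):
    """Toy 'culling'-style GA on an additive problem: fitness is the number of
    bits agreeing with a hidden target string; only the top fraction breeds."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda x: sum(a == b for a, b in zip(x, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: max(2, int(keep * pop_size))]   # cull the rest
        pop = [[bit ^ (rng.random() < mut)              # per-bit mutation
                for bit in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

target = [1, 0] * 20            # hidden 40-bit string (invented)
best = culling_ga(target)
```

On an additive problem like this, the `keep` fraction is the culling point; the paper's result is that its optimal value can be derived independently of the fitness function and population distribution.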
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....
Multiscale global identification of porous structures
Hatłas, Marcin; Beluch, Witold
2018-01-01
The paper is devoted to evolutionary identification of the material constants of porous structures based on measurements conducted on the macro scale. Numerical homogenization with the RVE concept is used to determine the equivalent properties of a macroscopically homogeneous material. Finite element method software is applied to solve the boundary-value problem in both scales. Global optimization methods in the form of an evolutionary algorithm are employed to solve the identification task. Modal analysis is performed to collect the data necessary for the identification. A numerical example presenting the effectiveness of the proposed approach is attached.
Identification of systems with distributed parameters
International Nuclear Information System (INIS)
Moret, J.M.
1990-10-01
The problem of finding a model for the dynamical response of a system with distributed parameters based on measured data is addressed. First, a mathematical formalism is developed in order to obtain the specific properties of such a system. Then a linear iterative identification algorithm is proposed that includes these properties and produces better results than the usual nonlinear minimisation techniques. This algorithm is further improved by an original data decimation that allows the sampling period to be artificially increased without losing between-sample information. These algorithms are tested with real laboratory data
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
Electron and photon identification in the D0 experiment
Energy Technology Data Exchange (ETDEWEB)
Abazov, V. M.; Abbott, B.; Acharya, B. S.; Adams, M.; Adams, T.; Agnew, J. P.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Askew, A.; Atkins, S.; Augsten, K.; Avila, C.; Badaud, F.; Bagby, L.; Baldin, B.; Bandurin, D. V.; Banerjee, S.; Barberis, E.; Baringer, P.; Bartlett, J. F.; Bassler, U.; Bazterra, V.; Bean, A.; Begalli, M.; Bellantoni, L.; Beri, S. B.; Bernardi, G.; Bernhard, R.; Bertram, I.; Besançon, M.; Beuselinck, R.; Bhat, P. C.; Bhatia, S.; Bhatnagar, V.; Blazey, G.; Blessing, S.; Bloom, K.; Boehnlein, A.; Boline, D.; Boos, E. E.; Borissov, G.; Borysova, M.; Brandt, A.; Brandt, O.; Brock, R.; Bross, A.; Brown, D.; Bu, X. B.; Buehler, M.; Buescher, V.; Bunichev, V.; Burdin, S.; Buszello, C. P.; Camacho-Pérez, E.; Casey, B. C. K.; Castilla-Valdez, H.; Caughron, S.; Chakrabarti, S.; Chan, K. M.; Chandra, A.; Chapon, E.; Chen, G.; Cho, S. W.; Choi, S.; Choudhary, B.; Cihangir, S.; Claes, D.; Clutter, J.; Cooke, M.; Cooper, W. E.; Corcoran, M.; Couderc, F.; Cousinou, M. -C.; Cutts, D.; Das, A.; Davies, G.; de Jong, S. J.; De La Cruz-Burelo, E.; Déliot, F.; Demina, R.; Denisov, D.; Denisov, S. P.; Desai, S.; Deterre, C.; DeVaughan, K.; Diehl, H. T.; Diesburg, M.; Ding, P. F.; Dominguez, A.; Dubey, A.; Dudko, L. V.; Duperrina, A.; Dutt, S.; Eads, M.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Enari, Y.; Evans, H.; Evdokimov, V. N.; Feng, L.; Ferbel, T.; Fiedler, F.; Filthaut, F.; Fisher, W.; Fisk, H. E.; Fortner, M.; Fox, H.; Fuess, S.; Garbincius, P. H.; Garcia-Bellido, A.; García-González, J. A.; Gavrilov, V.; Geng, W.; Gerber, C. E.; Gershtein, Y.; Ginther, G.; Golovanov, G.; Grannis, P. D.; Greder, S.; Greenlee, H.; Grenier, G.; Gris, Ph.; Grivaz, J. -F.; Grohsjean, A.; Grünendahl, S.; Grünewald, M. W.; Guillemin, T.; Gutierrez, G.; Gutierrez, P.; Haley, J.; Han, L.; Harder, K.; Harel, A.; Hauptman, J. M.; Hays, J.; Head, T.; Hebbeker, T.; Hedin, D.; Hegab, H.; Heinson, A. P.; Heintz, U.; Hensel, C.; Heredia-De La Cruz, I.; Herner, K.; Hesketh, G.; Hildreth, M. 
D.; Hirosky, R.; Hoang, T.; Hobbs, J. D.; Hoeneisen, B.; Hogan, J.; Hohlfeld, M.; Holzbauer, J. L.; Howley, I.; Hubacek, Z.; Hynek, V.; Iashvili, I.; Ilchenko, Y.; Illingworth, R.; Ito, A. S.; Jabeen, S.; Jaffré, M.; Jayasinghe, A.; Jeong, M. S.; Jesik, R.; Jiang, P.; Johns, K.; Johnson, E.; Johnson, M.; Jonckheere, A.; Jonsson, P.; Joshi, J.; Jung, A. W.; Juste, A.; Kajfasz, E.; Karmanov, D.; Katsanos, I.; Kehoe, R.; Kermiche, S.; Khalatyan, N.; Khanov, A.; Kharchilava, A.; Kharzheev, Y. N.; Kiselevich, I.; Kohli, J. M.; Kozelov, A. V.; Kraus, J.; Kumar, A.; Kupco, A.; Kurča, T.; Kuzmin, V. A.; Lammers, S.; Lebrun, P.; Lee, H. S.; Lee, S. W.; Lee, W. M.; Lei, X.; Lellouch, J.; Li, D.; Li, H.; Li, L.; Li, Q. Z.; Lim, J. K.; Lincoln, D.; Linnemann, J.; Lipaev, V. V.; Lipton, R.; Liu, H.; Liu, Y.; Lobodenko, A.; Lokajicek, M.; Lopes de Sa, R.; Luna-Garcia, R.; Lyon, A. L.; Maciel, A. K. A.; Madar, R.; Magaña-Villalba, R.; Malik, S.; Malyshev, V. L.; Mansour, J.; Martínez-Ortega, J.; McCarthy, R.; McGivern, C. L.; Meijer, M. M.; Melnitchouk, A.; Menezes, D.; Mercadante, P. G.; Merkin, M.; Meyer, A.; Meyer, J.; Miconi, F.; Mondal, N. K.; Mulhearn, M.; Nagy, E.; Narain, M.; Nayyar, R.; Neal, H. A.; Negret, J. P.; Neustroev, P.; Nguyen, H. T.; Nunnemann, T.; Orduna, J.; Osman, N.; Osta, J.; Pal, A.; Parashar, N.; Parihar, V.; Park, S. K.; Partridge, R.; Parua, N.; Patwa, A.; Penning, B.; Perfilov, M.; Peters, Y.; Petridis, K.; Petrillo, G.; Pétroff, P.; Pleier, M. -A.; Podstavkov, V. M.; Popov, A. V.; Prewitt, M.; Price, D.; Prokopenko, N.; Qian, J.; Quadt, A.; Quinn, B.; Raja, R.; Ratoff, P. N.; Razumov, I.; Ripp-Baudot, I.; Rizatdinova, F.; Rominsky, M.; Ross, A.; Royon, C.; Rubinov, P.; Ruchti, R.; Sajot, G.; Sánchez-Hernández, A.; Sanders, M. P.; Santos, A. S.; Savage, G.; Sawyer, L.; Scanlon, T.; Schamberger, R. D.; Scheglov, Y.; Schellman, H.; Schwanenberger, C.; Schwienhorst, R.; Sekaric, J.; Severini, H.; Shabalina, E.; Shary, V.; Shaw, S.; Shchukin, A. 
A.; Simak, V.; Skubic, P.; Slattery, P.; Smirnov, D.; Snow, G. R.; Snow, J.; Snyder, S.; Söldner-Rembold, S.; Sonnenschein, L.; Soustruznik, K.; Stark, J.; Stoyanova, D. A.; Strauss, M.; Suter, L.; Svoisky, P.; Titov, M.; Tokmenin, V. V.; Tsai, Y. -T.; Tsybychev, D.; Tuchming, B.; Tully, C.; Uvarov, L.; Uvarov, S.; Uzunyan, S.; Van Kooten, R.; van Leeuwen, W. M.; Varelas, N.; Varnes, E. W.; Vasilyev, I. A.; Verkheev, A. Y.; Vertogradov, L. S.; Verzocchi, M.; Vesterinen, M.; Vilanova, D.; Vokac, P.; Wahl, H. D.; Wang, M. H. L. S.; Warchol, J.; Watts, G.; Wayne, M.; Weichert, J.; Welty-Rieger, L.; Williams, M. R. J.; Wilson, G. W.; Wobisch, M.; Wood, D. R.; Wyatt, T. R.; Xie, Y.; Yamada, R.; Yang, S.; Yasuda, T.; Yatsunenko, Y. A.; Ye, W.; Ye, Z.; Yin, H.; Yip, K.; Youn, S. W.; Yu, J. M.; Zennamo, J.; Zhao, T. G.; Zhou, B.; Zhu, J.; Zielinski, M.; Zieminska, D.; Zivkovic, L.
2014-06-01
The electron and photon reconstruction and identification algorithms used by the D0 Collaboration at the Fermilab Tevatron collider are described. The determination of the electron energy scale and resolution is presented. Studies of the performance of the electron and photon reconstruction and identification are summarized.
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such swarm-based metaheuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used to solve various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
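The attraction dynamics behind such firefly algorithms can be sketched in a few lines. This is a minimal sketch of the standard firefly algorithm only, not the paper's MoFA variant (whose modifications are not given in the abstract); the sphere test function, search bounds, and parameter values are illustrative assumptions.

```python
import numpy as np

def firefly_minimize(f, dim, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimize f over [-5, 5]^dim with the standard firefly algorithm:
    dimmer fireflies move toward brighter ones, with distance-attenuated
    attraction plus a small random walk."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n, dim))
    light = np.array([f(xi) for xi in x])          # lower f = brighter firefly
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:            # j is brighter: i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = f(x[i])
    best = int(np.argmin(light))
    return x[best], light[best]

# Illustrative run on the sphere function, minimum at the origin.
xbest, fbest = firefly_minimize(lambda v: np.sum(v ** 2), dim=2)
```

Note that the brightest firefly never moves, so the running minimum is non-increasing; the random-walk term alpha is what lets the swarm keep refining the best position.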
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
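The combinatorial random-search idea above can be illustrated with a toy sketch: treat a magnet arrangement as a permutation, propose random pairwise swaps, and keep a swap only when it lowers the goal function. The goal function below is a crude stand-in for the phase-space smear used in the paper, and the magnet error values are invented.

```python
import random

def sort_magnets(errors, goal, n_trials=2000, seed=1):
    """Random-search sorting: start from the as-delivered order, propose
    random pairwise swaps of magnet positions, and accept a swap whenever
    it lowers the goal function (a proxy for phase-space distortion)."""
    rng = random.Random(seed)
    order = list(range(len(errors)))
    best = goal([errors[i] for i in order])
    for _ in range(n_trials):
        a, b = rng.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]
        val = goal([errors[i] for i in order])
        if val < best:
            best = val
        else:
            order[a], order[b] = order[b], order[a]   # revert a bad swap
    return order, best

# Toy goal: penalize adjacent magnets whose field errors fail to cancel.
def goal(seq):
    return sum(abs(seq[i] + seq[i + 1]) for i in range(len(seq) - 1))

errors = [0.9, -0.7, 0.8, -0.6, 0.5, -0.4, 0.3, -0.2]
order, best = sort_magnets(errors, goal)
```

Since only improving swaps are kept, the final goal value can never exceed that of the initial arrangement.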
Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long
2012-01-01
Underwater laser imaging detection is an effective method for detecting short-range targets underwater and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, automatic underwater target identification has attracted more and more attention and remains a research difficulty in underwater optical imaging information processing. Today, automatic underwater target identification based on optical imaging is usually realized by digital software programming, whose algorithm realization and control are very flexible. However, the optical imaging information is a 2D or even 3D image and the amount of information to process is large, so electronic hardware running a purely digital algorithm needs a long identification time and can hardly meet the demands of real-time identification. Parallel computer processing can improve the identification speed, but it increases complexity, size and power consumption. This paper attempts to apply optical correlation identification technology to automatic underwater target identification. Optical correlation identification exploits the Fourier-transform property of a Fourier lens, which can perform the Fourier transform of image information on a nanosecond scale; optical free-space interconnection computing offers parallelism, high speed, large capacity and high resolution, and it can be combined with the flexible computation and control of digital circuits in an optoelectronic hybrid identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. We use single-frame images obtained by underwater range-gated laser imaging for identification, and through identifying and locating targets at different positions, we can improve
A genetic ensemble approach for gene-gene interaction identification
Directory of Open Access Journals (Sweden)
Ho Joshua WK
2010-10-01
Full Text Available Abstract Background It has now become clear that gene-gene interactions and gene-environment interactions are ubiquitous and fundamental mechanisms for the development of complex diseases. Though considerable effort has been put into developing statistical models and algorithmic strategies for identifying such interactions, the accurate identification of those genetic interactions has proven to be very challenging. Methods In this paper, we propose a new approach for identifying such gene-gene and gene-environment interactions underlying complex diseases. This hybrid algorithm combines a genetic algorithm (GA) and an ensemble of classifiers (called a genetic ensemble). Using this approach, the original problem of SNP interaction identification is converted into a data mining problem of combinatorial feature selection. By collecting various single nucleotide polymorphism (SNP) subsets as well as environmental factors generated in multiple GA runs, patterns of gene-gene and gene-environment interactions can be extracted using a simple combinatorial ranking method. Also considered in this study is the idea of combining identification results obtained from multiple algorithms. A novel formula based on pairwise double fault is designed to quantify the degree of complementarity. Conclusions Our simulation study demonstrates that the proposed genetic ensemble algorithm has identification power comparable to Multifactor Dimensionality Reduction (MDR) and slightly better than Polymorphism Interaction Analysis (PIA), which are the two most popular methods for gene-gene interaction identification. More importantly, the identification results generated by our genetic ensemble algorithm are highly complementary to those obtained by PIA and MDR. Experimental results from our simulation studies and real-world data application also confirm the effectiveness of the proposed genetic ensemble algorithm, as well as the potential benefits of
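The pairwise double-fault idea mentioned in the abstract can be illustrated directly. The function below is the common form of the measure (the fraction of samples that both classifiers misclassify; a low value means the two result sets are complementary); the abstract does not give the authors' exact formula, and the predictions here are invented.

```python
def double_fault(pred_a, pred_b, truth):
    """Pairwise double-fault measure: fraction of samples that BOTH
    classifiers get wrong. Lower values indicate more complementary
    classifiers (their errors rarely coincide)."""
    both_wrong = sum(a != t and b != t for a, b, t in zip(pred_a, pred_b, truth))
    return both_wrong / len(truth)

truth  = [1, 0, 1, 1, 0, 1]
ga_set = [1, 0, 0, 1, 0, 1]   # hypothetical genetic-ensemble predictions
mdr    = [1, 1, 0, 0, 0, 1]   # hypothetical MDR predictions
df = double_fault(ga_set, mdr, truth)   # both wrong only on sample 2
```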
Visual Analysis in Traffic & Re-identification
DEFF Research Database (Denmark)
Møgelmose, Andreas
and analysis, and person re-identification. In traffic sign detection, the work comprises a thorough survey of the state of the art, assembly of the world's largest public dataset of U.S. traffic signs, and work on machine-learning-based detection algorithms. It was shown that detection of U.S. traffic signs...
Radionuclide identification using subtractive clustering method
International Nuclear Information System (INIS)
Farias, Marcos Santana; Mourelle, Luiza de Macedo
2011-01-01
Radionuclide identification is crucial to planning protective measures in emergency situations. This paper presents the application of a method for a classification system of radioactive elements with a fast and efficient response. To achieve this goal, the application of the subtractive clustering algorithm is proposed. The proposed application can be implemented in reconfigurable hardware, a flexible medium for implementing digital hardware circuits. (author)
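Subtractive clustering, in Chiu's common formulation (the abstract does not spell out the exact variant used), assigns each data point a potential built from Gaussian contributions of all points, repeatedly selects the highest-potential point as a cluster centre, and subtracts that centre's influence from the rest. A sketch, with illustrative radii and stopping ratio:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, stop_ratio=0.15):
    """Chiu-style subtractive clustering. ra is the cluster radius for the
    potential, rb (> ra) the revision radius for subtracting a chosen
    centre's influence; stop when the top remaining potential falls below
    stop_ratio times the first centre's potential."""
    rb = rb or 1.5 * ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-alpha * d2).sum(axis=1)            # potential of every point
    first = P.max()
    centres = []
    while P.max() >= stop_ratio * first:
        c = int(np.argmax(P))                      # highest-potential point
        centres.append(X[c])
        P = P - P[c] * np.exp(-beta * d2[:, c])    # subtract its influence
    return np.array(centres)

# Two tight synthetic clusters around (0, 0) and (1, 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.05, (30, 2)) for m in (0.0, 1.0)])
centres = subtractive_clustering(X, ra=0.5)
```

Because the update is just distance tables, exponentials, and an argmax, the algorithm maps naturally onto the reconfigurable hardware the paper targets.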
Channel Identification Machines
Directory of Open Access Journals (Sweden)
Aurel A. Lazar
2012-01-01
Full Text Available We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover a projection of the filter(s) onto a space of input signals loss-free for both scalar and vector-valued test signals. The test signals are modeled as elements of a reproducing kernel Hilbert space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.
Channel identification machines.
Lazar, Aurel A; Slutskiy, Yevgeniy B
2012-01-01
We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover a projection of the filter(s) onto a space of input signals loss-free for both scalar and vector-valued test signals. The test signals are modeled as elements of a reproducing kernel Hilbert space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
An effective one-dimensional anisotropic fingerprint enhancement algorithm
Ye, Zhendong; Xie, Mei
2012-01-01
Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm which combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs in less time.
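The normalization step mentioned first can be written compactly. What follows is the widely used mean/variance normalization in the style of Hong, Wan and Jain, mapping the image to a prescribed mean and variance before orientation estimation and filtering; the abstract does not give the authors' exact formula, and the target values here are illustrative.

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Pixel-wise fingerprint normalization: shift each pixel's deviation
    from the image mean so that the output has mean m0 and variance v0,
    evening out gray-level variation along ridges and valleys."""
    m, v = img.mean(), img.var()
    dev = np.sqrt(v0 * (img - m) ** 2 / v)
    return np.where(img > m, m0 + dev, m0 - dev)

# Illustrative input: a random gray-level image in place of a real scan.
img = np.random.default_rng(0).uniform(0, 255, (64, 64))
out = normalize(img)
```

Because the transform is effectively linear in the deviation (img - m), the output mean and variance match m0 and v0 exactly.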
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a role similar to that of coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible direc
Homogeneous slowpoke reactor for the production of radio-isotope. A feasibility study
International Nuclear Information System (INIS)
Busatta, P.; Bonin, H.
2005-01-01
The purpose of this research is to study the feasibility of replacing the current heterogeneous fuel core of the SLOWPOKE-2 with a reservoir containing a homogeneous fuel for the production of Mo-99. The study looked at three items: using the MCNP 5 simulation code, develop a series of parameters required for a homogeneous fuel and evaluate the uranyl sulfate concentration of the aqueous fuel solution needed to keep a similar excess reactivity; verify whether the homogeneous reactor will retain its inherent safety attributes; and, with the new dimensions and geometry of the fuel core, observe whether natural convection will still effectively cool the reactor, using the modeling software FEMLAB. The MCNP 5 simulation code was validated against a simulation with the WIMS-AECL code. It was found that it is indeed feasible to modify the SLOWPOKE-2 reactor into a homogeneous reactor using a solution of uranyl sulfate and water. (author)
Homogeneous Slowpoke reactor for the production of radio-isotope: a feasibility study
Energy Technology Data Exchange (ETDEWEB)
Busetta, P.; Bonin, H.W. [Royal Military College of Canada, Kingston, Ontario (Canada)
2006-09-15
The purpose of this research is to study the feasibility of replacing the current heterogeneous fuel core of the SLOWPOKE-2 with a reservoir containing a homogeneous fuel for the production of Mo-99. The study looked at three items: using the MCNP Monte Carlo reactor calculation code, develop a series of parameters required for a homogeneous fuel and evaluate the uranyl sulfate concentration of the aqueous fuel solution needed to keep a similar excess reactivity; verify whether the homogeneous reactor will retain its inherent safety attributes; and, with the new dimensions and geometry of the fuel core, observe whether natural convection can still effectively cool the reactor, using the modeling software FEMLAB. It was found that it is indeed feasible to modify the SLOWPOKE-2 reactor into a homogeneous reactor using a solution of uranyl sulfate and water. (author)
Homogeneous slowpoke reactor for the production of radio-isotope. A feasibility study
Energy Technology Data Exchange (ETDEWEB)
Busatta, P.; Bonin, H. [Royal Military College of Canada, Kingston, Ontario (Canada)]. E-mail: paul.busatta@rmc.ca; bonin-h@rmc.ca
2005-07-01
The purpose of this research is to study the feasibility of replacing the current heterogeneous fuel core of the SLOWPOKE-2 with a reservoir containing a homogeneous fuel for the production of Mo-99. The study looked at three items: using the MCNP 5 simulation code, develop a series of parameters required for a homogeneous fuel and evaluate the uranyl sulfate concentration of the aqueous fuel solution needed to keep a similar excess reactivity; verify whether the homogeneous reactor will retain its inherent safety attributes; and, with the new dimensions and geometry of the fuel core, observe whether natural convection will still effectively cool the reactor, using the modeling software FEMLAB. The MCNP 5 simulation code was validated against a simulation with the WIMS-AECL code. It was found that it is indeed feasible to modify the SLOWPOKE-2 reactor into a homogeneous reactor using a solution of uranyl sulfate and water. (author)
Homogeneous SLOWPOKE reactor for the production of radio-isotope. A feasibility study
Energy Technology Data Exchange (ETDEWEB)
Busatta, P.; Bonin, H.W. [Royal Military College of Canada, Kingston, Ontario (Canada)]. E-mail: paul.busatta@rmc.ca; bonin-h@rmc.ca
2006-07-01
The purpose of this research is to study the feasibility of replacing the current heterogeneous fuel core of the SLOWPOKE-2 with a reservoir containing a homogeneous fuel for the production of Mo-99. The study looked at three items: using the MCNP Monte Carlo reactor calculation code, develop a series of parameters required for a homogeneous fuel and evaluate the uranyl sulfate concentration of the aqueous fuel solution needed to keep a similar excess reactivity; verify whether the homogeneous reactor will retain its inherent safety attributes; and, with the new dimensions and geometry of the fuel core, observe whether natural convection can still effectively cool the reactor, using the modeling software FEMLAB. It was found that it is indeed feasible to modify the SLOWPOKE-2 reactor into a homogeneous reactor using a solution of uranyl sulfate and water. (author)
Homogeneous SLOWPOKE reactor for the production of radio-isotope. A feasibility study
International Nuclear Information System (INIS)
Busatta, P.; Bonin, H.W.
2006-01-01
The purpose of this research is to study the feasibility of replacing the current heterogeneous fuel core of the SLOWPOKE-2 with a reservoir containing a homogeneous fuel for the production of Mo-99. The study looked at three items: using the MCNP Monte Carlo reactor calculation code, develop a series of parameters required for a homogeneous fuel and evaluate the uranyl sulfate concentration of the aqueous fuel solution needed to keep a similar excess reactivity; verify whether the homogeneous reactor will retain its inherent safety attributes; and, with the new dimensions and geometry of the fuel core, observe whether natural convection can still effectively cool the reactor, using the modeling software FEMLAB. It was found that it is indeed feasible to modify the SLOWPOKE-2 reactor into a homogeneous reactor using a solution of uranyl sulfate and water. (author)
Coastal submarine springs in Lebanon and Syria: Geological, geochemical, and radio-isotopic study
International Nuclear Information System (INIS)
Al-Charideh, A.
2004-10-01
The coastal karst aquifer system (Upper Cretaceous) and the submarine springs on the Syrian coast have been studied using chemical and isotopic methods in order to determine the hydraulic connections between the groundwater and the submarine springs. Results show that the groundwater and the submarine springs have the same slope on the δ18O/δ2H plot, indicating the same hydrological origin for both. In addition, this relation is very close to the local meteoric water line (LMWL), reflecting rapid infiltration of rainfall recharging the coastal aquifer. The calculated percentage of freshwater at the two locations (Bassieh and Tartous) ranges from 20 to 96%. The estimated discharge of the permanent submarine springs (BS1, BS2 and TS2, TS3) is 11 m3/s, or 350 million m3/y. The maximum residence time of the groundwater in the Cenomanian/Turonian aquifer was estimated at around 8 years, using the piston-flow model. (author)
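Freshwater percentages of the kind quoted above typically come from a standard two-endmember isotope mixing calculation between a seawater and a freshwater endmember. The δ18O endmember values below are illustrative assumptions, not the paper's measured ones.

```python
def freshwater_fraction(d18o_sample, d18o_sea=1.5, d18o_fresh=-6.0):
    """Two-endmember mixing: fraction of freshwater in a spring sample
    inferred from its delta-18O value (per mil). The endmember values
    here are illustrative, not the study's measured endmembers."""
    return (d18o_sea - d18o_sample) / (d18o_sea - d18o_fresh)

# A sample close to the freshwater endmember gives a high fraction.
f = freshwater_fraction(-5.0)
```

The same linear-mixing formula works with δ2H or with salinity as the conservative tracer; only the endmember values change.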
Cherenkov radiation as a means of radio isotope diagnosis of eyeball tumors
International Nuclear Information System (INIS)
Moshnikov, O.S.; Kolesnichenko, V.N.
1986-01-01
Radiophosphorus indication of eye neoplasms can be accomplished through registration of beta particles or Cherenkov radiation. In both cases the criterion for the conclusion to be drawn from the experimental results is the relative increment of the count rate. The article analyses the specific features of equipment for recording Cherenkov radiation in radiophosphorus studies in ophthalmology, and discusses the method for these studies. (orig.)
Radio-isotopic apparatus for analyzing low atomic number elements by fluorescence
International Nuclear Information System (INIS)
Robert, Andre; Martinelli, Pierre; Daniel, Georges; Laflotte, Jean-Luc
1969-10-01
An apparatus is described for analyzing light elements of atomic number between 6 and 24 by X-fluorescence. The samples are excited by means of X or α isotopic sources. Various examples of analytical determinations are given. (author) [fr
Energy Technology Data Exchange (ETDEWEB)
Leveque, P; Hours, R; Martinelli, P; May, S; Sandier, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Brillant, J [Societe Soletanche (France)
1958-07-01
By measuring the transmission of a flat beam of thermal neutrons, the moisture content of a parallelepiped-shaped soil sample can be measured to ±4 per cent and the moisture gradient along the longitudinal axis determined. The method permits the determination of chemically bound water and the measurement of diffusion coefficients of water into weakly hydrogenated materials. By measuring the intensity of X-ray fluorescence excited by β radiation it is possible to determine the thickness of metal coatings of less than 20 µ for metals of atomic number less than 40. This method has been applied to chromium, manganese, iron, cobalt, nickel, copper and zinc. By using a suitable metal filter it is possible to measure coating thicknesses of metals differing by only one atomic number from the supporting material. By employing labeled cement it is possible to determine the extent of movement of cement grout used for soil stabilization and waterproofing. The kinetics of ion exchange of different ultramarines in aqueous solutions were studied by tracing the movement of labeled ions in the solution or in the exchanger. Values of the diffusion coefficients and activation energies were determined from the exchange studies. (author) [fr
Detection of HCV genotypes using molecular and radio-isotopic methods
International Nuclear Information System (INIS)
Ahmad, N.; Baig, S.M.; Shah, W.A.; Khattak, K.F.; Khan, B.; Qureshi, J.A.
2004-01-01
Hepatitis C virus (HCV) accounts for most cases of acute and chronic non-A, non-B liver disease. Persistent HCV infection may lead to liver cirrhosis and hepatocellular carcinoma. Six major HCV genotypes have been recognized. Infection with different genotypes results in different clinical pictures and responses to antiviral therapy. In the area of Faisalabad (Punjab province of Pakistan), the prevalence and molecular epidemiology of hepatitis C virus infection had never been investigated before. In this study, we have attempted to determine the prevalence, distribution and clinical significance of HCV infection in 1100 suspected liver disease patients by nested reverse transcriptase polymerase chain reaction (RT-PCR) over a period of four years. HCV genotypes of the isolates were determined by dot-blot hybridization with genotype-specific radiolabeled probes in 337 subjects. The proportions of patients with HCV genotypes 1, 2, 3 and 4 were 37.38%, 1.86%, 16.16% and 0.29%, respectively. Mixed infection with HCV genotypes was detected in 120 (35.6%) patients, whereas 31 (9.1%) samples remained unclassified. This study revealed the changing epidemiology of hepatitis C virus genotypes 1 and 3 in these patients. Multiple infection with HCV genotypes in the same patient may be of great clinical and pathological importance and interest. (author)
State of enforcement of the law concerning prevention from radiation hazards due to radio-isotopes
International Nuclear Information System (INIS)
1977-01-01
In view of the recent advance of radiation utilization in many fields, the situation as of the end of fiscal 1976 under the law is described. The statistics on the number of enterprises concerning radioisotope usage, sales and waste-treatment are first given. Then, the measures taken by the Science and Technology Agency to improve radiation hazard prevention are explained, and cooperation with other governmental offices, efforts by the enterprises, steps taken for the enterprises of nondestructive testing, hospitals, universities, etc., and restudy on the law are described. (Mori, K.)
International Nuclear Information System (INIS)
Coulon, P.; Clerc, R.; Tommasi, J.
1993-01-01
During the last few years, different tests have been made to optimize the blanket of the reactor. Year after year, the breeding ratio has lost part of its interest regarding the production and availability of plutonium in the world. A characteristic of a fast reactor is to have important neutron leakage from the core. The spectrum of those neutrons is intermediate; the idea was to find a moderator compatible with sodium and stable in temperature. After different tests we kept calcium hydride as the moderator and, as a sample support, a cluster which is separated from the carrier. At the end we present the model used for thermalized calculations. The scheme is then applied to a heavy nuclide transmutation example (Np-237 to Pu-238) and to fission product transmutation (Tc-99). (authors). 9 figs
The double radio-isotope derivative techniques for the assay of drugs in biological material
International Nuclear Information System (INIS)
Riess, W.
1977-01-01
The neuroleptic drug opipramol and its deshydroxyethyl metabolite can be determined simultaneously in the same biological sample. Known amounts of 14C-labelled opipramol and 14C-labelled metabolite are added to the sample to serve as internal standards. After suitable extraction, both compounds are acetylated with 3H-labelled acetic anhydride. Together with μg amounts of carrier compounds, the O-acetyl derivative of opipramol and the N-acetyl derivative of the metabolite are purified and separated by two-dimensional thin-layer chromatography. Each of the derivatives is isolated and counted for 14C and 3H activity. The recovered 14C activities serve to determine the overall yield of opipramol and metabolite, and to convert the measured 3H activity to 100% theoretical yield. From analyses of standard samples, the specific 3H activities of the acetyl derivatives were calculated, and these values were used to convert the measured 3H activities from biological samples into concentrations of the original opipramol and metabolite. For both compounds the standard deviation of blank samples was ± 1 ng/ml. For concentrations up to 100 ng/ml the standard deviation was ± 3 ng/ml
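The yield-correction arithmetic described above can be sketched numerically. This is an illustrative sketch only: the function name, argument names and example values are assumptions, not taken from the assay description.

```python
def concentration_ng_per_ml(h3_measured_cpm, c14_recovery_fraction,
                            h3_specific_activity_cpm_per_ng, sample_volume_ml):
    """Double-isotope derivative assay (illustrative sketch).

    The recovered 14C fraction gives the overall yield of the work-up;
    dividing the measured 3H activity by it corrects to 100% theoretical
    yield. The specific 3H activity of the acetyl derivative then converts
    activity to mass of the original compound, and the sample volume
    converts mass to concentration.
    """
    corrected_cpm = h3_measured_cpm / c14_recovery_fraction
    mass_ng = corrected_cpm / h3_specific_activity_cpm_per_ng
    return mass_ng / sample_volume_ml
```

For example, 500 cpm of 3H measured at 50% 14C recovery, with a specific activity of 10 cpm/ng in a 1 ml sample, corresponds to 100 ng/ml.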
International Nuclear Information System (INIS)
Carvalho Pinto Ribela, M.T. de.
1979-01-01
A technique for the estimation of kidney depth is described. It is based on a comparison between the measurements obtained in a radioisotopic renogram carried out at two specific energies and the same measurements made with a phantom kidney at different depths. Experiments performed with kidney and abdomen phantoms provide calibration curves, obtained by plotting the photopeak-to-scatter ratio of the 131I pulse-height spectrum against depth. Through this technique it is possible to obtain the 131I-Hippuran kidney uptake with external measurements only; it corrects the measurements for the depth itself and for the attenuation and scattering effects of the tissues interposed between the kidney and the detector. When the two kidneys are not equidistant from the detector, their respective renograms differ, and it is therefore very important to correct the measurements according to organ depth in order to obtain exact information on the Hippuran partition between the kidneys. The significant influence of extrarenal activity on the renogram is analyzed by monitoring the precordial region after 131I-human serum albumin injection and establishing a calibration factor relating the radioactivity level of this area to that present in each kidney area. It is shown that values for the clearance of each kidney can be obtained from the renogram once the change in efficiency due to organ depth and to non-renal tissue interference in the renal area is taken into account. In this way, values for the effective renal plasma flow were obtained that are comparable to those obtained with other techniques estimating the total flow of the kidneys. Finally, the mean absorbed dose to the kidneys in a renography is also estimated. (Author) [pt
Impact of water mobility in some porous media on migration of radio isotopes to the geosphere
International Nuclear Information System (INIS)
AL-Hajji, E.
2011-07-01
The aim of this work is to study the behavior of the water state (bound/free water) in porous materials such as illite and bentonite, which are widely used in the storage of radioactive waste. The study was carried out using the proton nuclear magnetic resonance (NMR) technique, which makes it possible to understand the mechanism of leakage of radioactive elements from these porous materials into the surrounding environment. Such leakage may have very serious consequences, such as the contamination of groundwater. In addition, other porous materials, silica and alumina, the two main constituents of bentonite, have been studied as well. (author)
Application of ALARA to transport of Radio-Isotopes for Medical use
International Nuclear Information System (INIS)
Fett, H. J.; Lange, F.; Schwarz, G.; Van Hienen, J. F. A.; Jansma, R.; Gelder, R.; Shaw, K. B.; Gulberg, O.; Josefsson, D.; Svahn, B.
2002-01-01
In 1999 a multi-national research project on transport safety commissioned by DG-TREN of the European Commission was completed and reported in The Evaluation of the Situation in the EC as regards Safety in the Transport of Radioactive Material and the Prospects for the Development of such Type of Transport (EC Contract No 4.1020/D/97-003). The project focussed on radiological protection for routine (incident-free) shipments of radioactive material (RAM) in the public domain in four EU Member States: Germany, the Netherlands, Sweden and the UK. This information satisfies the requirements of the Euratom Basic Safety Standards, Article 14, which calls for reasonable steps to be taken by EU Member States to ensure and substantiate that the contribution to exposure from practices involving radioactive substances, including transport operations, is kept as low as reasonably achievable (ALARA), economic and social factors being taken into account. The present paper, which is partially based on the 1999 study for the European Commission, focusses on the application of ALARA to road shipments in the four countries of radioactive material used for health care, research and industry. Analysis of the data for these shipments shows that they result in transport worker effective doses ranging from very low values up to values that represent a major fraction of the applicable dose limit of 20 mSv/yr. The worker radiation doses recorded at the national level indicate, however, that the vast majority of transport workers, i.e. drivers and handlers, typically receive annual effective doses of up to 10 mSv/yr, with most values in the low dose range. The maximum recorded occupational doses of workers involved primarily in manual handling, sorting and moving of high-TI multiple-package radioisotope shipments at depots, re-distribution centres etc. were found to be about 10-14 mSv/yr. From the data available it is evident that the magnitude and frequency of individual radiation doses incurred from these operations are broadly related to the volume of radioactive material packages and the associated package TI being handled and shipped on a national basis. (Author)
Radio-isotope production scale-up at the University of Wisconsin
Energy Technology Data Exchange (ETDEWEB)
Nickles, Robert Jerome [Univ of Wisconsin
2014-06-19
Our intent has been to scale up our production capacity for a subset of the NSAC-I list of radioisotopes in jeopardy, so as to make a significant impact on the projected national needs for Cu-64, Zr-89, Y-86, Ga-66, Br-76, I-124 and other radioisotopes that offer promise as PET synthons. The work-flow and milestones in this project were compressed into a single year (Aug 1, 2012 - July 31, 2013). The grant budget was dominated by the purchase of a pair of dual mini-cells that have made the scale-up possible, now permitting Curie-level processing of Cu-64 and Zr-89 with greatly reduced radiation exposure. Milestones: 1. We doubled our production of Cu-64 and Zr-89 during the grant period, both for local use and outbound distribution to ≈ 30 labs nationwide. This involved dove-tailing the beam schedules of our PETtrace and legacy RDS cyclotrons. 2. Implemented improved chemical separation of Zr-89, Ga-66, Y-86 and Sc-44, with remote, semi-automated dissolution and trap-and-release separation under LabView control in the two dual mini-cells provided by this DOE grant. A key advance was to fit the chemical stream with miniature radiation detectors to confirm the transfer operations. 3. Implemented improved shipping of radioisotopes (Cu-64, Zr-89, Tc-95m, and Ho-163) with approved DOT 7A boxes, with much-improved FedEx shipping success compared to our previous steel drums. 4. Implemented broad-range quantitative trace-metal analysis, employing a new microwave plasma atomic emission spectrometer (Agilent 4200) capable of ppb sensitivity across the periodic table. This new instrument will prove essential in bringing our radiometals into FDA compliance, which requires CoAs for translational research in clinical trials. 5. Expanded our capabilities in target fabrication, with the purchase of a programmable 1600 °C inert-gas tube furnace for the smelting of binary alloy target materials. A similar effort makes use of our RF induction furnace, allowing small-scale metallurgy with greater control. This alloy feedstock was then used to electroplate cyclotron targets with elevated melting temperatures capable of withstanding higher beam currents. 6. Finished the beam-line developments needed for the irradiation of low-melting target materials (Se and Ga) now being used for the production of Br-76 and radioactive germanium (Ge-68, Ge-69, Ge-71). Our planned development of I-124 production has been deferred, given the wide access from commercial suppliers. The passing of these milestones has been the subject of the previous quarterly reports. These signature accomplishments were made possible by the DOE support, and have strengthened the infrastructure at the University of Wisconsin, provided the training ground for a very talented graduate research assistant (Mr. Valdovinos) and more than doubled our out-shipments of Cu-64 and Zr-89.
An investigation of X-ray and radio isotope energy absorption of ...
Indian Academy of Sciences (India)
effective removal cross-sections for fast neutrons. The effective removal .... preservation of foods by irradiation has gained importance as an alternative to canning .... Cember H 1992 Introduction to health physics (Health Professions. Division: McGraw ...
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and rising living standards, there is an urgent need for a positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for positioning is a new research direction pursued by various research institutions and scholars. RFID positioning technology has the advantages of system stability, small error and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network location method is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies pointed out, and requirements for follow-up study put forward, with a vision of better future RFID positioning technology.
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
Full Text Available Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and that if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine a direction in which the brightness increases. If no such direction is generated, the firefly remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
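The modified movement rule can be sketched on a toy objective (the sphere function). This is a minimal illustration under assumed parameter values, not the paper's implementation: the brightest firefly samples random unit directions and moves only if brightness improves; otherwise it stays put.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))  # minimize; brighter = smaller value

def modified_firefly(f, dim=2, n=15, iters=100, alpha=0.1,
                     beta0=1.0, gamma=1.0, m_dirs=10, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n, dim))
    light = np.array([f(p) for p in pos])
    best_x, best_f = pos[np.argmin(light)].copy(), float(np.min(light))
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # j is brighter: i moves toward j
                    beta = beta0 * np.exp(-gamma * np.sum((pos[i] - pos[j]) ** 2))
                    pos[i] += beta * (pos[j] - pos[i]) \
                        + alpha * rng.uniform(-0.5, 0.5, dim)
                    light[i] = f(pos[i])
        # Modified rule: the brightest firefly tries random unit directions
        # and moves only if the objective improves; if no improving
        # direction is found, it remains in its current position.
        b = int(np.argmin(light))
        for _ in range(m_dirs):
            d = rng.normal(size=dim)
            d /= np.linalg.norm(d)
            trial = pos[b] + alpha * d
            if f(trial) < light[b]:
                pos[b], light[b] = trial, f(trial)
                break
        if light[b] < best_f:
            best_x, best_f = pos[b].copy(), float(light[b])
    return best_x, best_f
```

Because the best firefly only accepts improving moves, the incumbent solution never degrades, which is the point of replacing its random walk.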
LHCb New algorithms for Flavour Tagging at the LHCb experiment
Fazzini, Davide
2016-01-01
The Flavour Tagging technique allows to identify the B initial flavour, required in the measurements of flavour oscillations and time-dependent CP asymmetries in neutral B meson systems. The identification performances at LHCb are further enhanced thanks to the contribution of new algorithms.
System identification using Nuclear Norm & Tabu Search optimization
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
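The Hankel-matrix building block mentioned above can be sketched as follows; the helper name and the first-order test system are illustrative assumptions. A Hankel matrix of a system's Markov parameters has rank equal to the minimal state dimension, which is what an embedded rank measure can exploit.

```python
import numpy as np

def hankel_from_markov(h, rows, cols):
    """Hankel matrix H[i, j] = h[i + j] built from Markov parameters h."""
    return np.array([[h[i + j] for j in range(cols)] for i in range(rows)])

# Markov parameters of a first-order system x+ = 0.5 x + u, y = x:
# h[k] = c * a**k * b with a = 0.5, b = c = 1.
a, b, c = 0.5, 1.0, 1.0
h = np.array([c * a ** k * b for k in range(20)])
H = hankel_from_markov(h, 10, 10)

# Singular values reveal the minimal state dimension: exactly one
# dominant value for this first-order system.
s = np.linalg.svd(H, compute_uv=False)
```

In the noisy data-driven setting, the same rank structure is only approximate, which is why the minimization is posed over the nuclear norm (the sum of these singular values) rather than the exact rank.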
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading-coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included
1991-01-01
R:BASE for DOS, a computer program developed under NASA contract, has been adapted by the National Marine Mammal Laboratory and the College of the Atlantic to provide an advanced computerized photo-matching technique for identification of humpback whales. The program compares photos with stored digitized descriptions, enabling researchers to track and determine distribution and migration patterns. R:BASE is a spinoff of RIM (Relational Information Manager), which was used to store data for analyzing heat-shielding tiles on the Space Shuttle Orbiter. It is now the world's second largest selling line of microcomputer database management software.
Integrated identification, modeling and control with applications
Shi, Guojun
This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather, a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed-loop stability. A closed-loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and the desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Authorship Identification for Tamil Classical Poem using Subspace Discriminant Algorithm
Pandian, A.; Ramalingam, V. V.; Manikandan, K.; Vishnu Preet, R. P.
2018-04-01
Reliable identification of an author's work involves stylometric analysis, which raises various interesting problems. Extracting particular kinds of features from a text makes it possible to recognize the authors of unidentified works. The focus of this paper is to identify the authors of an unidentified Tamil dataset on the basis of the works of known authors. Text processing is the technique of extracting useful information, including statistical features, from the dataset. This paper proposes a text processing method to extract features and perform classification on them. The author of an unidentified poem or text can be found by performing classification on candidate authors' previously known works and building a classifier to classify the unknown poem or text, in any language. This procedure can be further extended to all regional languages around the globe. Many literary researchers find it hard to categorize poems whose authors are not identified. Through this procedure, the authors of various Tamil poems can be recognized, which will be valuable to society.
Overhead longwave infrared hyperspectral material identification using radiometric models
Energy Technology Data Exchange (ETDEWEB)
Zelinski, M. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2018-01-09
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and Bayesian Information Criteria for model selection. The resulting product includes an optimal atmospheric profile and full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
Machine learning based global particle identification algorithms at the LHCb experiment
Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor
2017-01-01
One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.
Genetic Optimization Algorithm for Metabolic Engineering Revisited
Directory of Open Access Journals (Sweden)
Tobias B. Alter
2018-05-01
Full Text Available To date, several independent methods and algorithms exist for exploiting constraint-based stoichiometric models to find metabolic engineering strategies that optimize microbial production performance. Optimization procedures based on metaheuristics facilitate a straightforward adaption and expansion of engineering objectives, as well as fitness functions, while being particularly suited for solving problems of high complexity. With the increasing interest in multi-scale models and a need for solving advanced engineering problems, we strive to advance genetic algorithms, which stand out due to their intuitive optimization principles and the proven usefulness in this field of research. A drawback of genetic algorithms is that premature convergence to sub-optimal solutions easily occurs if the optimization parameters are not adapted to the specific problem. Here, we conducted comprehensive parameter sensitivity analyses to study their impact on finding optimal strain designs. We further demonstrate the capability of genetic algorithms to simultaneously handle (i) multiple, non-linear engineering objectives; (ii) the identification of gene target-sets according to logical gene-protein-reaction associations; (iii) minimization of the number of network perturbations; and (iv) the insertion of non-native reactions, while employing genome-scale metabolic models. This framework adds a level of sophistication in terms of strain design robustness, which is exemplarily tested on succinate overproduction in Escherichia coli.
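The core loop of such a genetic algorithm can be sketched on a toy strain-design problem. Everything problem-specific here is an assumption: the "production" function is an invented stand-in for a flux-balance evaluation, and the fitness penalty on the number of knockouts illustrates objective (iii), minimizing network perturbations.

```python
import random

def toy_production(knockouts):
    """Hypothetical stand-in for a flux-balance simulation: knocking out
    genes 2 and 5 redirects flux toward the product; gene 0 is essential."""
    return 1.0 * knockouts[2] + 0.8 * knockouts[5] - 0.3 * knockouts[0]

def fitness(ind, lam=0.1):
    # Reward production, penalize the number of network perturbations.
    return toy_production(ind) - lam * sum(ind)

def ga(n_genes=8, pop_size=30, gens=60, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]        # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In a real strain-design setting the fitness call would wrap a genome-scale model evaluation, and the sensitivity of results to `pop_size`, `p_mut` and selection pressure is exactly what the parameter analyses in the paper address.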
Set-Membership Proportionate Affine Projection Algorithms
Directory of Open Access Journals (Sweden)
Stefan Werner
2007-01-01
Full Text Available Proportionate adaptive filters can improve the convergence speed for the identification of sparse systems as compared to their conventional counterparts. In this paper, the idea of proportionate adaptation is combined with the framework of set-membership filtering (SMF in an attempt to derive novel computationally efficient algorithms. The resulting algorithms attain attractive faster convergence for both situations of sparse and dispersive channels while decreasing the average computational complexity due to the data-discerning feature of the SMF approach. In addition, we propose a rule that allows us to automatically adjust the number of past data pairs employed in the update. This leads to a set-membership proportionate affine projection algorithm (SM-PAPA having a variable data-reuse factor allowing a significant reduction in the overall complexity when compared with a fixed data-reuse factor. Reduced-complexity implementations of the proposed algorithms are also considered that reduce the dimensions of the matrix inversions involved in the update. Simulations show good results in terms of reduced number of updates, speed of convergence, and final mean-squared error.
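The data-discerning idea can be sketched with a simplified set-membership NLMS filter rather than the full SM-PAPA of the paper (the sparse test channel and parameter values are assumptions): the filter updates only when the a-priori error exceeds the error bound gamma, so most input pairs trigger no computation at all.

```python
import numpy as np

def sm_nlms(x, d, order=8, gamma=0.01, eps=1e-8):
    """Set-membership NLMS: adapt only when |error| exceeds the bound gamma."""
    w = np.zeros(order)
    updates = 0
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]          # regressor (most recent sample first)
        e = d[k] - w @ u                  # a-priori error
        if abs(e) > gamma:
            mu = 1.0 - gamma / abs(e)     # step size that lands on the bound
            w += mu * e * u / (u @ u + eps)
            updates += 1
    return w, updates

# Identify a sparse FIR "channel" from noiseless input/output data.
rng = np.random.default_rng(0)
h = np.zeros(8); h[2] = 0.9; h[5] = -0.4
x = rng.standard_normal(2000)
d = np.zeros_like(x)
for k in range(8, len(x)):
    d[k] = h @ x[k - 8:k][::-1]
w, updates = sm_nlms(x, d, order=8, gamma=0.01)
```

The proportionate variants in the paper additionally scale each tap's step size by its current magnitude, which speeds up convergence when, as here, only a few taps are active.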
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both the geometrical and dynamical fidelity of a dynamical system at a controllable precision, and that it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
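The Nth-order truncation idea can be illustrated on the test equation dy/dt = -y (the equation, step size and order are assumptions chosen for illustration): since the k-th derivative of the solution is (-1)^k y, the truncated Taylor series of the exact solution is computable in closed form at each step.

```python
import math

def taylor_step(y, dt, n_order):
    """One step of an Nth-order Taylor (algebraic dynamics style) scheme
    for dy/dt = -y, whose k-th derivative is (-1)**k * y."""
    return sum(((-dt) ** k) / math.factorial(k) * y
               for k in range(n_order + 1))

def integrate(y0, dt, steps, n_order):
    y = y0
    for _ in range(steps):
        y = taylor_step(y, dt, n_order)
    return y
```

With N = 8 and dt = 0.1, ten steps reproduce exp(-1) with an error many orders of magnitude below that of a fourth-order scheme at the same step size, illustrating the controllable precision the abstract refers to.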
Kinematic Identification of Parallel Mechanisms by a Divide and Conquer Strategy
DEFF Research Database (Denmark)
Durango, Sebastian; Restrepo, David; Ruiz, Oscar
2010-01-01
using the inverse calibration method. The identification poses are selected optimizing the observability of the kinematic parameters from a Jacobian identification matrix. With respect to traditional identification methods the main advantages of the proposed Divide and Conquer kinematic identification...... strategy are: (i) reduction of the kinematic identification computational costs, (ii) improvement of the numerical efficiency of the kinematic identification algorithm and, (iii) improvement of the kinematic identification results. The contributions of the paper are: (i) The formalization of the inverse...... calibration method as the Divide and Conquer strategy for the kinematic identification of parallel symmetrical mechanisms and, (ii) a new kinematic identification protocol based on the Divide and Conquer strategy. As an application of the proposed kinematic identification protocol the identification...
Fuzzy Algorithm for the Detection of Incidents in the Transport System
Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Stroganov, Victor Yu.
2016-01-01
This paper proposes an algorithm for the management of traffic incidents, aimed at minimizing the impact of incidents on road traffic in general. The proposed algorithm is based on the theory of fuzzy sets and provides identification of accidents, as well as the adoption of appropriate measures to address them as soon as possible. A…
Minimum Probability of Error-Based Equalization Algorithms for Fading Channels
Directory of Open Access Journals (Sweden)
Janos Levendovszky
2007-06-01
Full Text Available Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE and guarantee better performance than the traditional zero forcing (ZF or minimum mean square error (MMSE algorithms. The new equalization methods require channel state information which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
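The ZF and MMSE baselines mentioned above can be sketched as a least-squares tap design over the channel convolution matrix. This is a generic textbook-style illustration, not the paper's PE-minimizing algorithm; the channel taps, tap count L, and delay below are made-up:

```python
import numpy as np

def linear_equalizer(h, L, delay, snr_db=None):
    # Design L equalizer taps for an FIR channel h.
    # snr_db=None gives the zero-forcing (least-squares) design;
    # a finite SNR adds the MMSE noise regularization term.
    n = L + len(h) - 1
    H = np.zeros((n, L))
    for i in range(L):
        H[i:i + len(h), i] = h          # channel convolution matrix
    e = np.zeros(n)
    e[delay] = 1.0                      # desired combined response
    if snr_db is None:
        w, *_ = np.linalg.lstsq(H, e, rcond=None)
    else:
        sigma2 = 10.0 ** (-snr_db / 10.0)
        w = np.linalg.solve(H.T @ H + sigma2 * np.eye(L), H.T @ e)
    return w

w = linear_equalizer([1.0, 0.5], L=10, delay=0)
```

Convolving the channel with the designed taps should give a combined response close to a unit impulse at the chosen delay; the MMSE variant deliberately tolerates residual ISI to avoid noise amplification.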
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
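The quote volatility ratio is defined by the authors over best-quote oscillations at very short horizons; the following is only a rough illustrative proxy (counting direction reversals in a quote stream), not their exact measure, and the function name and threshold are made-up:

```python
def quote_volatility_ratio(quotes):
    # Illustrative proxy: fraction of consecutive best-quote changes
    # that reverse direction. Rapid flickering between two price
    # levels, typical of algorithmic quoting, drives this toward 1;
    # a steadily trending quote stream drives it toward 0.
    moves = [b - a for a, b in zip(quotes, quotes[1:]) if b != a]
    if len(moves) < 2:
        return 0.0
    reversals = sum(1 for m1, m2 in zip(moves, moves[1:]) if m1 * m2 < 0)
    return reversals / (len(moves) - 1)
```

A flickering stream such as 10, 11, 10, 11, … scores 1.0, while a monotone stream scores 0.0, which is the qualitative contrast the two proposed ratios are built to capture.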
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Jet observables without jet algorithms
Energy Technology Data Exchange (ETDEWEB)
Bertolini, Daniele; Chan, Tucker; Thaler, Jesse [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States)
2014-04-02
We introduce a new class of event shapes to characterize the jet-like structure of an event. Like traditional event shapes, our observables are infrared/collinear safe and involve a sum over all hadrons in an event, but like a jet clustering algorithm, they incorporate a jet radius parameter and a transverse momentum cut. Three of the ubiquitous jet-based observables — jet multiplicity, summed scalar transverse momentum, and missing transverse momentum — have event shape counterparts that are closely correlated with their jet-based cousins. Due to their “local” computational structure, these jet-like event shapes could potentially be used for trigger-level event selection at the LHC. Intriguingly, the jet multiplicity event shape typically takes on non-integer values, highlighting the inherent ambiguity in defining jets. By inverting jet multiplicity, we show how to characterize the transverse momentum of the n-th hardest jet without actually finding the constituents of that jet. Since many physics applications do require knowledge about the jet constituents, we also build a hybrid event shape that incorporates (local) jet clustering information. As a straightforward application of our general technique, we derive an event-shape version of jet trimming, allowing event-wide jet grooming without explicit jet identification. Finally, we briefly mention possible applications of our method for jet substructure studies.
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
Full Text Available In the tasks of processing text in natural language, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm combining graph and machine learning approaches is proposed, following the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine learning algorithms alone, due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes further work in this direction promising. The main directions of development are proposed in order to increase the accuracy and throughput of the system.
Embedded gamma spectrometry: new algorithms for spectral analysis
International Nuclear Information System (INIS)
Martin-Burtart, Nicolas
2012-01-01
Airborne gamma spectrometry was first used for mining prospecting, looking mainly for three families: K-40, U-238 and Th-232. The Chernobyl accident acted as a trigger, and over the last fifteen years many new systems have been developed for intervention in case of nuclear accident or for environmental purposes. Depending on their uses, new algorithms were developed, mainly for medium- or high-energy signal extraction. These spectral regions are characteristic of natural emissions (the K-40, U-238 and Th-232 decay chains) and fission products (mainly Cs-137 and Co-60). Below 400 keV, where special nuclear materials emit, these methods can still be used but are greatly imprecise. A new algorithm, called 2-windows (extended to 3), was developed. It allows an accurate extraction, taking the flight altitude into account to minimize false detections. Watching traffic in radioactive materials appeared with homeland security policy a few years ago. This particular use of dedicated sensors requires a new type of algorithm. Previously, one algorithm was very efficient for a particular nuclide or spectral region; now, we need an algorithm able to detect an anomaly wherever it is and whatever it is: industrial, medical or SNM. This work identified two families of methods working under these circumstances. Finally, anomalies have to be identified. The IAEA recommends watching around 30 radionuclides. A brand new identification algorithm was developed, using several rays per element and avoiding identification conflicts. (author) [fr
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
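The expansion/inflation alternation at the heart of the MCL process can be sketched in a few lines. This is a minimal dense-matrix illustration of the core idea, with made-up parameters (inflation 2, fixed iteration count), not van Dongen's implementation:

```python
import numpy as np

def mcl(A, power=2, inflation=2.0, iters=30):
    # Markov Cluster sketch: alternate expansion (matrix power) with
    # inflation (entrywise power followed by column renormalization)
    # on a column-stochastic matrix built from the graph + self-loops.
    M = A + np.eye(len(A))                     # self-loops aid convergence
    M = M / M.sum(axis=0)                      # column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, power)   # expansion: spread flow
        M = M ** inflation                     # inflation: sharpen flow
        M = M / M.sum(axis=0)
    # read clusters off the converged matrix: each row's support
    return {frozenset(np.flatnonzero(row > 1e-6)) for row in M}

# demo: two disjoint triangles should come out as two clusters
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1.0
clusters = mcl(A)
```

Expansion lets flow mix within natural clusters while inflation suppresses the weaker inter-cluster flow, so the limit matrix decomposes into per-cluster attractors.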
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudo code and doing some written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
International Nuclear Information System (INIS)
Lang, A.R.
1979-01-01
Methods of producing sets of records of the internal defects of diamonds as a means of identification of the gems by x-ray topography are described. To obtain the records one can either use (a) monochromatic x-radiation reflected at the Bragg angle from crystallographically equivalent planes of the diamond lattice structure, Bragg reflections from each such plane being recorded from a number of directions of view, or (b) white x-radiation incident upon the diamond in directions having a constant angular relationship to each equivalent axis of symmetry of the diamond lattice structure, Bragg reflections being recorded for each direction of the incident x-radiation. By either method an overall point-to-point three dimensional representation of the diamond is produced. (U.K.)
A fast readout algorithm for Cluster Counting/Timing drift chambers on a FPGA board
Energy Technology Data Exchange (ETDEWEB)
Cappelli, L. [Università di Cassino e del Lazio Meridionale (Italy); Creti, P.; Grancagnolo, F. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Pepino, A., E-mail: Aurora.Pepino@le.infn.it [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Tassielli, G. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Fermilab, Batavia, IL (United States); Università Marconi, Roma (Italy)
2013-08-01
A fast readout algorithm for Cluster Counting and Timing purposes has been implemented and tested on a Virtex 6 core FPGA board. The algorithm analyses and stores data coming from a Helium based drift tube instrumented by 1 GSPS fADC and represents the outcome of balancing between cluster identification efficiency and high speed performance. The algorithm can be implemented in electronics boards serving multiple fADC channels as an online preprocessing stage for drift chamber signals.
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to the infinite tree data structure, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
A tunable algorithm for collective decision-making.
Pratt, Stephen C; Sumpter, David J T
2006-10-24
Complex biological systems are increasingly understood in terms of the algorithms that guide the behavior of system components and the information pathways that link them. Much attention has been given to robust algorithms, or those that allow a system to maintain its functions in the face of internal or external perturbations. At the same time, environmental variation imposes a complementary need for algorithm versatility, or the ability to alter system function adaptively as external circumstances change. An important goal of systems biology is thus the identification of biological algorithms that can meet multiple challenges rather than being narrowly specified to particular problems. Here we show that emigrating colonies of the ant Temnothorax curvispinosus tune the parameters of a single decision algorithm to respond adaptively to two distinct problems: rapid abandonment of their old nest in a crisis and deliberative selection of the best available new home when their old nest is still intact. The algorithm uses a stepwise commitment scheme and a quorum rule to integrate information gathered by numerous individual ants visiting several candidate homes. By varying the rates at which they search for and accept these candidates, the ants yield a colony-level response that adaptively emphasizes either speed or accuracy. We propose such general but tunable algorithms as a design feature of complex systems, each algorithm providing elegant solutions to a wide range of problems.
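The stepwise-commitment-plus-quorum scheme described above can be caricatured with a deterministic toy model. The rates, thresholds, and function name below are made-up for illustration, not the fitted parameters of the Temnothorax model:

```python
def emigrate(qualities, accept_rate, quorum):
    # Stepwise commitment with a quorum rule (deterministic toy):
    # each candidate nest gains committed scouts at a rate proportional
    # to its quality; the first site whose population reaches the
    # quorum is chosen. With noisy quality estimates, a low quorum
    # (crisis mode) buys speed at the cost of accuracy, while a high
    # quorum (deliberative mode) does the opposite.
    pop = [0.0] * len(qualities)
    t = 0
    while True:
        t += 1
        for i, q in enumerate(qualities):
            pop[i] += accept_rate * q
        winners = [i for i, p in enumerate(pop) if p >= quorum]
        if winners:
            return max(winners, key=lambda i: pop[i]), t

site_fast, t_fast = emigrate([1.0, 2.0], accept_rate=0.5, quorum=2.0)
site_slow, t_slow = emigrate([1.0, 2.0], accept_rate=0.5, quorum=10.0)
```

Here only the quorum parameter is tuned between "fast" and "slow" runs, mirroring the paper's claim that one algorithm with adjustable parameters spans the speed-accuracy trade-off.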
Parallel clustering algorithm for large-scale biological data sets.
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction procedure and the affinity propagation algorithm. A memory-shared architecture is used to construct the similarity matrix, and a distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
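For reference, the serial affinity propagation message passing that the paper parallelizes can be sketched as follows. This is a generic implementation of the standard responsibility/availability updates with made-up demo data, not the paper's parallel code:

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    # Standard affinity propagation: alternate responsibility (R) and
    # availability (A) updates over the similarity matrix S, with
    # damping; exemplars are points with positive diagonal of A + R.
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    idx = np.arange(n)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        k_best = AS.argmax(axis=1)
        best = AS[idx, k_best]
        AS[idx, k_best] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - best[:, None]
        Rnew[idx, k_best] = S[idx, k_best] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[idx, idx] = R[idx, idx]
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew[idx, idx].copy()
        Anew = np.minimum(Anew, 0)
        Anew[idx, idx] = diag
        A = damping * A + (1 - damping) * Anew
    return np.flatnonzero(np.diag(A + R) > 0)

# demo: two well-separated 1D clusters, median preference (made-up data)
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(pts[:, None] - pts[None, :]) ** 2
np.fill_diagonal(S, np.median(S[~np.eye(6, dtype=bool)]))
exemplars = affinity_propagation(S)
```

The O(n^2) similarity matrix and the dense message matrices visible here are exactly the memory and runtime bottlenecks the paper's shared-memory and distributed architectures attack.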
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Shortest path problems: road network on cities, and we want to navigate between cities. ... The rest of the talk: computing connectivities between all pairs of vertices; a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
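A minimal sketch of the replica-exchange idea described above, for a toy 1D double-well potential. The potential, step sizes, temperatures, and exchange schedule are made-up; the review's algorithms target all-atom biomolecular models:

```python
import numpy as np

def remc(betas, steps, seed=0):
    # Replica-exchange Monte Carlo on U(x) = (x^2 - 1)^2: each replica
    # runs Metropolis at its own inverse temperature beta, and adjacent
    # replicas periodically attempt a configuration swap accepted with
    # probability min(1, exp[(b_i - b_j)(U_i - U_j)]).
    rng = np.random.default_rng(seed)
    U = lambda x: (x * x - 1.0) ** 2
    x = np.zeros(len(betas))
    mean_U = np.zeros(len(betas))
    for step in range(steps):
        for i, b in enumerate(betas):
            prop = x[i] + rng.normal(0.0, 0.5)
            dU = U(prop) - U(x[i])
            if dU <= 0 or rng.random() < np.exp(-b * dU):
                x[i] = prop                     # Metropolis accept
        if step % 10 == 0:                      # exchange attempt
            i = rng.integers(len(betas) - 1)
            d = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
            if d >= 0 or rng.random() < np.exp(d):
                x[i], x[i + 1] = x[i + 1], x[i]
        mean_U += U(x)
    return mean_U / steps

avg_energy = remc(betas=[0.2, 1.0, 5.0], steps=20000)
```

The hot replicas cross the barrier freely and the swaps ferry those configurations down to the cold replica, which is how the method escapes the local-minimum trapping the abstract describes.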
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
Sparse Matrix for ECG Identification with Two-Lead Features
Directory of Open Access Journals (Sweden)
Kuo-Kun Tseng
2015-01-01
Full Text Available Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.
Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption
Directory of Open Access Journals (Sweden)
Zheping Yan
2014-01-01
Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environment and sensor noises into consideration, the identification problem is formulated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. To make the algorithm recursive, a propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
Effect of size heterogeneity on community identification in complex networks
Energy Technology Data Exchange (ETDEWEB)
Danon, L.; Diaz-Guilera, A.; Arenas, A.
2008-01-01
Identifying community structure can be a potent tool in the analysis and understanding of the structure of complex networks. Up to now, methods for evaluating the performance of identification algorithms use ad-hoc networks with communities of equal size. We show that inhomogeneities in community sizes can and do affect the performance of algorithms considerably, and propose an alternative method which takes these factors into account. Furthermore, we propose a simple modification of the algorithm proposed by Newman for community detection (Phys. Rev. E 69 066133) which treats communities of different sizes on an equal footing, and show that it outperforms the original algorithm while retaining its speed.
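The modularity Q that Newman's algorithm (Phys. Rev. E 69 066133) greedily optimizes can be computed directly from its definition, Q = (1/2m) Σ_uv [A_uv − k_u·k_v/2m] δ(c_u, c_v). A plain-Python sketch (dict-of-sets adjacency; illustrative only, not the paper's modified algorithm):

```python
def modularity(adj, comm):
    """Newman's modularity Q for an undirected graph.
    adj: dict node -> set of neighbours; comm: dict node -> community label."""
    m2 = sum(len(nbrs) for nbrs in adj.values())  # 2m: each edge counted twice
    q = 0.0
    for u in adj:
        for v in adj:
            if comm[u] != comm[v]:
                continue
            a_uv = 1.0 if v in adj[u] else 0.0
            q += a_uv - len(adj[u]) * len(adj[v]) / m2
    return q / m2

# Two disconnected triangles, one community each: the textbook value Q = 0.5.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
       3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
```

Size heterogeneity enters exactly here: communities with very different degree sums contribute very differently to the k_u·k_v/2m null-model term.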
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
The research on algorithms for optoelectronic tracking servo control systems
Zhu, Qi-Hai; Zhao, Chang-Ming; Zhu, Zheng; Li, Kun
2016-10-01
The photoelectric servo control system based on PC controllers is mainly used to control the speed and position of the load. This paper analyzes the mathematical modeling and system identification of the servo system. On the control-algorithm side, the IP regulator, fuzzy PID, Active Disturbance Rejection Control (ADRC) and adaptive algorithms are compared and analyzed. A PI-P control algorithm is proposed which retains the fast response of the PI regulator while overcoming the shortcomings of the IP regulator. The control system shows good starting performance and load-disturbance rejection over a wide range. Experimental results show that the system performs well under the PI-P control algorithm.
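As a baseline for the regulators being compared, a discrete-time PI speed loop on a first-order plant can be simulated in a few lines. This is a generic illustration of the PI building block, not the authors' PI-P structure or their identified servo model; the plant constants below are arbitrary assumptions:

```python
def simulate_pi(kp, ki, setpoint=1.0, dt=0.001, steps=2000, tau=0.05, gain=2.0):
    """Discrete PI speed loop on a first-order plant y' = (gain*u - y)/tau,
    integrated with explicit Euler. The integral term drives the
    steady-state error to zero; kp sets the speed of response."""
    y, integ = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ          # PI control law
        y += dt * (gain * u - y) / tau     # plant update
    return y
```

With kp=2.0, ki=20.0 the loop settles well within the simulated 2 seconds and tracks the setpoint with essentially zero steady-state error.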
Optimized design of embedded DSP system hardware supporting complex algorithms
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, together with a large FLASH, the system permits loading and running complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Owing to the characteristics above, the hardware is a good platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that the hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.
Directory of Open Access Journals (Sweden)
Guillermo Sanchez-Diaz
2012-11-01
Full Text Available In this paper, we introduce a fast implementation of the CT EXT algorithm for testor property identification, based on an accumulative binary tuple. The fast implementation of the CT EXT algorithm (one of the fastest algorithms reported) is designed to generate all typical testors from a training matrix, requiring a reduced number of operations. Experimental results using this fast implementation, and a comparison with other state-of-the-art algorithms that generate typical testors, are presented.
Closed and Open Loop Subspace System Identification of the Kalman Filter
Directory of Open Access Journals (Sweden)
David Di Ruscio
2009-04-01
Full Text Available Some methods for consistent closed loop subspace system identification presented in the literature are analyzed and compared to a recently published subspace algorithm for both open and closed loop data, the DSR_e algorithm. Some new variants of this algorithm are presented and discussed. Simulation experiments are included in order to illustrate whether the algorithms are variance efficient or not.
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm outperforms the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which indicates that algorithms designed automatically by computers can compete with algorithms designed by human beings.
Noise Reduction with Microphone Arrays for Speaker Identification
Energy Technology Data Exchange (ETDEWEB)
Cohen, Z
2011-12-22
Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist, but they are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Specifically, acoustic background noise causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) Test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) Characterize and compare different multichannel noise reduction algorithms; (3) Provide recommendations for using these multichannel algorithms; and (4) Ultimately answer the question - Can the use of microphone arrays aid in speaker identification?
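The simplest way a microphone array exploits spatial information is delay-and-sum beamforming: each channel is shifted by its steering delay and the channels are averaged, so coherent speech adds up while spatially uncorrelated noise partially cancels. A minimal integer-delay sketch (an illustration of the principle, not one of the report's evaluated algorithms):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Shift each microphone channel forward by its steering delay
    (in whole samples) and average. Coherent signal adds in phase;
    uncorrelated noise power drops roughly as 1/M for M microphones."""
    out = np.zeros(max(len(c) for c in channels))
    for sig, d in zip(channels, delays):
        out[:len(sig) - d] += sig[d:]
    return out / len(channels)
```

In practice the steering delays come from the array geometry and the estimated direction of the speaker, and fractional delays require interpolation.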
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
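The bang-off-bang structure described above (full acceleration, coast, full braking) is easy to see in one dimension. The sketch below propagates that three-phase profile kinematically; it is illustrative only, with hypothetical parameter names, not JPL's RCA implementation or its look-up table:

```python
def bang_off_bang(x0, v0, a_max, t_burn, t_coast, direction=1.0):
    """Propagate a 1-D bang-off-bang evasion profile: accelerate at
    direction*a_max for t_burn, coast for t_coast, then brake at the
    opposite acceleration for t_burn, which cancels the velocity gained
    during the burn. Returns the final (position, velocity)."""
    a = direction * a_max
    # phase 1: full acceleration
    x = x0 + v0 * t_burn + 0.5 * a * t_burn ** 2
    v = v0 + a * t_burn
    # phase 2: coast
    x += v * t_coast
    # phase 3: equal and opposite burn
    x += v * t_burn - 0.5 * a * t_burn ** 2
    v -= a * t_burn
    return x, v
```

In the actual algorithm, (t_burn, t_coast, direction) are the searched parameters and the collision geometry indexes precomputed optima; here they are simply inputs.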
System Identification A Frequency Domain Approach
Pintelon, Rik
2012-01-01
System identification is a general term used to describe mathematical tools and algorithms that build dynamical models from measured data. Used for prediction, control, physical interpretation, and the designing of any electrical systems, they are vital in the fields of electrical, mechanical, civil, and chemical engineering. Focusing mainly on frequency domain techniques, System Identification: A Frequency Domain Approach, Second Edition also studies in detail the similarities and differences with the classical time domain approach. It highlights many of the important steps in the identi
Directory of Open Access Journals (Sweden)
S. D. Kulik
2010-03-01
Full Text Available A modification of the algorithm for identification of an informational object, used for identifying the writer of hand-written texts at the automated workplace of a forensic expert, is presented. As the modification, it is proposed to use association rule discovery to determine statistically dependent sets of features of hand-written capital letters of the Russian language. The algorithm was validated on a set of 691 samples of hand-written documents, for which about 2000 identifying features were defined. The modification of the identification algorithm lowers the error rate and raises the quality of the decisions made for information security.
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
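The canonical partitional algorithm covered by such surveys is k-means (Lloyd's algorithm): alternate between assigning points to the nearest centroid and recomputing each centroid as its cluster mean. A minimal dependency-free sketch (naive initialization from the first k points; real implementations use k-means++ or similar):

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on points given as tuples of coordinates.
    Returns (centroids, clusters). Naive init: first k points."""
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # assignment step: nearest centroid by squared Euclidean distance
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # update step: centroid = mean of its cluster (keep old if empty)
        centroids = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters
```

On two well-separated blobs this converges to the blob means within a few iterations.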
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection with reconstruction using a fibula graft, and radical resection with reconstruction using a rib graft, and their recurrence rates are reviewed through a study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service provides the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and partially reimplemented in Java. The goal of the projec...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
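The evolutionary operators the book's first part introduces (selection, crossover, mutation) fit in a few lines for a toy problem. Below is a minimal generational GA for one-max (maximize the number of 1-bits), with tournament selection, one-point crossover, and bitwise mutation; it is an illustrative sketch under stated toy parameters, not code from the book:

```python
import random

def one_max_ga(n_bits=20, pop_size=30, gens=60, p_mut=0.05, seed=1):
    """Minimal generational GA for one-max. Fitness = number of 1-bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection (size 3) of two parents
            p1 = max(rng.sample(pop, 3), key=sum)
            p2 = max(rng.sample(pop, 3), key=sum)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            # bitwise mutation with probability p_mut per bit
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=sum)
```

The parameter choices (population size, mutation rate, tournament size) are exactly the tuning and control knobs the book's first part surveys.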
Comparison of Parameter Identification Techniques
Directory of Open Access Journals (Sweden)
Eder Rafael
2016-01-01
Full Text Available Model-based control of mechatronic systems requires excellent knowledge about the physical behavior of each component. For several types of components of a system, e.g. mechanical or electrical ones, the dynamic behavior can be described by means of a mathematical model consisting of a set of differential equations, difference equations and/or algebraic constraint equations. Knowledge of a realistic mathematical model and its parameter values is essential to represent the behavior of a mechatronic system. Frequently it is hard or impossible to obtain all required values of the model parameters from the producer, so an appropriate parameter estimation technique is required to compute the missing parameters. A variety of parameter identification techniques can be found in the literature, but their suitability depends on the mathematical model. Previous work dealt with the automatic assembly of mathematical models of serial and parallel robots with drives and controllers within the dynamic multibody simulation code HOTINT as a fully-fledged mechatronic simulation. Several parameters of such robot models were identified successfully by our embedded algorithm. The present work proposes an improved version of the identification algorithm with higher performance. The quality of the identified parameter values and the computational effort are compared with another standard technique.
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
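The "many weak rules of thumb" idea is concrete in AdaBoost with decision stumps: each round fits the stump with the lowest weighted error, then re-weights the data so misclassified points count more. A textbook sketch for 1-D data with labels in {-1, +1} (illustrative, not code from the book):

```python
import math

def adaboost_stumps(xs, ys, rounds=10):
    """AdaBoost with threshold stumps on 1-D data. Each stump predicts
    s if x > t else -s. Returns the weighted-majority predictor."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, sign)
    thresholds = sorted(set(xs))
    for _ in range(rounds):
        best = None
        for t in thresholds:
            for s in (1, -1):
                preds = [s if x > t else -s for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, s, preds)
        err, t, s, preds = best
        err = max(err, 1e-12)                     # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)   # weak learner's weight
        ensemble.append((alpha, t, s))
        # re-weight: boost the weight of misclassified examples
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        z = sum(w)
        w = [wi / z for wi in w]
    def predict(x):
        score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
        return 1 if score >= 0 else -1
    return predict
```

The exponential re-weighting step is exactly where boosting's connections to convex optimization and information geometry enter.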
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequences of coupled operations, termed "quantum gates", acting on the quantum analogs of bits, called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high performance applications which deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
Directory of Open Access Journals (Sweden)
V. Krenn
2014-01-01
Full Text Available In histopathological SLIM diagnostics (SLIM: synovial-like interface membrane), particle identification plays an important role alongside the diagnosis of periprosthetic infection. The differences in particle pathogenesis and the variability of materials used in endoprosthetics explain the particle heterogeneity that hampers the diagnostic identification of particles. For this reason, a histopathological particle algorithm has been developed. With minimal methodical complexity, this histopathological particle algorithm offers a guide to identifying prosthesis material particles. Light-microscopic morphological and enzyme-histochemical characteristics as well as polarization-optical properties have been established, and particles are defined by size (microparticles, macroparticles and supra-macroparticles) and definitively characterized in accordance with a dichotomous principle. Based on these criteria, identification and validation of the particles was carried out in 120 joint endoprosthesis pathology cases. A histopathological particle score (HPS) is proposed that summarizes the most important information for the orthopedist, materials scientist and histopathologist concerning particle identification in the SLIM.
Evaluating ortholog prediction algorithms in a yeast model clade.
Directory of Open Access Journals (Sweden)
Leonidas Salichos
Full Text Available BACKGROUND: Accurate identification of orthologs is crucial for evolutionary studies and for functional annotation. Several algorithms have been developed for ortholog delineation, but so far, manually curated genome-scale biological databases of orthologous genes for algorithm evaluation have been lacking. We evaluated four popular ortholog prediction algorithms (MultiParanoid; OrthoMCL; RBH: Reciprocal Best Hit; RSD: Reciprocal Smallest Distance; the last two extended into the clustering algorithms cRBH and cRSD, respectively, so that they can predict orthologs across multiple taxa) against a set of 2,723 groups of high-quality curated orthologs from 6 Saccharomycete yeasts in the Yeast Gene Order Browser. RESULTS: Examination of sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], and accuracy [(TP+TN)/(TP+TN+FP+FN)] across a broad parameter range showed that cRBH was the most accurate and specific algorithm, whereas OrthoMCL was the most sensitive. Evaluation of the algorithms across a varying number of species showed that cRBH had the highest accuracy and lowest false discovery rate [FP/(FP+TP)], followed by cRSD. Of the six species in our set, three descended from an ancestor that underwent whole genome duplication. Subsequent differential duplicate loss events in the three descendants resulted in distinct classes of gene loss patterns, including cases where the genes retained in the three descendants are paralogs, constituting 'traps' for ortholog prediction algorithms. We found that the false discovery rate of all algorithms dramatically increased in these traps. CONCLUSIONS: These results suggest that simple algorithms, like cRBH, may be better ortholog predictors than more complex ones (e.g., OrthoMCL and MultiParanoid) for evolutionary and functional genomics studies where the objective is the accurate inference of single-copy orthologs (e.g., molecular phylogenetics), but that all algorithms fail to accurately predict orthologs when paralogy
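The four evaluation metrics follow directly from the confusion-matrix definitions in the abstract; a small helper makes the comparison reproducible (generic code, not the study's pipeline):

```python
def ortholog_metrics(tp, tn, fp, fn):
    """Confusion-matrix metrics as defined in the study:
    sensitivity TP/(TP+FN), specificity TN/(TN+FP),
    accuracy (TP+TN)/(TP+TN+FP+FN), FDR FP/(FP+TP)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "false_discovery_rate": fp / (fp + tp),
    }
```

For example, 80 true positives, 90 true negatives, 10 false positives and 20 false negatives give sensitivity 0.8, specificity 0.9, accuracy 0.85 and FDR ≈ 0.111.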
A triangle voting algorithm based on double feature constraints for star sensors
Fan, Qiaoyun; Zhong, Xuyang
2018-02-01
A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multiple triangles with its bright neighboring stars and obtains its candidates by a triangle voting process, in which the triangle is considered the basic voting element. In order to accelerate the algorithm and reduce the memory required for the star database, feature extraction is carried out to reduce the dimension of the triangles, and each triangle is described by its base and height. During the identification period, a voting scheme based on double feature constraints is proposed to implement the triangle voting. This scheme guarantees that only a catalog star satisfying both features can vote for the sensor star, which improves robustness towards false stars. Simulation and real star image tests demonstrate that, compared with the other two algorithms, the proposed algorithm is more robust towards position noise, magnitude noise and false stars.
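Reducing a triangle to its two voting features (base and height) is simple plane geometry: take the longest side as the base and derive the height from the area. A geometric sketch using 2-D image coordinates (our simplification; the paper's actual feature extraction and catalog code may differ):

```python
import math

def triangle_features(p1, p2, p3):
    """Return (base, height) for the triangle on three 2-D points:
    base = longest side, height = 2*area/base via Heron's formula."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    sides = sorted([d(p1, p2), d(p2, p3), d(p1, p3)])
    base = sides[-1]
    s = sum(sides) / 2  # semi-perimeter
    area = math.sqrt(max(s * (s - sides[0]) * (s - sides[1]) * (s - sides[2]), 0.0))
    return base, 2 * area / base
```

A catalog star then votes for a sensor star only if both its base and its height match within tolerance, which is the double feature constraint described above.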
International Nuclear Information System (INIS)
Ahmadi, Mohamadreza; Mojallali, Hamed
2012-01-01
Highlights: A new meta-heuristic optimization algorithm. Integration of invasive weed optimization and chaotic search methods. A novel parameter identification scheme for chaotic systems. Abstract: This paper introduces a novel hybrid optimization algorithm that takes advantage of the stochastic properties of chaotic search and the invasive weed optimization (IWO) method. In order to deal with the weaknesses of the conventional method, the proposed chaotic invasive weed optimization (CIWO) algorithm incorporates the capabilities of chaotic search methods. The functionality of the proposed optimization algorithm is investigated through several benchmark multi-dimensional functions. Furthermore, an identification technique for chaotic systems based on the CIWO algorithm is outlined and validated with several examples. Results obtained with the proposed scheme are also provided, demonstrating superior performance with respect to conventional methods.
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.