WorldWideScience

Sample records for classified removable electronic

  1. Sixty Percent Conceptual Design Report: Enterprise Accountability System for Classified Removable Electronic Media

    Energy Technology Data Exchange (ETDEWEB)

    B. Gardiner; L. Graton; J. Longo; T. Marks, Jr.; B. Martinez; R. Strittmatter; C. Woods; J. Joshua

    2003-05-03

    Classified removable electronic media (CREM) are tracked in several different ways at the Laboratory. To ensure greater security for CREM, we are creating a single, Laboratory-wide system to track CREM. We are researching technology that can be used to electronically tag and detect CREM, designing a database to track the movement of CREM, and planning to test the system at several locations around the Laboratory. We focus on affixing "smart tags" to items we want to track and installing gates at pedestrian portals to detect the entry or exit of tagged items. By means of an enterprise database, the system will track the entry and exit of tagged items into and from CREM storage vaults, vault-type rooms, access corridors, or boundaries of secure areas, as well as the identity of the person carrying an item. We are considering several options for tracking items that can give greater security, but at greater expense.
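    The enterprise tracking described above boils down to logging portal events and maintaining each tagged item's last known location. A minimal Python sketch of that idea; the class, fields, and item IDs are hypothetical, not the Laboratory's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CremTracker:
    """Toy model of portal-event tracking for tagged media (hypothetical schema)."""
    locations: dict = field(default_factory=dict)  # item_id -> current monitored area
    log: list = field(default_factory=list)        # audit trail of portal events

    def portal_event(self, item_id: str, person: str, area: str, direction: str):
        # Record who carried which tagged item through which portal.
        self.log.append((item_id, person, area, direction))
        if direction == "enter":
            self.locations[item_id] = area
        else:  # "exit"
            self.locations[item_id] = None  # outside any monitored area

tracker = CremTracker()
tracker.portal_event("TAPE-001", "jdoe", "Vault-A", "enter")
tracker.portal_event("TAPE-001", "jdoe", "Vault-A", "exit")
```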

  2. Removal of micropollutants with coarse-ground activated carbon for enhanced separation with hydrocyclone classifiers.

    Science.gov (United States)

    Otto, N; Platz, S; Fink, T; Wutscherk, M; Menzel, U

    2016-01-01

    One key technology to eliminate organic micropollutants (OMP) from wastewater effluent is adsorption using powdered activated carbon (PAC). To avoid a discharge of highly loaded PAC particles into natural water bodies, a separation stage has to be implemented. Commonly, large settling tanks and flocculation filters with the application of coagulants and flocculation aids are used. In this study, a multi-hydrocyclone classifier with a downstream cloth filter has been investigated on a pilot plant as a space-saving alternative with no need for dosing of chemical additives. To improve the separation, a coarser-ground PAC type was compared to a standard PAC type with regard to OMP elimination results as well as separation performance. With a PAC dosing rate of 20 mg/l, an average of 64.7 wt% of the standard PAC and 79.5 wt% of the coarse-ground PAC could be separated in the hydrocyclone classifier. A total average separation efficiency of 93-97 wt% could be reached with the combination of hydrocyclone classifier and cloth filter. Nonetheless, the OMP elimination of the coarse-ground PAC was not sufficient to compete with the standard PAC. Further research and development is necessary to find applicable coarse-grained PAC types with adequate OMP elimination capabilities. PMID:27232411
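    The reported 93-97 wt% total capture is consistent with treating the hydrocyclone and cloth filter as separation stages in series. A sketch of that combination rule; the cloth-filter efficiency below is an assumed value, chosen only so the combined figure lands in the reported range:

```python
def combined_efficiency(eta_cyclone, eta_filter):
    """Overall capture of two separation stages in series (independent-stage assumption)."""
    return 1 - (1 - eta_cyclone) * (1 - eta_filter)

# Coarse-ground PAC: 79.5 % captured by the hydrocyclone classifier (from the abstract).
# 76 % cloth-filter efficiency is a hypothetical value for illustration.
overall = combined_efficiency(0.795, 0.76)
print(f"{overall:.1%}")  # 95.1%
```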

  3. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    International Nuclear Information System (INIS)

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new feature reduction method combining feature selection with feature extraction was proposed. The feature selection stage used eight feature-selection algorithms based on information theory to reduce the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, further reducing the dimension of the feature space to 12. Classification of the Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than that of LDA or BP-ANN. Finally, classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier; the classification rates were 98.75% and 100%, respectively.

  4. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-05-15

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new feature reduction method combining feature selection with feature extraction was proposed. The feature selection stage used eight feature-selection algorithms based on information theory to reduce the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, further reducing the dimension of the feature space to 12. Classification of the Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than that of LDA or BP-ANN. Finally, classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier; the classification rates were 98.75% and 100%, respectively.
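    The information-theoretic feature selection step described in the two records above can be illustrated with a toy mutual-information ranking. This is not the authors' code; the sensor features and data are invented:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy of a list of discrete values."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def mutual_information(feature, labels):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete toy features."""
    return entropy(feature) + entropy(labels) - entropy(list(zip(feature, labels)))

def select_top_k(features, labels, k):
    """Rank features (name -> per-sample values) by MI with the class label; keep top k."""
    ranked = sorted(features, key=lambda f: mutual_information(features[f], labels), reverse=True)
    return ranked[:k]

labels = [0, 0, 1, 1]
features = {
    "s1": [0, 0, 1, 1],  # perfectly informative toy sensor feature
    "s2": [0, 1, 0, 1],  # uninformative toy feature
}
print(select_top_k(features, labels, 1))  # ['s1']
```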

  5. Ensemble Classifier Strategy Based on Transient Feature Fusion in Electronic Nose

    Science.gov (United States)

    Bagheri, Mohammad Ali; Montazer, Gholam Ali

    2011-09-01

    In this paper, we test the performance of several ensembles of classifiers, in which each base learner is trained on a different type of extracted feature. Experimental results show the potential benefits introduced by the use of simple ensemble classification systems for the integration of different types of transient features.
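    A plurality (majority) vote is the simplest way to combine base learners trained on different feature types, as in the ensembles tested above. A minimal sketch with hypothetical gas-class labels:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions (one label each) by plurality vote."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base learners, each trained on a different transient feature set.
print(majority_vote(["gasA", "gasA", "gasB"]))  # gasA
```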

  6. High-Energy Electron Beam Application to Air Pollutants Removal

    International Nuclear Information System (INIS)

    The advantage of the electron beam (EB) process in pollutant removal is its high efficiency in transferring large amounts of energy directly into the matter under treatment. Its main disadvantage, the high investment cost of the accelerator, may be effectively overcome in the future through new accelerator developments. The potential use of medium- to high-energy, high-power EB accelerators for air pollutant removal is demonstrated in [1]. The lower electrical efficiencies of accelerators with higher energies are partially compensated by the lower electron energy losses in the beam windows. In addition, accelerators with higher electron energies can provide higher beam powers with lower beam currents [1]. The total EB energy losses (backscattering, windows and the intervening air space) are substantially lower at higher EB incident energy: the useful EB energy is under 50% at 0.5 MeV and about 95% above 3 MeV. In view of these arguments we decided to study the application of high-energy EB to air pollutant removal. Two electron beam accelerators are available for our studies: the electron linear accelerators ALIN-10 and ALID-7, built in the Electron Accelerator Laboratory, INFLPR, Bucharest, Romania. Both accelerators are of the traveling-wave type, operating at a wavelength of 10 cm. They utilize tunable S-band magnetrons, EEV M 5125 type, delivering 2 MW of power in 4 μs pulses. The accelerating structure is a disk-loaded tube operating in the π/2 mode. The optimum values of the EB peak current IEB and EB energy EEB that produce maximum output power PEB for a fixed pulse duration τEB and repetition frequency fEB are as follows: for ALIN-10, EEB = 6.23 MeV, IEB = 75 mA, PEB = 164 W (fEB = 100 Hz, τEB = 3.5 μs); for ALID-7, EEB = 5.5 MeV, IEB = 130 mA, PEB = 670 W (fEB = 250 Hz, τEB = 3.75 μs). This paper presents a specially designed installation, named SDI-1, and several representative results obtained by applying high-energy EB to SO2, NOx and VOCs.
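    The quoted output powers follow from the duty-cycle average P = E · I_peak · f · τ. A quick check against the ALIN-10 and ALID-7 figures above:

```python
def average_beam_power(energy_MeV, peak_current_A, rep_rate_Hz, pulse_us):
    """Average power of a pulsed linac beam: P = E * I_peak * f * tau (duty-cycle average)."""
    return energy_MeV * 1e6 * peak_current_A * rep_rate_Hz * pulse_us * 1e-6

print(round(average_beam_power(6.23, 0.075, 100, 3.5)))   # ALIN-10: 164 W
print(round(average_beam_power(5.5, 0.130, 250, 3.75)))   # ALID-7: 670 W
```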

  7. Terra MODIS Band 27 Electronic Crosstalk Effect and Its Removal

    Science.gov (United States)

    Sun, Junqiang; Xiong, Xiaoxiong; Madhavan, Sriharsha; Wenny, Brian

    2012-01-01

    The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the NASA Earth Observing System (EOS). The first MODIS instrument was launched in December 1999 on board the Terra spacecraft. MODIS has 36 bands, covering a wavelength range from 0.4 micron to 14.4 micron. MODIS band 27 (6.72 micron) is a water vapor band, which is designed to be insensitive to Earth surface features. In recent Earth View (EV) images of Terra band 27, surface feature contamination is clearly seen and striping has become very pronounced. In this paper, it is shown that band 27 is impacted by electronic crosstalk from bands 28-30. An algorithm using a linear approximation is developed to correct the crosstalk effect. The crosstalk coefficients are derived from Terra MODIS lunar observations. They show that the crosstalk is strongly detector dependent and that the crosstalk pattern has changed dramatically since launch. The crosstalk contributions to the instrument response of band 27 were positive early in the mission but became negative and much larger in magnitude at later stages of the mission for most detectors of the band. The algorithm is applied to both Black Body (BB) calibration and MODIS L1B products. With the crosstalk effect removed, the calibration coefficients of Terra MODIS band 27 derived from the BB show smaller detector differences. With the algorithm applied to MODIS L1B products, the Earth surface features are significantly removed and the striping is substantially reduced in the images of the band. The approach developed in this report for removal of the electronic crosstalk effect can be applied to other MODIS bands if similar crosstalk behaviors occur.
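    The linear crosstalk correction described above amounts to subtracting a weighted sum of the sending bands' responses from band 27. A schematic sketch; the coefficients and signal values are illustrative, not derived from MODIS lunar data:

```python
def correct_crosstalk(b27, senders, coeffs):
    """Linear crosstalk correction: subtract the weighted contributions of the
    sending bands (28-30) from the measured band-27 response. In practice the
    coefficients are detector-dependent and drift over the mission."""
    return b27 - sum(c * s for c, s in zip(coeffs, senders))

# Hypothetical numbers for illustration only (note a coefficient may be negative).
corrected = correct_crosstalk(100.0, [20.0, 10.0, 5.0], [0.02, -0.01, 0.005])
print(round(corrected, 3))  # 99.675
```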

  8. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come...

  9. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  10. Evaluation of sustainable electron donors for nitrate removal in different water media.

    Science.gov (United States)

    Fowdar, Harsha S; Hatt, Belinda E; Breen, Peter; Cook, Perran L M; Deletic, Ana

    2015-11-15

    An external electron donor is usually included in wastewater and groundwater treatment systems to enhance nitrate removal through denitrification. The choice of electron donor is critical for both satisfactory denitrification rates and sustainable long-term performance. Electron donors that are waste products are preferred to pure organic chemicals. Different electron donors have been used to treat different water types, and little is known as to whether any electron donors are suitable for multiple applications. Seven different carbon-rich waste products, including liquid and solid electron donors, were studied in comparison to pure acetate. Batch-scale tests were used to measure their ability to reduce nitrate concentrations in a pure nutrient solution, light greywater, secondary-treated wastewater and tertiary-treated wastewater. The tested electron donors removed oxidised nitrogen (NOx) at varying rates, ranging from 48 mg N/L/d (acetate) to 0.3 mg N/L/d (hardwood). The concentrations of transient nitrite accumulation also varied across the electron donors. The different water types had an influence on NOx removal rates, the extent of which was dependent on the type of electron donor. Overall, the highest rates were recorded in light greywater, followed by the pure nutrient solution and the two partially treated wastewaters. Cotton wool and rice hulls were found to be promising electron donors, with good NOx removal rates, lower leachable nutrients, and the least variation in performance across water types. PMID:26379204
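    Since the reported removal rates are roughly constant (mg N/L/d), the time to draw nitrate down to a target level scales inversely with the donor's rate. A sketch using the two extreme rates above; the start and target concentrations are assumed for illustration:

```python
def days_to_target(c0_mgN_L, target_mgN_L, rate_mgN_L_d):
    """Time to draw NOx from c0 down to target, assuming the roughly constant
    (zero-order) removal rates reported for each electron donor."""
    return (c0_mgN_L - target_mgN_L) / rate_mgN_L_d

# Reported extremes: acetate 48 mg N/L/d vs hardwood 0.3 mg N/L/d.
print(days_to_target(10, 1, 48))           # 0.1875 d with acetate
print(round(days_to_target(10, 1, 0.3)))   # 30 d with hardwood
```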

  11. Multiple-electron removal and molecular fragmentation of CO by fast F4+ impact

    International Nuclear Information System (INIS)

    Multiple-electron removal from and molecular fragmentation of carbon monoxide molecules caused by collisions with 1-MeV/amu F4+ ions were studied using the coincidence time-of-flight technique. In these collisions, multiple-electron removal of the target molecule is a dominant process. Cross sections for the different levels of ionization of the CO molecule during the collision were determined. The relative cross sections of ionization decrease with increasing number of electrons removed in a similar way as seen in atomic targets. This behavior is in agreement with a two-step mechanism, where first the molecule is ionized by a Franck-Condon ionization and then the molecular ion dissociates. Most of the highly charged intermediate states of the molecule dissociate rapidly. Only CO+ and CO2+ molecular ions have been seen to survive long enough to be detected as molecular ions. The relative cross sections for the different breakup channels were evaluated for collisions in which the molecule broke into two charged fragments as well as for collisions where only a single charged molecular ion or fragment were produced. The average charge state of each fragment resulting from COQ+ → Ci+ + Oj+ breakup increases with the number of electrons removed from the molecule, approximately following the relationship ī = j̄ = Q/2 as long as K-shell electrons are not removed. This does not mean that the charge-state distribution is exactly symmetric, as, in general, removing electrons from the carbon fragment is slightly more likely than removing electrons from the oxygen due to the difference in binding energy. The cross sections for molecular breakup into a charged fragment and a neutral fragment drop rapidly with an increasing number of electrons removed

  12. Device for the removal of sulfur dioxide from exhaust gas by pulsed energization of free electrons

    International Nuclear Information System (INIS)

    The performance of a new device using pulsed streamer corona for the removal of sulfur dioxide from humid air has been evaluated. The pulsed streamer corona produces free electrons that enhance gas-phase chemical reactions and convert SO2 to sulfuric acid mist. The SO2 removal efficiency was compared with that of the electron-beam flue-gas treatment process; the comparison demonstrates the advantage of the novel device.

  13. Effect of cathode electron acceptors on simultaneous anaerobic sulfide and nitrate removal in microbial fuel cell.

    Science.gov (United States)

    Cai, Jing; Zheng, Ping; Mahmood, Qaisar

    2016-01-01

    The current investigation reports the effect of cathode electron acceptors on simultaneous sulfide and nitrate removal in two-chamber microbial fuel cells (MFCs). Potassium permanganate and potassium ferricyanide, two common cathode electron acceptors, were evaluated for substrate removal and electricity generation. The abiotic MFCs produced electricity through spontaneous electrochemical oxidation of sulfide. In comparison with the abiotic MFC, the biotic MFC showed better ability for simultaneous nitrate and sulfide removal along with electricity generation. At an external resistance of 1,000 Ω, both MFCs showed good capacities for substrate removal, with nitrogen and sulfate as the main end products. The steady voltage with potassium permanganate was nearly twice that with potassium ferricyanide. Cyclic voltammetry curves confirmed that potassium permanganate had higher catalytic activity than potassium ferricyanide. Potassium permanganate may therefore be a suitable choice as cathode electron acceptor for enhanced electricity generation during simultaneous treatment of sulfide and nitrate in MFCs. PMID:26901739
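    With a fixed external resistor, power scales with the square of the cell voltage, so the reported roughly twofold voltage advantage of permanganate implies roughly fourfold power. A sketch; the voltage values themselves are hypothetical:

```python
def power_mW(voltage_V, resistance_ohm):
    """Power dissipated in the external resistor: P = V^2 / R, in milliwatts."""
    return voltage_V ** 2 / resistance_ohm * 1000

# Hypothetical voltages illustrating the reported ~2x steady-voltage difference
# between permanganate and ferricyanide catholytes at R_ext = 1000 ohm.
p_mn = power_mW(0.6, 1000)  # permanganate
p_fe = power_mW(0.3, 1000)  # ferricyanide
print(round(p_mn / p_fe, 6))  # 4.0 -- doubling the voltage quadruples the power
```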

  14. Characterization of phosphorus removal bacteria in (AO)2 SBR system by using different electron acceptors

    Institute of Scientific and Technical Information of China (English)

    JIANG Yi-feng; WANG Lin; YU Ying; WANG Bao-zhen; LIU Shuo; SHEN Zheng

    2007-01-01

    Characteristics of phosphorus removal bacteria were investigated using three different types of electron acceptors, as well as the positive role of nitrite in the phosphorus removal process. An (AO)2 SBR (anaerobic-aerobic-anoxic-aerobic sequencing batch reactor) was employed to enrich denitrifying phosphorus removal bacteria for simultaneously removing phosphorus and nitrogen via anoxic phosphorus uptake. Ammonium oxidation was controlled at the first phase of the nitrification process. Nitrite-inhibition batch tests illustrated that nitrite was not an inhibitor of the phosphorus uptake process, but served as an alternative electron acceptor to nitrate and oxygen if its concentration was under the inhibition level of 40 mg NO2-N/L. This implied that in addition to the two well-accepted groups of phosphorus removal bacteria (P1, which can only utilize oxygen as electron acceptor, and P2, which can use both oxygen and nitrate), a new group of phosphorus removal bacteria, P3, which could use oxygen, nitrate and nitrite as electron acceptors for phosphorus uptake, was identified in the test system. To characterize the (AO)2 SBR sludge further, the relative populations of the different bacteria in this system, and in another A/O SBR sludge (the seed sludge), were estimated by phosphorus uptake batch tests with oxygen, nitrate or nitrite as the electron acceptor. The results demonstrated that the phosphorus removal capability of the (AO)2 SBR sludge degraded slightly after the A/O sludge was cultivated in the (AO)2 mode over a long period of time. However, the relative populations of the three types of bacteria showed that denitrifying phosphorus removal bacteria (P2 and P3) were significantly enriched, which implied that energy for aeration and COD consumption could, in theory, be reduced.

  15. Advanced heat removal system with porous media for electronic devices

    Energy Technology Data Exchange (ETDEWEB)

    Mahalle, A.M. [Sant Gadge Baba Amravati Univ., Amravati (India). Dept. of Mechanical Engineering; Jajoo, B.N. [Sant Gadge Baba Amravati Univ., Amravati (India). College of Engineering and Technology

    2007-07-01

    High porosity metal foams are primarily utilized in aerospace applications, although their use has been widened to include cooling in electronic packaging. They provide high heat dissipation, and other important applications have been found that take advantage of the thermal properties of metal foam, including compact heat exchangers for airborne equipment; regenerative and dissipative air-cooled condenser towers; and compact heat sinks for electronic power. Metal foam heat exchangers are efficient, compact and lightweight because of their low relative density, open porosity and high thermal conductivity of the cell edges, as well as the large accessible surface area per unit volume and the ability to mix the cooling fluid. This paper presented the results of an investigation whose purpose was to show that metal foam is a strong heat dissipater under different heat inputs. The paper discussed the experimental methodology and described the metal foam sample used in the experiment. The heat transfer coefficient increased with velocity, as did the Reynolds and Nusselt numbers. It was concluded that heat transfer from the foam was governed primarily by the total heat transfer area of the foam rather than by its thermal conductivity. 16 refs., 14 figs.
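    The velocity dependence reported above follows the usual dimensionless groups: Re = vL/ν grows with velocity, and the heat transfer coefficient follows from h = Nu·k/L. A sketch with illustrative air-flow numbers, not the experiment's data:

```python
def reynolds(velocity_m_s, length_m, kinematic_viscosity_m2_s):
    """Re = v * L / nu for flow through the foam channel."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

def heat_transfer_coefficient(nusselt, conductivity_W_mK, length_m):
    """h = Nu * k / L, so h rises with Nu (and hence with velocity)."""
    return nusselt * conductivity_W_mK / length_m

# Illustrative air-flow numbers (hypothetical, not from the experiment):
# 2 m/s air, 1 cm characteristic length, nu = 1.5e-5 m^2/s, k = 0.026 W/m-K, Nu = 20.
re = reynolds(2.0, 0.01, 1.5e-5)
h = heat_transfer_coefficient(20.0, 0.026, 0.01)
print(round(re), round(h))  # 1333 52
```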

  16. Removal of NOx by pulsed, intense relativistic electron beam in distant gas chamber

    International Nuclear Information System (INIS)

    Removal of NOx has been studied using a pulsed, intense relativistic electron beam (IREB). The dependence of the NOx concentration and the NOx removal efficiency on the number of IREB shots has been investigated within a distant gas chamber spatially isolated from the electron beam source. The distant gas chamber is filled with a dry-air-balanced NO gas mixture at a pressure of 270 kPa, and is irradiated by the IREB (2 MeV, 30 A, 35 ns) after it passes through a 1.6-m-long atmosphere. With an initial NO concentration of 88 ppm, ∼70% of the NOx is successfully removed by firing 10 shots of the IREB. The NOx removal efficiency has been found to be 50-155 g/kWh
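    The g/kWh figure can be checked from the pulse parameters: each IREB shot carries E·I·τ joules of beam energy. A sketch of the unit conversion; the removed NOx mass below is an assumed value used only for illustration:

```python
def energy_per_shot_J(energy_MeV, current_A, pulse_ns):
    """Beam energy in one IREB pulse: E * I * tau."""
    return energy_MeV * 1e6 * current_A * pulse_ns * 1e-9

def removal_efficiency_g_per_kWh(removed_g, shots, e_shot_J):
    """Removed mass divided by the total beam energy, converted J -> kWh."""
    return removed_g / (shots * e_shot_J / 3.6e6)

e = energy_per_shot_J(2, 30, 35)  # the abstract's 2 MeV, 30 A, 35 ns pulse
print(round(e, 3))  # 2.1 J per shot
# Hypothetical removed NOx mass (0.6 mg over 10 shots), chosen only to show the conversion:
print(round(removal_efficiency_g_per_kWh(6e-4, 10, e)))  # ~103 g/kWh
```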

  17. A study on the removal of color in dyeing wastewater using electron beam irradiation

    International Nuclear Information System (INIS)

    In this research, electron beam irradiation experiments were carried out on wastewater from different types of dye industry, and on the reactive, acid and disperse dyes that are widely used commercially in industrial dyeing processes. At an electron beam irradiation dose of 2.34 kGy, the color removal efficiency was higher than that of conventional chemical treatment for the reactive dye and the acid dye. Wastewater from the printing dye industry showed the highest measured color among the wastewaters from the different dye industries examined (polyester, cotton T/C, printing, yarn dyeing, and nylon). Electron beam irradiation tests were performed on wastewater from these industries. Color removal rates by electron beam irradiation were higher than those by conventional chemical treatment for wastewater from the cotton T/C and yarn dyeing industries, whose disperse dye contents are low. EA (electron beam irradiation + activated sludge) and CA (chemical treatment + activated sludge) processes were tested for removing color and organic substances from the wastewaters. The EA process gave better color removal for wastewater from the cotton T/C and yarn dyeing industries, whereas the CA process gave better color removal for wastewater from the polyester, printing, and nylon dye industries. The CA process was superior to the EA process in CODMn removal rate for the wastewaters from all the dye industries. However, both the CA and EA processes achieved less than 80 mg/L of BOD5, which is the legal effluent guideline. (author)

  18. Pilot plant for electron beam SO2 and NOx removal from combustion flue gases

    International Nuclear Information System (INIS)

    The Polish pilot plant for electron beam flue gas treatment was built at the Kaweczyn electro-power station. The flue gas flow capacity is 20,000 Nm3/h. The applied technology allows simultaneous removal of SO2 and NOx. The process is dry, and the by-product can be used as fertilizer. The report describes the construction of the pilot plant. Preliminary results of the investigations proved high efficiency of acidic pollutant removal from flue gases. (author). 23 refs, 6 tabs, 24 ills

  19. Removal of iopromide and degradation characteristics in electron beam irradiation process

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Minhwan; Yoon, Yeojoon; Cho, Eunha; Jung, Youmi [Department of Environmental Engineering (YIEST), Yonsei University, 234 Maeji, Heungup, Wonju 220-710 (Korea, Republic of); Lee, Byung-Cheol [Quantum Optics Laboratory, Korea Atomic Energy Research Institute, 1045, Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Paeng, Ki-Jung [Department of Chemistry, Yonsei University, 234 Maeji, Heungup, Wonju 220-710 (Korea, Republic of); Kang, Joon-Wun, E-mail: jwk@yonsei.ac.kr [Department of Environmental Engineering (YIEST), Yonsei University, 234 Maeji, Heungup, Wonju 220-710 (Korea, Republic of)

    2012-08-15

    Highlights: ► Second-order kinetics fitted the overall removal tendency of iopromide. ► In the electron beam/H2O2 process, an enhanced removal rate of iopromide was observed. ► The iopromide removal rate increased in the presence of OH· scavengers. ► Mineralization occurred mainly under the electron beam/H2O2 condition. ► The hydrated electron mainly attacks the iodo group, whereas OH· reacts non-selectively. - Abstract: The aim of this study is to evaluate the removal efficiency of iopromide using electron beam (E-beam) irradiation technology, and its degradation characteristics with the hydroxyl radical (OH·) and the hydrated electron (e-aq). Studies are conducted with different initial concentrations of iopromide in pure water and in the presence of hydrogen peroxide, bicarbonate ion, or sulfite ion. An E-beam absorbed dose of 19.6 kGy was required to achieve 90% degradation of 100 μM iopromide, and the E-beam/H2O2 system increased the removal efficiency in proportion to the amount of OH· generated. In the presence of OH· scavengers (10 mM sulfite ion), the required dose for 90% removal of 100 μM iopromide was only 0.9 kGy. This greatly enhanced removal in the presence of OH· scavengers was unexpected and unlike the results obtained from most advanced oxidation process (AOP) experiments. The enhancement can be explained by a kinetic study using the bimolecular rate constants of each reactive species. To explore the reaction scheme of iopromide with OH· or e-aq and the percent mineralization for the two reaction paths, the total organic carbon (TOC), released iodide, and intermediates were analyzed.
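    The two reported 90% doses (19.6 kGy in pure water vs 0.9 kGy with sulfite) can be compared through a simple exponential dose-response C/C0 = exp(-kD). Note the abstract itself fits second-order kinetics, so this is only an order-of-magnitude sketch:

```python
from math import log, exp

def dose_constant(d90_kGy):
    """k from C/C0 = exp(-k*D): 90 % removal at dose D90 fixes k = ln(10)/D90."""
    return log(10) / d90_kGy

def fraction_remaining(dose_kGy, k):
    """Fraction of iopromide left after a given absorbed dose."""
    return exp(-k * dose_kGy)

k_pure = dose_constant(19.6)  # pure water, 100 uM iopromide
k_scav = dose_constant(0.9)   # with 10 mM sulfite scavenger
print(round(k_scav / k_pure, 1))  # 21.8 -- ~22x larger dose constant with sulfite
```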

  20. Comparison of single-electron removal processes in collisions of electrons, positrons, protons, and antiprotons with hydrogen and helium

    International Nuclear Information System (INIS)

    We present and compare total cross sections for single-electron removal in collisions of electrons, positrons, protons, and antiprotons with atomic hydrogen and helium. These cross sections have been calculated using the classical trajectory Monte Carlo technique in the velocity range of 0.5-7.0 a.u. (6.25-1224 keV/u). The cross sections are compared at equal collision velocities and exhibit differences arising from variations in the mass and sign of charge of the projectile. At low and intermediate velocities these differences are large in both the ionization and charge transfer channels. At high velocities the single-ionization cross sections for these singly charged particles become equal. However, the differences in the single-charge-transfer cross sections for positron and proton impact persist to very large velocities. We extend our previous work [Phys. Rev. A 38, 1866 (1988)] to explain these mass and charge-sign effects in single-electron removal collisions

  1. Power beaming, orbital debris removal, and other space applications of a ground based free electron laser

    OpenAIRE

    Wilder, Benjamin A.

    2010-01-01

    When compared to other laser types, the Free Electron Laser (FEL) provides optimal beam quality for successful atmospheric propagation. Assuming the development and deployment of a megawatt (MW) class, ground- or sea-based FEL, this thesis investigates several proposed space applications including power beaming to satellites, the removal of orbital debris, laser illumination of objects within the solar system for scientific study, and interstellar laser illumination for communications. Po...

  2. A Classifier Ensemble of Binary Classifier Ensembles

    Directory of Open Access Journals (Sweden)

    Sajad Parvin

    2011-09-01

    Full Text Available This paper proposes an innovative combinational algorithm to improve performance in multiclass classification domains. Because a more accurate classifier yields better classification performance, researchers in the computing community have tended to focus on improving classifier accuracy. However, turning to the single best classifier is not always the best option for obtaining the best classification quality. An alternative is to use many inaccurate or weak classifiers, each specialized for a sub-space of the problem space, and to use their consensus vote as the final classifier. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning. It deals especially with multiclass problems, whose aim is to learn the boundaries of each class from many other classes. Based on the structure of multiclass problems, classifiers are divided into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another. Because pairwise classifiers train only to discriminate between two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus classifiers on the erroneous regions of the problem and to use the pairwise classification concept instead of the multiclass classification concept. Although using pairwise classification instead of multiclass classification is not new, we propose a new pairwise classifier ensemble of much lower order. In this paper, the most confused classes are first determined and then several ensembles of classifiers are created. The classifiers of each of these ensembles jointly work using majority weighted votes. The results of these ensembles...
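    The pairwise (one-vs-one) decomposition discussed above trains one binary classifier per class pair and lets them vote. A minimal unweighted-vote sketch with hypothetical threshold learners (the paper itself uses weighted majority voting):

```python
from collections import Counter
from itertools import combinations

def one_vs_one_predict(x, classes, pairwise):
    """Pairwise (one-vs-one) decomposition: each binary classifier votes for one
    of its two classes; the plurality winner is the final label."""
    votes = [pairwise[(a, b)](x) for a, b in combinations(classes, 2)]
    return Counter(votes).most_common(1)[0][0]

# Toy 1-D problem with three classes separated by thresholds (hypothetical learners).
pairwise = {
    ("A", "B"): lambda x: "A" if x < 1.0 else "B",
    ("A", "C"): lambda x: "A" if x < 2.0 else "C",
    ("B", "C"): lambda x: "B" if x < 2.0 else "C",
}
print(one_vs_one_predict(1.5, ["A", "B", "C"], pairwise))  # B
```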

  3. Velocity dependence of CO and CH4 electron removal and fragmentation caused by fast proton impact

    International Nuclear Information System (INIS)

    Cross sections of the breakup channels of CO and CH4, caused by 1-14 MeV proton impact, have been measured. The total cross sections for single to triple electron removal are in reasonable agreement with SCA calculations. The production cross sections for CO+ and CH4+ ions are in good agreement with electron impact ionization at the same high velocities. Proton and electron impact are expected to be the same at high velocities, where the first Born approximation is valid. At lower velocities the proton impact cross sections are in general higher than the electron impact data. The ion-neutral breakup channels show similar trends, but ion-pair channels have a different velocity dependence. The effect of the charge sign and the projectile mass on the fragmentation of doubly ionized molecules (ion pairs) needs further study because electron data on ion-pair production are scarce

  4. Positive role of nitrite as electron acceptor on anoxic denitrifying phosphorus removal process

    Institute of Scientific and Technical Information of China (English)

    HUANG RongXin; LI Dong; LI XiangKun; BAO LinLin; JIANG AnXi; ZHANG Jie

    2007-01-01

    Previous studies reported that nitrite as an electron acceptor can be inhibitory or even toxic in the denitrifying phosphorus removal process. Batch tests were used to investigate this inhibitory effect under anoxic conditions. The inoculated activated sludge was taken from a continuous double-sludge denitrifying phosphorus and nitrogen removal system, and nitrite was added at the anoxic stage, either as a one-time injection or as sequencing batch injections during the denitrifying dephosphorus procedure. The results indicated that nitrite concentrations higher than 30 mg/L severely inhibit anoxic phosphate uptake, and that the threshold inhibitory concentration depends on the characteristics of the activated sludge and the operating conditions. Below the inhibitory concentration, however, nitrite was not detrimental to anoxic phosphorus uptake and acted as a good electron acceptor for anoxic phosphate accumulation; positive effects on denitrifying biological dephosphorization were observed throughout. The utility of nitrite as a good electron acceptor may provide a new, feasible route for the denitrifying phosphorus removal process.

  5. Experimental facility for investigation of gaseous pollutants removal process stimulated by electron beam and microwave energy

    International Nuclear Information System (INIS)

    A laboratory unit for the investigation of toxic gases removal from flue gases based on an ILU 6 accelerator has been built at the Institute of Nuclear Chemistry and Technology. This installation was provided with independent pulsed and continuous wave (c.w.) microwave generators to create electrical discharge and another pulsed microwave generator for plasma diagnostics. This makes it possible to investigate a combined removal process based on the simultaneous use of the electron beam and streams of microwave energy in one reaction vessel. Two heating furnaces, each of them being a water-tube boiler with 100 kW thermal power, were applied for the production of combustion gas at flow rates of 5-400 Nm3/h. Proper composition of the flue gas was obtained by introducing such components as SO2, NO and NH3 to the gas stream. The installation consists of: an inlet system (two boilers - house heating furnace, boiler pressure regulator, SO2, NO and NH3 dosage system, analytical equipment); a reaction vessel where the electron beam from the ILU 6 accelerator and microwave streams from the pulsed and c.w. generators can be introduced simultaneously or separately and a plasma-diagnostic pulsed microwave stream can be applied; and an outlet system (retention chamber, filtration unit, fan, off-take duct of gas, analytical equipment). The experiments have demonstrated that it is possible to investigate the removal process in the presence of NH3 by separate or simultaneous application of the electron beam and of microwave energy streams under stable experimental conditions. (author). 15 refs, 26 figs, 5 tabs

  6. Experimental facility for investigation of gaseous pollutants removal process stimulated by electron beam and microwave energy

    Energy Technology Data Exchange (ETDEWEB)

    Zimek, Z.; Chmielewski, A.G.; Bulka, S.; Roman, K.; Licki, J. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1994-12-31

    A laboratory unit for the investigation of toxic gases removal from flue gases based on an ILU 6 accelerator has been built at the Institute of Nuclear Chemistry and Technology. This installation was provided with independent pulsed and continuous wave (c.w.) microwave generators to create electrical discharge and another pulsed microwave generator for plasma diagnostics. This makes it possible to investigate a combined removal process based on the simultaneous use of the electron beam and streams of microwave energy in one reaction vessel. Two heating furnaces, each of them being a water-tube boiler with 100 kW thermal power, were applied for the production of combustion gas at flow rates of 5-400 Nm3/h. Proper composition of the flue gas was obtained by introducing such components as SO2, NO and NH3 to the gas stream. The installation consists of: an inlet system (two boilers - house heating furnace, boiler pressure regulator, SO2, NO and NH3 dosage system, analytical equipment); a reaction vessel where the electron beam from the ILU 6 accelerator and microwave streams from the pulsed and c.w. generators can be introduced simultaneously or separately and a plasma-diagnostic pulsed microwave stream can be applied; and an outlet system (retention chamber, filtration unit, fan, off-take duct of gas, analytical equipment). The experiments have demonstrated that it is possible to investigate the removal process in the presence of NH3 by separate or simultaneous application of the electron beam and of microwave energy streams under stable experimental conditions. (author). 15 refs, 26 figs, 5 tabs.

  7. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.

    1990-11-01

    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  8. The Effect of Fragaria vesca Extract on Smear Layer Removal: A Scanning Electron Microscopic Evaluation

    Science.gov (United States)

    Davoudi, Amin; Razavi, Sayed Alireza; Mosaddeghmehrjardi, Mohammad Hossein; Tabrizizadeh, Mehdi

    2015-01-01

    Introduction: Successful endodontic treatment depends on elimination of the microorganisms through chemomechanical debridement. The aim of this in vitro study was to evaluate the effectiveness of Fragaria vesca (wild strawberry) extract (FVE) on the removal of smear layer (SL). Methods and Materials: In this analytical-observational study, 40 extracted mandibular and maxillary human teeth were selected. After canal preparation with standard step-back technique, the teeth were randomly divided into 4 groups according to the irrigation solution: saline (negative control), 5.25% NaOCl+EDTA (positive control), FVE and FVE+EDTA. The teeth were split longitudinally so that scanning electron microscopy (SEM) photomicrographs could be taken to evaluate the amount of remnant SL in coronal, middle and apical thirds. The data were analyzed statistically by the Kruskal-Wallis and Mann Whitney U tests and the level of significance was set at 0.05. Results: Significant differences were found among the groups (P<0.001). The use of NaOCl+EDTA was the most effective regimen for removing the SL followed by FVE+EDTA. FVE alone was significantly more effective than saline (P<0.001). Conclusion: FVE with and without EDTA could effectively remove the smear layer; however, compared to NaOCl group it was less effective. PMID:26526069
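    The nonparametric analysis described above (a Kruskal-Wallis test across all irrigation groups, followed by pairwise Mann-Whitney U comparisons at α = 0.05) can be sketched with SciPy. The smear-layer scores below are made-up illustrative values, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical ordinal remnant-smear-layer scores (0 = clean, 4 = heavy)
groups = {
    "saline":     rng.integers(3, 5, 10),
    "NaOCl+EDTA": rng.integers(0, 2, 10),
    "FVE":        rng.integers(2, 4, 10),
    "FVE+EDTA":   rng.integers(1, 3, 10),
}

# Omnibus test across all four irrigation groups
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

# Post-hoc pairwise comparison only if the omnibus test is significant
if p < 0.05:
    u, p2 = stats.mannwhitneyu(groups["saline"], groups["FVE"])
    print(f"saline vs FVE: U={u:.1f}, p={p2:.4f}")
```

    In practice the pairwise p-values would also be corrected for multiple comparisons (e.g. Bonferroni), which the abstract does not specify.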

  9. Removal of brominated flame retardant from electrical and electronic waste plastic by solvothermal technique

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Cong-Cong [Research Center For Eco-Environmental Sciences, Chinese Academy of Sciences, 18 Shuangqing Road, Beijing 100085 (China); Zhang, Fu-Shen, E-mail: fszhang@rcees.ac.cn [Research Center For Eco-Environmental Sciences, Chinese Academy of Sciences, 18 Shuangqing Road, Beijing 100085 (China)

    2012-06-30

    Highlights: • A process for brominated flame retardant (BFR) removal from plastic was established. • The plastic became bromine-free with its structure maintained after treatment. • BFRs transferred into the alcohol solvent were readily debrominated by metallic copper. - Abstract: Brominated flame retardants (BFRs) in electrical and electronic (E and E) waste plastic are toxic, bioaccumulative and recalcitrant. In the present study, tetrabromobisphenol A (TBBPA) contained in this type of plastic was subjected to solvothermal treatment so as to obtain bromine-free plastic. Methanol, ethanol and isopropanol were examined as solvents for the solvothermal treatment, and methanol was found to be the optimal solvent for TBBPA removal. The optimum temperature, time and liquid-to-solid ratio for solvothermal removal of TBBPA were 90 °C, 2 h and 15:1, respectively. After treatment with the various alcohol solvents, TBBPA was found to have transferred into the solvents, and bromine in the extract was debrominated under catalysis by metallic copper. Bisphenol A and cuprous bromide were the main products after debromination. The morphology and FTIR properties of the plastic were essentially unchanged after the solvothermal treatment, indicating that the structure of the plastic was maintained through the process. This work provides a clean and applicable process for the disposal of BFR-containing plastic.

  10. Single- and Multiple-Electron Removal Processes in Proton-Water Vapor Collisions

    Science.gov (United States)

    Murakami, Mitsuko; Kirchner, Tom; Horbatsch, Marko; Jürgen Lüdde, Hans

    2012-06-01

    Charge-state correlated cross sections for single- and multiple-electron removal processes due to capture and ionization in proton-H2O collisions are calculated by using the non-perturbative basis generator method adapted for ion-molecule collisions [1]. Orbital-specific cross sections for vacancy production are evaluated using this method to predict the yields of charged fragments (H2O^+, OH^+, H^+, O^+) according to branching ratios known to be valid at high impact energies. At intermediate and low energies, we obtain fragmentation results on the basis of predicted multi-electron removal cross sections, and explain most of the available experimental data [2]. The cross sections for charge transfer and for ionization are also compared with recent multi-center classical-trajectory Monte Carlo calculations [3] for impact energies from 20 keV to several MeV. [1] H.J. Lüdde et al., Phys. Rev. A 80, 060702(R) (2009) [2] M. Murakami et al., to be submitted to Phys. Rev. A (2012) [3] C. Illescas et al., Phys. Rev. A 83, 052704 (2011)

  11. Removal of diclofenac from surface water by electron beam irradiation combined with a biological aerated filter

    Science.gov (United States)

    He, Shijun; Wang, Jianlong; Ye, Longfei; Zhang, Youxue; Yu, Jiang

    2014-12-01

    The degradation of diclofenac (DCF) was investigated in aqueous solution by using electron beam (EB) technology. When the initial concentration was between 10 and 40 mg/L, almost 100% of the DCF was degraded at a dose of 0.5 kGy. However, only about 6.5% of the DCF was mineralized even at 2 kGy, according to total organic carbon (TOC) measurements. A combined process of EB and a biological aerated filter (BAF) was therefore developed to enhance the treatment of DCF-contaminated surface water. The effluent quality of the combined process was substantially improved by EB pretreatment due to the degradation of DCF and related intermediates. Both irradiation and biological treatment reduced the toxicity of the treated water. The experimental results showed that EB is effective for removing DCF from artificial aqueous solutions and real surface water.

  12. Removal of impurities from metallurgical grade silicon by electron beam melting

    International Nuclear Information System (INIS)

    Solar cells are currently fabricated from a variety of silicon-based materials. At present the major silicon feedstock for solar cells is scrap electronic-grade silicon (EG-Si), but a steady supply of this material is difficult to secure in the current market, so alternative production processes are needed to increase the feedstock. In this paper, electron beam melting (EBM) is used to purify silicon. Leached metallurgical-grade silicon (MG-Si) particles with an initial purity of 99.88% by mass were used as the starting material. The final purity of the silicon disk obtained after EBM was above 99.995% by mass. This result demonstrates that EBM can effectively remove impurities from silicon. This paper mainly studies the impurity distribution in the silicon disk after EBM. (semiconductor materials)

  13. Evaluation of sustained release polylactate electron donors for removal of hexavalent chromium from contaminated groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, E.L.; Joyner, D. C.; Faybishenko, B.; Conrad, M. E.; Rios-Velazquez, C.; Mork, B.; Willet, A.; Koenigsberg, S.; Herman, D.; Firestone, M. K.; Hazen, T. C.; Malave, Josue; Martinez, Ramon

    2011-02-15

    To evaluate the efficacy of bioimmobilization of Cr(VI) in groundwater at the Department of Energy Hanford site, we conducted a series of microcosm experiments using a range of commercial electron donors with varying degrees of lactate polymerization (polylactate). These experiments were conducted using Hanford Formation sediments (coarse sand and gravel) immersed in Hanford groundwater, which were amended with Cr(VI) and several types of lactate-based electron donors (Hydrogen Release Compound, HRC; primer-HRC, pHRC; extended release HRC) and the polylactate-cysteine form (Metal Remediation Compound, MRC). The results showed that polylactate compounds stimulated an increase in bacterial biomass and activity to a greater extent than sodium lactate when applied at equivalent carbon concentrations. At the same time, concentrations of headspace hydrogen and methane increased and correlated with changes in the microbial community structure. Enrichment of Pseudomonas spp. occurred with all lactate additions, and enrichment of sulfate-reducing Desulfosporosinus spp. occurred with almost complete sulfate reduction. The results of these experiments demonstrate that amendment with the pHRC and MRC forms result in effective removal of Cr(VI) from solution most likely by both direct (enzymatic) and indirect (microbially generated reductant) mechanisms.

  14. Quantum-mechanical calculation of multiple electron removal and fragmentation cross sections in He+-H2O collisions

    Science.gov (United States)

    Murakami, Mitsuko; Kirchner, Tom; Horbatsch, Marko; Lüdde, Hans Jürgen

    2012-08-01

    Electron removal and fragmentation cross sections are calculated for He+(1s)-H2O collisions at impact energies from 20 keV/amu to several MeV/amu by using the nonperturbative basis generator method for ion-molecule collisions. Previous work for proton impact is extended to deal with the dressed projectile in the present case. The effects from the active projectile electron are taken into account by applying the same single-particle Hamiltonian to all electrons and by using the inclusive-probability formalism in the final-state analysis. Fragment-ion yields are evaluated from the single-, double-, and triple-electron removal cross sections, and the results are compared with the available experimental data. Very reasonable agreement is obtained for fragmentation caused by direct ionization, while some discrepancies remain in the capture and loss data.

  15. Enhancing the Electron Transfer Capacity and Subsequent Color Removal in Bioreactors by Applying Thermophilic Anaerobic Treatment and Redox Mediators

    NARCIS (Netherlands)

    Santos, dos A.B.; Traverse, J.; Cervantes, F.J.; Lier, van J.B.

    2005-01-01

    The effect of temperature, hydraulic retention time (HRT) and the redox mediator anthraquinone-2,6-disulfonate (AQDS), on electron transfer and subsequent color removal from textile wastewater was assessed in mesophilic and thermophilic anaerobic bioreactors. The results clearly show that compared w

  16. Scanning electron microscopic study of the surface of feline gastric epithelium: a simple method of removing the coating material.

    Science.gov (United States)

    Al-Tikriti, M; Henry, R W; Al-Bagdadi, F K; Hoskins, J; Titkemeyer, C

    1986-01-01

    Scanning electron microscopic examination of the gastric surface epithelial cells is often hindered by the presence of a coating material. Several methods for removal of coating material on feline gastric mucosa were utilized. The cleansed tissues were evaluated using the scanning electron microscope to assess damage caused by the use of various cleansing methods to surface epithelial cells. The stretched stomach washed several times, including rubbing the mucosal surface with gloved fingers, yielded the best results with no apparent damage to the surface epithelial cells. Flushing unstretched stomachs with saline only did not adequately remove coating material. Flushing unstretched stomachs with saline while stroking the surface with a cotton tipped applicator stick removed debris but damaged the surface epithelium.

  17. Role of aqueous electron and hydroxyl radical in the removal of endosulfan from aqueous solution using gamma irradiation

    International Nuclear Information System (INIS)

    Highlights: • Removal of endosulfan was assessed by gamma irradiation under different conditions. • Removal of endosulfan by gamma irradiation was mainly due to reaction with the aqueous electron. • The radiation yield decreased while the dose constant increased with increasing gamma-ray dose rate. • The second-order rate constant of endosulfan with the aqueous electron was determined by the competition kinetics method. • Degradation pathways were proposed from the nature of the identified by-products. - Abstract: The removal of endosulfan, an emerging water pollutant, from water was investigated using gamma irradiation based advanced oxidation and reduction processes (AORPs). A significant removal, 97% of an initial 1.0 μM endosulfan, was achieved at an absorbed dose of 1020 Gy. The removal of endosulfan by gamma-ray irradiation was influenced by the absorbed dose and significantly increased in the presence of the aqueous electron (eaq−). The efficiency of the process was inhibited, however, in the presence of eaq− scavengers such as N2O, NO3−, acid, and Fe3+. The observed dose constant decreased while the radiation yield (G-value) increased with increasing initial concentration of the target contaminant and decreasing dose rate. The removal efficiency of endosulfan II was lower than that of endosulfan I. A degradation mechanism of endosulfan by the AORPs was proposed in which reductive pathways involving eaq− start at the chlorine attached to the ring, while the oxidative pathway is initiated by attack of the hydroxyl radical at the S=O bond. The mass balance showed 95% loss of chloride from endosulfan at an absorbed dose of 1020 Gy. The formation of chloride and acetate suggests that gamma irradiation based AORPs are potential methods for the removal of endosulfan and its by-products from contaminated water.
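    The "dose constant" reported in radiolysis studies like this one is commonly obtained from a pseudo-first-order fit in dose, C = C0·exp(−k·D). A hedged sketch on synthetic data (the dose constant 0.0034 Gy⁻¹ is chosen so the numbers resemble the abstract's 97% removal at 1020 Gy; they are not the paper's measurements):

```python
import numpy as np

doses = np.array([0, 200, 400, 600, 800, 1020], dtype=float)  # absorbed dose, Gy
conc = 1.0 * np.exp(-0.0034 * doses)                          # synthetic endosulfan conc., uM

# Linearize ln(C/C0) = -k*D and fit the dose constant k by least squares
slope, _ = np.polyfit(doses, np.log(conc / conc[0]), 1)
k = -slope
removal = 1 - conc[-1] / conc[0]
print(f"dose constant k = {k:.4f} 1/Gy, removal at 1020 Gy = {removal:.0%}")
```

    The radiation yield (G-value) would follow from the same data as moles of endosulfan removed per unit of absorbed energy, which is why it moves inversely to the dose constant when the initial concentration changes.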

  18. Diagnosis of cervical cancer cell taken from scanning electron and atomic force microscope images of the same patients using discrete wavelet entropy energy and Jensen Shannon, Hellinger, Triangle Measure classifier

    Science.gov (United States)

    Aytac Korkmaz, Sevcan

    2016-05-01

    The aim of this article is to provide early detection of cervical cancer by using both Atomic Force Microscope (AFM) and Scanning Electron Microscope (SEM) images of the same patient. An examination of the literature shows that AFM and SEM images of the same patient have not previously been used together for early diagnosis of cervical cancer, and either modality alone can be limiting for early detection. Therefore, a multi-modality solution, which gives more accurate results than single-modality solutions, is realized in this paper. An optimum feature space was obtained by applying Discrete Wavelet Entropy Energy (DWEE) to the 3 × 180 AFM and SEM images. The optimum features of these images were then classified with the Jensen-Shannon, Hellinger, and Triangle Measure (JHT) classifier for early diagnosis of cervical cancer, and the Jensen-Shannon, Hellinger, and triangle distance measures were validated via the relationships among them. Afterwards, the diagnostic accuracy for normal, benign, and malignant cervical cancer cells was found by combining the mean success rates of these interrelated measures. The average diagnostic accuracies for the AFM and SEM images, obtained by averaging the results of the three classifiers, are 98.29% and 97.10%, respectively. AFM images thus show higher performance than SEM images for early diagnosis of cervical cancer. The analysis of the AFM images also shows that, relative to normal and benign AFM images, the surface roughness of malignant AFM images is larger while the particle volume is smaller.
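    Wavelet entropy-energy features of the kind used above can be sketched with a one-level orthonormal Haar transform in plain NumPy: the feature is the total coefficient energy plus the Shannon entropy of the normalized energy distribution. This is an illustrative reconstruction; the paper's exact DWEE pipeline (wavelet family, decomposition depth) is not specified here and may differ.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def entropy_energy(x):
    """Total wavelet energy and Shannon entropy of the energy distribution."""
    a, d = haar_dwt(x)
    energy = np.sum(a**2) + np.sum(d**2)       # Parseval: equals sum(x**2)
    p = np.concatenate([a**2, d**2]) / energy  # normalized energy per coefficient
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))          # in bits
    return energy, entropy

sig = np.sin(np.linspace(0, 8 * np.pi, 256))   # stand-in for one image row
e, h = entropy_energy(sig)
print(f"energy={e:.2f}, entropy={h:.2f} bits")
```

    Because the Haar transform is orthonormal, the coefficient energy equals the signal energy, so the entropy term is what actually distinguishes textures of different roughness.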

  19. Enhanced biological phosphorus removal. Carbon sources, nitrate as electron acceptor, and characterization of the sludge community

    Energy Technology Data Exchange (ETDEWEB)

    Christensson, M.

    1997-10-01

    Enhanced biological phosphorus removal (EBPR) was studied in laboratory scale experiments as well as in a full scale EBPR process. The studies were focused on carbon source transformations, the use of nitrate as an electron acceptor and characterisation of the microflora. A continuous anaerobic/aerobic laboratory system was operated on synthetic wastewater with acetate as sole carbon source. An efficient EBPR was obtained and mass balances over the anaerobic reactor showed a production of 1.45 g poly-β-hydroxyalkanoic acids (PHA), measured as chemical oxygen demand (COD), per g of acetic acid (as COD) taken up. Furthermore, phosphate was released in the anaerobic reactor in a ratio of 0.33 g phosphorus (P) per g PHA (COD) formed and 0.64 g of glycogen (COD) was consumed per g of acetic acid (COD) taken up. Microscopic investigations revealed a high amount of polyphosphate accumulating organisms (PAO) in the sludge. Isolation and characterisation of bacteria indicated Acinetobacter spp. to be abundant in the sludge, while sequencing of clones obtained in a 16S rDNA clone library showed a large part of the bacteria to be related to the high mole % G+C Gram-positive bacteria and only a minor fraction to be related to the gamma-subclass of proteobacteria to which Acinetobacter belongs. Operation of a similar anaerobic/aerobic laboratory system with ethanol as sole carbon source showed that a high EBPR can be achieved with this compound as carbon source. However, a prolonged detention time in the anaerobic reactor was required. PHA were produced in the anaerobic reactor in an amount of 1.24 g COD per g of soluble DOC taken up, phosphate was released in an amount of 0.4-0.6 g P per g PHA (COD) produced and 0.46 g glycogen (COD) was consumed per g of soluble COD taken up. Studies of the EBPR in the UCT process at the sewage treatment plant in Helsingborg, Sweden, showed the amount of volatile fatty acids (VFA) available to the PAO in the anaerobic stage to be

  20. Recognition Using Hybrid Classifiers.

    Science.gov (United States)

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply. PMID:26959677

  1. Dynamic system classifier

    CERN Document Server

    Pumpe, Daniel; Müller, Ewald; Enßlin, Torsten A

    2016-01-01

    Stochastic differential equations describe well many physical, biological and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of DSC to oscillation processes with a time dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiment...
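    The model class the DSC abstracts, a noise-driven oscillator x'' + 2γ(t)x' + ω(t)²x = ξ(t), can be simulated with a symplectic Euler scheme. This sketch only illustrates generating data from the model; the specific ω(t), γ(t), and noise amplitude are assumptions, and the Bayesian inference step of the DSC is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n = 1e-3, 20000                              # 20 s at 1 ms steps

omega = lambda t: 2 * np.pi * (1.0 + 0.2 * t)    # slowly rising frequency (assumed)
gamma = lambda t: 0.3                            # constant damping (assumed)
sigma = 0.5                                      # white-noise amplitude (assumed)

x, v = 1.0, 0.0
xs = np.empty(n)
for i in range(n):
    t = i * dt
    # Damped harmonic oscillator driven by white noise (Euler-Maruyama,
    # velocity updated first so the position step uses the new velocity)
    a = -2 * gamma(t) * v - omega(t) ** 2 * x
    v += a * dt + sigma * np.sqrt(dt) * rng.normal()
    x += v * dt
    xs[i] = x

print(f"late-time amplitude envelope ~ {np.abs(xs[-2000:]).max():.3f}")
```

    A classifier in this framework would fit ω(t) and γ(t) timelines to such traces and compare them across system classes, rather than working on the raw signal.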

  2. New method to remove the electronic noise for absolutely calibrating low gain photomultiplier tubes with a higher precision

    Science.gov (United States)

    Zhang, Xiaodong; Hayward, Jason P.; Laubach, Mitchell A.

    2014-08-01

    A new method to remove electronic noise in order to absolutely calibrate low-gain photomultiplier tubes with higher precision is proposed and validated with experiments using a digitizer-based data acquisition system. This method utilizes the fall-time difference between the electronic noise (about 0.5 ns) and the real PMT signal (about 2.4 ns for the Hamamatsu H10570 PMT assembly). Using this technique along with a convolution algorithm, the electronic noise and the real signals are separated very well, including even the very small signals heavily influenced by the electronic noise. One application this method enables is exploring the energy relationship for gamma sensing in Cherenkov radiators while maintaining the fastest possible timing performance and a high dynamic range.
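    The core of the fall-time discrimination can be sketched as follows: generate pulses with the two quoted fall times and cut on a 90%-to-10% fall-time estimate. The waveform shapes, sampling rate, and threshold are illustrative assumptions, not the paper's digitizer settings or convolution algorithm.

```python
import numpy as np

t = np.arange(0, 20, 0.1)   # ns; 100 ps sampling, as with a fast digitizer

def pulse(tau):
    """Unit-amplitude pulse with exponential fall time tau (ns)."""
    return np.exp(-t / tau)

def fall_time(w):
    """Time for the waveform to drop from 90% to 10% of its peak."""
    peak = w.max()
    t90 = t[np.argmax(w <= 0.9 * peak)]
    t10 = t[np.argmax(w <= 0.1 * peak)]
    return t10 - t90

for name, tau in [("electronic noise", 0.5), ("PMT signal", 2.4)]:
    ft = fall_time(pulse(tau))
    kind = "noise" if ft < 1.5 else "signal"   # simple threshold cut (assumed)
    print(f"{name}: fall time {ft:.1f} ns -> classified as {kind}")
```

    For a pure exponential the 90-10 fall time is about 2.2·tau, so the ~0.5 ns and ~2.4 ns populations separate cleanly even with modest sampling.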

  3. Increased electric sail thrust through removal of trapped shielding electrons by orbit chaotisation due to spacecraft body

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2009-08-01

    An electric solar wind sail is a recently introduced propellantless space propulsion method whose technical development has also started. The electric sail consists of a set of long, thin, centrifugally stretched and conducting tethers which are charged positively and kept at a high positive potential of order 20 kV by an onboard electron gun. The positively charged tethers deflect solar wind protons, thus tapping momentum from the solar wind stream and producing thrust. The amount of propulsive thrust obtained depends on how many electrons are trapped by the potential structures of the tethers, because the trapped electrons tend to shield the charged tether and reduce its effect on the solar wind. Here we present physical arguments and test particle calculations indicating that in a realistic three-dimensional electric sail spacecraft there exists a natural mechanism which tends to remove the trapped electrons by chaotising their orbits and causing them to eventually collide with the conducting tethers. We present calculations which indicate that if these mechanisms were able to remove trapped electrons nearly completely, the electric sail performance could be about five times higher than previously estimated, about 500 nN/m, corresponding to 1 N thrust for a baseline construction with 2000 km total tether length.

  4. E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers

    Science.gov (United States)

    Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.

    2005-01-01

    Electronic nose (e-nose) vapor identification is an efficient approach to monitoring air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements, and are used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k-nearest neighbor (KNN), and Parzen classifiers. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers into the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and that the classification rate can be improved by using classifier fusion.
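    The fusion step can be sketched with Dempster's rule of combination. For simplicity this sketch assigns belief mass only to singleton hypotheses (full Dempster-Shafer theory works on all subsets of the frame of discernment), and the mass values and class names are illustrative, not e-nose data.

```python
def dempster(m1, m2):
    """Combine two mass functions defined over the same singleton hypotheses."""
    keys = m1.keys()
    # Conflict: total mass assigned to incompatible hypothesis pairs
    conflict = sum(m1[a] * m2[b] for a in keys for b in keys if a != b)
    # Dempster's rule: agreeing mass, renormalized by (1 - conflict)
    return {a: m1[a] * m2[a] / (1 - conflict) for a in keys}

mlp = {"clean": 0.7, "contaminant": 0.3}   # belief masses from classifier 1
svm = {"clean": 0.6, "contaminant": 0.4}   # belief masses from classifier 2
fused = dempster(mlp, svm)
print(fused)
```

    When the classifiers agree, the fused belief in the shared hypothesis exceeds either individual belief, which is the property that lets DS fusion lift the overall classification rate.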

  5. Study on decomposition and removal of organic pollutants in gases using electron beams

    International Nuclear Information System (INIS)

    Volatile organic compounds (VOC) used as solvents and degreasing agents have been emitted to the atmosphere, where they are subsequently oxidized into toxic photochemical oxidants. Reduction of VOC emissions is required under laws and regulations for factories and plants that use large amounts of VOC. Electron beam (EB) treatment is suitable for the purification of high-flow-rate ventilation air containing dilute VOC emitted from such facilities. Purification processes for such ventilation air have been developed based on the decomposition reactions and property changes of VOC. Results for chloroethylenes and aromatic hydrocarbons, which are emitted in abundant quantities, are presented in this paper. Chloroethylenes, except for monochloroethylene, were oxidized into water-soluble primary products through chain reactions in EB-irradiated humid air. The chain oxidation reactions of these chloroethylenes were initiated exclusively by reaction with OH radicals, not by electron-attachment dissociation under EB irradiation. Gas-phase termination reactions involved the bimolecular reaction of alkylperoxyl radicals for tri- and dichloroethylenes, and, in addition to that bimolecular reaction, the reaction of alkylperoxyl radicals with alkyl radicals for tetrachloroethylene. Deposition of the alkylperoxyl radicals on the irradiation vessel wall also terminated the chain oxidation reactions. This solid-phase termination was negligible compared with the gas-phase terminations under high-dose-rate irradiation, so the oxidation of chloroethylenes was achieved with lower doses under high-dose-rate irradiation such as EB irradiation. Hydrolysis of the primary products combined with EB irradiation is a promising route for purifying chloroethylene/air mixtures at lower doses.
Under irradiation of aromatic hydrocarbons/air mixtures, toxic and oxidation-resistant particles with mean diameters of a few

  6. Classifying Cereal Data

    Science.gov (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.
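The two-step coding the abstract describes (hot vs. cold, then four density dimensions) can be sketched as a small routine. The field names and cutoff values below are invented for illustration and are not the DSQ's actual scoring rules:

```python
def classify_cereal(record):
    """Code one reported cereal as (temperature, dict of dimension levels).
    Thresholds and keys are illustrative, not the DSQ's real cutoffs."""
    levels = lambda v, lo, hi: "low" if v < lo else "high" if v >= hi else "medium"
    return (
        "hot" if record["hot"] else "cold",
        {
            # Four density dimensions per serving (hypothetical units/cutoffs)
            "added_sugars": levels(record["sugar_g"], 6, 12),
            "whole_grains": levels(record["whole_grain_g"], 8, 16),
            "fiber": levels(record["fiber_g"], 3, 6),
            "calcium": levels(record["calcium_mg"], 100, 300),
        },
    )

# Example report: a cold, sweetened whole-grain cereal
sample = {"hot": False, "sugar_g": 13, "whole_grain_g": 20, "fiber_g": 7, "calcium_mg": 120}
print(classify_cereal(sample))
```

Up to two such records per respondent would then be coded independently with the same routine.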

  7. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is extremely low, comparable to that of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, precluding a direct comparison of its energy consumption with that of conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions that enable the practical implementation of neural architectures competing with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers of comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be deployed flexibly for either fast, approximate classifications or more accurate classifications at the expense of longer running times and higher energy costs. This work proves that the neuromorphic approach can be used efficiently in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  9. Removal of cadmium ions from wastewater using innovative electronic waste-derived material

    International Nuclear Information System (INIS)

    Highlights: • A novel adsorbent material derived from a component of waste printed circuit boards. • The adsorbent can effectively remove cadmium ions from aqueous solutions. • The maximum capacity for cadmium ion removal is 2.1 mmol/g. • The cadmium removal capacity is equivalent or superior to that of commercial resins. - Abstract: Cadmium is a highly toxic heavy metal even at trace levels. In this study, a novel material derived from waste PCBs was applied as an adsorbent to remove cadmium ions from aqueous solutions. The effects of various factors, including contact time, initial cadmium ion concentration, pH and adsorbent dosage, were evaluated. The maximum uptake capacity of the newly derived material for cadmium ions reached 2.1 mmol/g at an initial pH of 4, showing that this material can effectively remove cadmium ions from effluent. The equilibrium isotherm was analyzed using several isotherm equations and is best described by the Redlich–Peterson model. Furthermore, different commercial adsorbent resins were studied for comparison. The results confirm that this activated material is highly competitive with its commercial counterparts.
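The Redlich–Peterson isotherm named in the abstract has the form q_e = K·C_e / (1 + a·C_e^g); for the exponent g = 1 it reduces to the Langmuir isotherm with q_max = K/a. A minimal numerical sketch follows; the parameter values are chosen for illustration only (picked so the plateau equals the reported 2.1 mmol/g, not fitted to the paper's data):

```python
def redlich_peterson(Ce, K, a, g):
    """q_e = K*Ce / (1 + a*Ce**g), with 0 < g <= 1."""
    return K * Ce / (1.0 + a * Ce ** g)

def langmuir(Ce, qmax, b):
    """q_e = qmax*b*Ce / (1 + b*Ce), the g = 1 special case."""
    return qmax * b * Ce / (1.0 + b * Ce)

# Illustrative parameters: with g = 1, R-P collapses to Langmuir
# with qmax = K/a = 2.1 mmol/g and b = a.
K, a = 4.2, 2.0
for Ce in (0.05, 0.5, 2.0, 10.0):
    assert abs(redlich_peterson(Ce, K, a, 1.0) - langmuir(Ce, K / a, a)) < 1e-12

# At large Ce the uptake saturates near qmax = K/a = 2.1 mmol/g,
# matching the plateau behaviour described in the abstract.
print(redlich_peterson(1e6, K, a, 1.0))  # ≈ 2.1
```

In practice the three parameters would be fitted to measured (C_e, q_e) pairs, e.g. by nonlinear least squares.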

  10. Intelligent Garbage Classifier

    OpenAIRE

    Ignacio Rodríguez Novelle; Javier Pérez Cid; Alvaro Salmador

    2008-01-01

    IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  11. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  12. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  13. Impact of the amount of working fluid in loop heat pipe to remove waste heat from electronic component

    Science.gov (United States)

    Smitka, Martin; Kolková, Z.; Nemec, Patrik; Malcho, M.

    2014-03-01

    One option for removing waste heat from electronic components is a loop heat pipe. The loop heat pipe (LHP) is a two-phase device with high effective thermal conductivity that uses phase change to transport heat. It was invented in Russia in the early 1980s. The main parts of an LHP are an evaporator, a condenser, a compensation chamber, and vapor and liquid lines. Only the evaporator and part of the compensation chamber are equipped with a wick structure. The loop heat pipe contains a working fluid, such as distilled water, acetone, ammonia or methanol. The filling amount is important for the operation and performance of the LHP. This work deals with the design of a loop heat pipe and the impact of the working fluid filling ratio on the removal of waste heat from an insulated gate bipolar transistor (IGBT).

  14. Impact of the amount of working fluid in loop heat pipe to remove waste heat from electronic component

    Directory of Open Access Journals (Sweden)

    Smitka Martin

    2014-03-01

    Full Text Available One option for removing waste heat from electronic components is a loop heat pipe. The loop heat pipe (LHP) is a two-phase device with high effective thermal conductivity that uses phase change to transport heat. It was invented in Russia in the early 1980s. The main parts of an LHP are an evaporator, a condenser, a compensation chamber, and vapor and liquid lines. Only the evaporator and part of the compensation chamber are equipped with a wick structure. The loop heat pipe contains a working fluid, such as distilled water, acetone, ammonia or methanol. The filling amount is important for the operation and performance of the LHP. This work deals with the design of a loop heat pipe and the impact of the working fluid filling ratio on the removal of waste heat from an insulated gate bipolar transistor (IGBT).

  15. Ionic Polymer-Based Removable and Charge-Dissipative Coatings for Space Electronic Applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Protection of critical electronic systems in spacecraft and satellites is imperative for NASA's future missions to high-energy, outer-planet environments. The...

  16. Method and apparatus for removing heat from electronic devices using synthetic jets

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Rajdeep; Weaver, Jr., Stanton Earl; Seeley, Charles Erklin; Arik, Mehmet; Icoz, Tunc; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas

    2014-04-15

    An apparatus for removing heat comprises a heat sink having a cavity, and a synthetic jet stack comprising at least one synthetic jet mounted within the cavity. At least one rod and at least one engaging structure provide rigid positioning of the at least one synthetic jet with respect to the at least one rod. The synthetic jet comprises at least one orifice through which a fluid is ejected.

  17. Method and apparatus for removing heat from electronic devices using synthetic jets

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Rajdeep; Weaver, Stanton Earl; Seeley, Charles Erklin; Arik, Mehmet; Icoz, Tunc; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas

    2015-11-24

    An apparatus for removing heat comprises a heat sink having a cavity, and a synthetic jet stack comprising at least one synthetic jet mounted within the cavity. At least one rod and at least one engaging structure provide rigid positioning of the at least one synthetic jet with respect to the at least one rod. The synthetic jet comprises at least one orifice through which a fluid is ejected.

  18. Effect of high electron donor supply on dissimilatory nitrate reduction pathways in a bioreactor for nitrate removal

    DEFF Research Database (Denmark)

    Behrendt, Anna; Tarre, Sheldon; Beliavski, Michael;

    2014-01-01

    The possible shift of a bioreactor for NO3- removal from predominantly denitrification (DEN) to dissimilatory nitrate reduction to ammonium (DNRA) under elevated electron donor supply was investigated. By increasing the C/NO3- ratio in one of two initially identical reactors, the production of high sulfide concentrations was induced. The response of the dissimilatory NO3- reduction processes to the increased availability of organic carbon and sulfide was monitored in a batch incubation system. The expected shift from a DEN- towards a DNRA-dominated bioreactor was not observed, also not under

  19. Organic substrates as electron donors in permeable reactive barriers for removal of heavy metals from acid mine drainage.

    Science.gov (United States)

    Kijjanapanich, P; Pakdeerattanamint, K; Lens, P N L; Annachhatre, A P

    2012-12-01

    This research was conducted to select suitable natural organic substrates as potential carbon sources for use as electron donors for biological sulphate reduction in a permeable reactive barrier (PRB). A number of organic substrates were assessed through batch and continuous column experiments under anaerobic conditions with acid mine drainage (AMD) obtained from an abandoned lignite coal mine. To keep the heavy metal concentration at a constant level, the AMD was supplemented with heavy metals whenever necessary. Under anaerobic conditions, sulphate-reducing bacteria (SRB) converted sulphate into sulphide using the organic substrates as electron donors. The sulphide that was generated precipitated heavy metals as metal sulphides. Organic substrates, which yielded the highest sulphate reduction in batch tests, were selected for continuous column experiments which lasted over 200 days. A mixture of pig-farm wastewater treatment sludge, rice husk and coconut husk chips yielded the best heavy metal (Fe, Cu, Zn and Mn) removal efficiencies of over 90%. PMID:23437664

  20. Efficient electron-induced removal of oxalate ions and formation of copper nanoparticles from copper(II) oxalate precursor layers.

    Science.gov (United States)

    Rückriem, Kai; Grotheer, Sarah; Vieker, Henning; Penner, Paul; Beyer, André; Gölzhäuser, Armin; Swiderek, Petra

    2016-01-01

    Copper(II) oxalate grown on carboxy-terminated self-assembled monolayers (SAM) using a step-by-step approach was used as precursor for the electron-induced synthesis of surface-supported copper nanoparticles. The precursor material was deposited by dipping the surfaces alternately in ethanolic solutions of copper(II) acetate and oxalic acid with intermediate thorough rinsing steps. The deposition of copper(II) oxalate and the efficient electron-induced removal of the oxalate ions was monitored by reflection absorption infrared spectroscopy (RAIRS). Helium ion microscopy (HIM) reveals the formation of spherical nanoparticles with well-defined size and X-ray photoelectron spectroscopy (XPS) confirms their metallic nature. Continued irradiation after depletion of oxalate does not lead to further particle growth giving evidence that nanoparticle formation is primarily controlled by the available amount of precursor. PMID:27547602

  1. Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment

    Energy Technology Data Exchange (ETDEWEB)

    Smitka, Martin, E-mail: martin.smitka@fstroj.uniza.sk; Nemec, Patrik, E-mail: patrik.nemec@fstroj.uniza.sk; Malcho, Milan, E-mail: milan.malcho@fstroj.uniza.sk [University of Žilina, Faculty of Mechanical Engineering, Department of Power Engineering, Univerzitna 1, 010 26 Žilina (Slovakia)]

    2014-08-06

    The loop heat pipe is a vapour-liquid phase-change device that transfers heat from evaporator to condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of an LHP, capillary wicks with high permeability, high porosity and fine pore radius are desirable. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These wicks were used in an LHP, and a series of measurements was performed on the removal of waste heat from an insulated gate bipolar transistor (IGBT).

  2. Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment

    Science.gov (United States)

    Smitka, Martin; Nemec, Patrik; Malcho, Milan

    2014-08-01

    The loop heat pipe is a vapour-liquid phase-change device that transfers heat from evaporator to condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of an LHP, capillary wicks with high permeability, high porosity and fine pore radius are desirable. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These wicks were used in an LHP, and a series of measurements was performed on the removal of waste heat from an insulated gate bipolar transistor (IGBT).

  3. Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment

    International Nuclear Information System (INIS)

    The loop heat pipe is a vapour-liquid phase-change device that transfers heat from evaporator to condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of an LHP, capillary wicks with high permeability, high porosity and fine pore radius are desirable. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These wicks were used in an LHP, and a series of measurements was performed on the removal of waste heat from an insulated gate bipolar transistor (IGBT).

  4. Evaluation of toxicity and removal of color in textile effluent treated with electron beam

    International Nuclear Information System (INIS)

    The textile industry is among the main industrial activities in Brazil, being relevant in number of jobs, quantity and diversity of products, and especially in the volume of water used in industrial processes and of effluent generated. These effluents are complex mixtures characterized by the presence of dyes, surfactants, metal-sequestering agents, salts and other chemicals potentially toxic to aquatic biota. Given the lack of adequate treatments for these wastes, new technologies are essential, notably advanced oxidation processes such as electron beam ionizing radiation. This study comprised the preparation of a standard textile effluent in a chemical laboratory and its treatment with an electron beam from an electron accelerator, in order to reduce the toxicity and intense colour caused by the C.I. Blue 222 dye. The treatment reduced toxicity to exposed organisms with 34.55% efficiency for the microcrustacean Daphnia similis and 47.83% for the rotifer Brachionus plicatilis at a dose of 2.5 kGy. For the bacterium Vibrio fischeri, the best result, 57.29% efficiency, was obtained at a dose of 5 kGy. Colour reduction was greater than 90% at a dose of 2.5 kGy. Preliminary tests were also carried out on the sensitivity of D. similis and V. fischeri to some of the products used in bleaching and dyeing, together with two simulations of water reuse in new textile processing after treatment of the effluent with the electron beam. (author)

  5. A comparative scanning electron microscopy evaluation of smear layer removal with apple vinegar and sodium hypochlorite associated with EDTA

    Directory of Open Access Journals (Sweden)

    George Táccio de Miranda Candeiro

    2011-12-01

    Full Text Available OBJECTIVE: The purpose of this study was to evaluate by scanning electron microscopy (SEM) the removal of smear layer from the middle and apical root thirds after use of different irrigating solutions. MATERIAL AND METHODS: Forty roots of permanent human teeth had their canals instrumented and were randomly assigned to 4 groups (n=10), according to the irrigating solution: apple vinegar (group A), apple vinegar finished with 17% ethylenediaminetetraacetic acid (EDTA) (group B), 1% sodium hypochlorite (NaOCl) finished with 17% EDTA (group C) and saline (group D - control). After chemomechanical preparation, the roots were cleaved longitudinally and their middle and apical thirds were examined by SEM at ×1,000 magnification. Two calibrated examiners (kappa=0.92) analyzed the SEM micrographs qualitatively, attributing scores that indicated the efficacy of the solutions in removing the smear layer from the surface of the dentin tubules (1 - poor, 2 - good and 3 - excellent). Data from the control and experimental groups were analyzed by the Kruskal-Wallis and Dunn's tests, while the Wilcoxon test was used to compare the middle and apical thirds of the canals within the same group (α=0.05). RESULTS: The middle third presented less smear layer than the apical third, regardless of the irrigant. There was a statistically significant difference (p=0.0402) among the groups in the middle third. In the apical third, the apple vinegar/EDTA group showed the greatest removal of smear layer (p=0.0373). CONCLUSION: Apple vinegar, associated or not with EDTA, was effective in removing the smear layer when used as an endodontic irrigant.

  6. Educating Health Professionals about the Electronic Health Record (EHR): Removing the Barriers to Adoption

    OpenAIRE

    Paule Bellwood; Brian Armstrong; Ronald S. Joe; Elizabeth Borycki; Rebecca Campbell

    2011-01-01

    In the healthcare industry we have had a significant rise in the use of electronic health records (EHRs) in health care settings (e.g. hospital, clinic, physician office and home). There are three main barriers that have arisen to the adoption of these technologies: (1) a shortage of health professional faculty who are familiar with EHRs and related technologies, (2) a shortage of health informatics specialists who can implement these technologies, and (3) poor access to differing types of EH...

  7. Botnet analysis using ensemble classifier

    Directory of Open Access Journals (Sweden)

    Anchit Bijalwan

    2016-09-01

    Full Text Available This paper analyses botnet traffic using an ensemble-of-classifiers algorithm to find bot evidence. We used the ISCX dataset for training and testing. After extracting the features of both the training and testing datasets, we separated the traffic into two classes, normal and botnet, and labelled it accordingly. We then applied an ensemble of classifiers using a modern data mining tool. Our experimental results show that an ensemble of classifiers finds bot evidence better than a single classifier. Ensemble-based classifiers outperform a single classifier either by combining the strengths of multiple algorithms or by introducing diversity into the same classifier through varied input. With the voting method of the ensemble-based classifier, accuracy increased from 93.37% to 96.41%.
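The voting idea the abstract evaluates can be sketched with a hard-majority vote over weak single-feature rules. Everything below is invented for illustration: the flow features, thresholds, and toy labels are not from the ISCX dataset, and the rules stand in for whatever base classifiers the ensemble combines:

```python
from collections import Counter

# Hypothetical flow records: (pkts_per_sec, mean_duration_s, payload_entropy), label
# Labels: 1 = botnet, 0 = normal.
FLOWS = [
    ((250.0, 0.2, 1.5), 1),  # classic beaconing bot
    ((300.0, 0.1, 7.5), 1),  # encrypted-payload bot (high entropy fools rule_entropy)
    ((40.0,  0.3, 1.2), 1),  # low-rate bot (fools rule_rate)
    ((35.0,  9.0, 6.8), 0),  # normal bulk transfer
    ((260.0, 8.5, 7.1), 0),  # normal video stream (high rate fools rule_rate)
    ((30.0,  0.2, 6.9), 0),  # normal short burst (fools rule_duration)
]

# Three weak single-feature classifiers with invented thresholds.
def rule_rate(x):     return 1 if x[0] > 200 else 0   # high packet rate -> bot
def rule_duration(x): return 1 if x[1] < 1.0 else 0   # short flows -> bot
def rule_entropy(x):  return 1 if x[2] < 4.0 else 0   # low payload entropy -> bot

RULES = [rule_rate, rule_duration, rule_entropy]

def vote(x):
    """Hard majority vote over the individual rules."""
    return Counter(r(x) for r in RULES).most_common(1)[0][0]

def accuracy(clf, data):
    return sum(clf(x) == y for x, y in data) / len(data)

if __name__ == "__main__":
    for r in RULES:
        print(r.__name__, accuracy(r, FLOWS))   # each rule errs on some flows
    print("ensemble", accuracy(vote, FLOWS))    # the vote corrects them
```

On this toy data each rule misclassifies at least one flow, while the majority vote classifies all six correctly, illustrating how diversity among base classifiers lifts ensemble accuracy.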

  8. [Effects of carbon sources, temperature and electron acceptors on biological phosphorus removal].

    Science.gov (United States)

    Han, Yun; Xu, Song; Dong, Tao; Wang, Bin-Fan; Wang, Xian-Yao; Peng, Dang-Cong

    2015-02-01

    Effects of carbon sources, temperature and electron acceptors on phosphorus uptake and release were investigated in a pilot-scale oxidation ditch. Phosphorus uptake and release rates were measured with different carbon sources (domestic sewage, sodium acetate, glucose) at 25 °C. The results showed that the minimum phosphorus uptake and release rates, obtained with glucose, were 5.12 mg/(g·h) and 6.43 mg/(g·h), respectively, while those of domestic sewage were similar to those of sodium acetate. Phosphorus uptake and release rates increased with temperature (12, 16, 20 and 25 °C) using sodium acetate as the carbon source. The anoxic phosphorus uptake rate decreased with added COD. Electron acceptors (oxygen, nitrate, nitrite) had significant effects on the phosphorus uptake rate, in the order oxygen > nitrate > nitrite. The mass ratios of anoxic P uptake to N consumption (P_uptake/N_consumption) for nitrate and nitrite were 0.96 and 0.65, respectively. PMID:26031087

  9. Educating Health Professionals about the Electronic Health Record (EHR: Removing the Barriers to Adoption

    Directory of Open Access Journals (Sweden)

    Paule Bellwood

    2011-03-01

    Full Text Available In the healthcare industry there has been a significant rise in the use of electronic health records (EHRs) in health care settings (e.g. hospital, clinic, physician office and home). Three main barriers have arisen to the adoption of these technologies: (1) a shortage of health professional faculty who are familiar with EHRs and related technologies, (2) a shortage of health informatics specialists who can implement these technologies, and (3) poor access to differing types of EHR software. In this paper we outline a novel solution to these barriers: the development of a web portal that provides faculty and health professional students with access to multiple differing types of EHRs over the WWW. The authors describe how the EHR is currently being used in educational curricula and how it has overcome many of these barriers. The authors also briefly describe the strengths and limitations of the approach.

  10. Development of removal technology for volatile organic compounds (VOCs) using electron beam

    International Nuclear Information System (INIS)

    The Air Pollution Control Law was revised in May 2004 to reduce VOC emissions from factories. After enforcement of the revised law, existing and new factories and plants that use large amounts of VOCs will be required to reduce their emissions to the atmosphere. In existing large-scale factories and plants, ventilation gases with VOC concentrations from a few hundred ppmv up to a few percent are already treated by absorption on activated carbon, thermal incineration, catalytic oxidation, etc. Additional compact treatment systems are required for the purification of high flow-rate ventilation air containing dilute VOCs. Electron beam (EB) treatment is suitable for such time-saving purification of ventilation air, because dilute VOCs can be decomposed quickly by high concentrations of EB-induced free radicals. In our group, a purification process using EB irradiation has been developed based on the decomposition reactions and property changes of the organics. Aerosols and gaseous organics were produced from aromatic hydrocarbons in air by EB irradiation; their yields relative to decomposed chlorobenzene were 39-43% and 26-28%, respectively, at doses of 4-8 kGy. During filtration the filter became clogged, because the sticky aerosols absorb gaseous water from the air mixture. Collection of the aerosols, for example with an electrostatic precipitator downstream of EB irradiation, is regarded as one possible purification treatment for aromatic hydrocarbon/air mixtures. Chloroethylenes, except for monochloroethylene, are decomposed into water-soluble gaseous primary products, such as chloroacetyl chlorides, carbonyl chloride and formyl chloride, through Cl-atom chain oxidation in air mixtures under EB irradiation.
The hydrolysis of these gaseous products in irradiated air mixtures is expected to be

  11. Effect of residual chips on the material removal process of the bulk metallic glass studied by in situ scratch testing inside the scanning electron microscope

    OpenAIRE

    Hu Huang; Hongwei Zhao; Chengli Shi; Boda Wu; Zunqiang Fan; Shunguang Wan; Chunyang Geng

    2012-01-01

    Research on the material removal mechanism is important for precision and ultra-precision manufacturing. In this paper, a novel scratch device is proposed, built around a linear actuator based on the parasitic motion principle. The device has a compact structure and can be installed on the stage of a scanning electron microscope (SEM) to carry out in situ scratch testing. The effect of residual chips on the material removal process of a bulk metallic glass (BMG) was studied by in situ scratch testing ...

  12. Emergent behaviors of classifier systems

    Energy Technology Data Exchange (ETDEWEB)

    Forrest, S.; Miller, J.H.

    1989-01-01

    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems arise, to study how such behaviors might emerge over time, and to make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  13. Electronic structure calculations of mercury mobilization from mineral phases and photocatalytic removal from water and the atmosphere

    International Nuclear Information System (INIS)

    Mercury is a hazardous environmental pollutant mobilized from natural sources, and anthropogenically contaminated and disturbed areas. Current methods to assess mobility and environmental impact are mainly based on field measurements, soil monitoring, and kinetic modelling. In order to understand in detail the extent to which different mineral sources can give rise to mercury release it is necessary to investigate the complexity at the microscopic level and the possible degradation/dissolution processes. In this work, we investigated the potential for mobilization of mercury structurally trapped in three relevant minerals occurring in hot spring environments and mining areas, namely, cinnabar (α-HgS), corderoite (α-Hg3S2Cl2), and mercuric chloride (HgCl2). Quantum chemical methods based on density functional theory as well as more sophisticated approaches are used to assess the possibility of a) direct photoreduction and formation of elemental Hg at the surface of the minerals, providing a path for ready release in the environment; and b) reductive dissolution of the minerals in the presence of solutions containing halogens. Furthermore, we study the use of TiO2 as a potential photocatalyst for decontamination of polluted waters (mainly Hg2+-containing species) and air (atmospheric Hg0). Our results partially explain the observed pathways of Hg mobilization from relevant minerals and the microscopic mechanisms behind photocatalytic removal of Hg-based pollutants. Possible sources of disagreement with observations are discussed and further improvements to our approach are suggested. - Highlights: • Mercury mobilization pathways from three Hg bearing minerals were studied. • Their electronic properties were analysed using quantum mechanical modelling. • Cinnabar and corderoite are not photodegradable, but mercuric chloride is. • The trend is reversed for dissolution induced by the presence of halogen couples. • Photocatalytic removal of Hg from air and

  14. Application of ultrasound and air stripping for the removal of aromatic hydrocarbons from spent sulfidic caustic for use in autotrophic denitrification as an electron donor.

    Science.gov (United States)

    Lee, Jae-Ho; Park, Jeung-Jin; Choi, Gi-Choong; Byun, Im-Gyu; Park, Tae-Joo; Lee, Tae-Ho

    2013-01-01

    Spent sulfidic caustic (SSC) produced by the petroleum industry can be reused as an electron donor for sulfur-based autotrophic denitrification in biological nitrogen removal, because it contains a large amount of dissolved sulfur. However, SSC has to be refined because it also contains aromatic hydrocarbons, typically benzene, toluene, ethylbenzene and xylene (BTEX) and phenol, which are recalcitrant organic compounds. In this study, laboratory-scale ultrasound irradiation and air stripping were applied to remove these aromatic hydrocarbons. In the ultrasound system, both BTEX and phenol were removed exponentially over 60 min of reaction time, with a maximum removal efficiency of about 80%. In contrast, the air stripping system achieved about 95% BTEX removal within 30 min, but no significant phenol removal, indicating that air stripping removes BTEX more efficiently than ultrasound irradiation. Because air stripping did not remove phenol, an additional process for degrading phenol was required; accordingly, we applied a combined ultrasound and air stripping process. In these experiments, the removal efficiencies of BTEX and phenol were improved compared with ultrasound or air stripping alone. The combined ultrasound and air stripping treatment is therefore appropriate for refining SSC.

  15. Degradation and acute toxicity removal of the antidepressant Fluoxetine (Prozac(®)) in aqueous systems by electron beam irradiation.

    Science.gov (United States)

    Silva, Vanessa Honda Ogihara; Dos Santos Batista, Ana Paula; Silva Costa Teixeira, Antonio Carlos; Borrely, Sueli Ivone

    2016-06-01

    Electron beam irradiation (EBI) has been considered an advanced technology for the treatment of water and wastewater, yet very few previous investigations have reported its use for removing pharmaceutical pollutants. In this study, the degradation of fluoxetine (FLX), an antidepressant marketed as Prozac(®), was investigated using EBI at an initial FLX concentration of 19.4 ± 0.2 mg L(-1). More than 90% FLX degradation was achieved at 0.5 kGy, with FLX below the detection limit (0.012 mg L(-1)) at doses higher than 2.5 kGy. The elucidation of organic byproducts by direct injection mass spectrometry, along with the results of ion chromatography, indicated hydroxylation of FLX molecules with release of fluoride and nitrate anions. Nevertheless, about 80% of the total organic carbon concentration remained even at doses of 7.5 kGy or higher. Acute toxicity decreased by 86.8% and 9.6% for Daphnia similis and Vibrio fischeri, respectively, after EBI exposure at 5 kGy. These results suggest that EBI could be an alternative for eliminating FLX and decreasing residual toxicity in wastewater generated by pharmaceutical formulation facilities, although further investigation is needed to correlate the FLX degradation mechanism with the toxicity results. PMID:26961524
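The dose-response reported above is consistent with the first-order (exponential) dose dependence commonly assumed in radiation chemistry. The sketch below illustrates that model only: the rate constant `k` is back-fitted here from the ~90% degradation reported at 0.5 kGy, and is an assumption, not a value given in the paper.

```python
import math

def residual_fraction(dose_kgy: float, k: float) -> float:
    """First-order dose-response model: C/C0 = exp(-k * D)."""
    return math.exp(-k * dose_kgy)

# Back-fit k from the ~90% degradation reported at 0.5 kGy (an assumption,
# not a value from the paper): 0.10 = exp(-k * 0.5).
k = -math.log(0.10) / 0.5  # about 4.6 per kGy

c0 = 19.4  # initial FLX concentration in mg/L, from the abstract
for dose in (0.5, 1.0, 2.5):
    print(f"{dose} kGy -> {c0 * residual_fraction(dose, k):.2e} mg/L")
```

Under this assumed model the predicted residual at 2.5 kGy is on the order of 2e-4 mg/L, below the reported 0.012 mg/L detection limit, which is at least consistent with the abstract's observation.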

  16. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, but also had a negative impact on CERN’s reputation --- i...

  17. Efficacy of various root canal irrigants on removal of smear layer in the primary root canals after hand instrumentation: A scanning electron microscopy study

    Directory of Open Access Journals (Sweden)

    Hariharan V

    2010-01-01

    Full Text Available Aim: The purpose of this in-vitro study was to determine the efficacy of various irrigants in removing the smear layer in primary teeth root canals after hand instrumentation. Materials and Methods: The present study consisted of 30 human primary incisors which were sectioned horizontally at the cementoenamel junction. The specimens were divided randomly into four experimental groups and one control group of six teeth each, and each group was treated with a specific irrigant. 5.25% NaOCl, 5.25% NaOCl + 10% EDTA, 6% citric acid, 2% chlorhexidine, and saline (control) were the irrigants evaluated for efficacy in removal of the smear layer. The specimens were split along the longitudinal axis using a chisel after placing superficial grooves in the cementum not extending to the root canal. The exposed surface was subjected to scanning electron microscopic analysis to reveal the efficacy of the irrigants in removal of the smear layer. The representative areas were evaluated twice, at a 15-day interval, by a single evaluator. The scale for smear layer removal by Rome et al. was modified and used in the present study. Results: The scanning electron microscopy images showed that, among the tested irrigants, citric acid had the best efficacy in removing the smear layer without altering the normal dentinal structures, which was supported by the lowest mean smear scores. The images from the 10% EDTA + 5.25% sodium hypochlorite group showed that even though it removed the smear layer, it adversely affected the dentine structure. SEM images of the other groups (sodium hypochlorite, chlorhexidine) revealed that these irrigants do not have the capacity to remove the smear layer in primary teeth. Conclusions: The results of the present study clearly indicate the superior efficacy of 6% citric acid over the other tested irrigants in removing the smear layer in primary teeth root canals.

  18. Optimally Training a Cascade Classifier

    CERN Document Server

    Shen, Chunhua; Hengel, Anton van den

    2010-01-01

    Cascade classifiers are widely used in real-time object detection. Different from conventional classifiers that are designed for a low overall classification error rate, a classifier in each node of the cascade is required to achieve an extremely high detection rate and only a moderate false positive rate. Although a few reported methods address this requirement in the context of object detection, there is no principled feature selection method that explicitly takes this asymmetric node learning objective into account. We provide such an algorithm here. We show that a special case of the biased minimax probability machine has the same formulation as the linear asymmetric classifier (LAC) of \cite{wu2005linear}. We then design a new boosting algorithm that directly optimizes the cost function of LAC. The resulting totally-corrective boosting algorithm is implemented by the column generation technique in convex optimization. Experimental results on object detection verify the effectiveness of the proposed bo...
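The node-by-node evaluation described above can be sketched in a few lines. This is a hedged illustration of how a cascade rejects early, not of the paper's LAC boosting; the stage scoring functions and thresholds below are invented for the example.

```python
from typing import Callable, List, Tuple

# A cascade node: (scoring function, acceptance threshold).
Stage = Tuple[Callable[[List[float]], float], float]

def cascade_classify(x: List[float], stages: List[Stage]) -> bool:
    """Pass the sample through each node in turn; reject at the first node
    whose score falls below its threshold. Because each node is tuned for a
    very high detection rate, true objects almost always survive every node,
    while most negatives are discarded cheaply in the early stages."""
    for score, threshold in stages:
        if score(x) < threshold:
            return False  # early rejection
    return True

# Toy nodes with hand-picked linear scores and thresholds (illustrative only).
stages: List[Stage] = [
    (lambda x: x[0] + x[1], 1.0),       # very cheap first node
    (lambda x: 2 * x[0] - x[1], 0.5),   # slightly more selective second node
]

print(cascade_classify([1.0, 0.8], stages))  # True: passes both nodes
print(cascade_classify([0.1, 0.2], stages))  # False: rejected by the first node
```

The asymmetric node objective (very high detection rate, moderate false positive rate) enters through how each node's threshold is chosen; the paper's contribution is a boosting algorithm that optimizes for exactly that objective rather than overall error.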

  19. Subsurface Biogeochemical Heterogeneity (Field-scale removal of U(VI) from groundwater in an alluvial aquifer by electron donor amendment)

    Energy Technology Data Exchange (ETDEWEB)

    Long, Philip E.; Lovley, Derek R.; N' Guessan, A. L.; Nevin, Kelly; Resch, C. T.; Arntzen, Evan; Druhan, Jenny; Peacock, Aaron; Baldwin, Brett; Dayvault, Dick; Holmes, Dawn; Williams, Ken; Hubbard, Susan; Yabusaki, Steve; Fang, Yilin; White, D. C.; Komlos, John; Jaffe, Peter

    2006-06-01

    Determine if biostimulation of alluvial aquifers by electron donor amendment can effectively remove U(VI) from groundwater at the field scale. Uranium contamination in groundwater is a significant problem at several DOE sites. In this project, the possibility of accelerating bioreduction of U(VI) to U(IV) as a means of decreasing U(VI) concentrations in groundwater is directly addressed by conducting a series of field-scale experiments. Scientific goals include demonstrating the quantitative linkage between microbial activity and U loss from groundwater and relating the dominant terminal electron accepting processes to the rate of U loss. The project is currently focused on understanding the mechanisms for unexpected long-term (~2 years) removal of U after stopping electron donor amendment. Results obtained in the project successfully position DOE and others to apply biostimulation broadly to U contamination in alluvial aquifers.

  20. Removal of CO from CO-contaminated hydrogen gas by carbon-supported rhodium porphyrins using water-soluble electron acceptors

    Science.gov (United States)

    Yamazaki, Shin-ichi; Siroma, Zyun; Asahi, Masafumi; Ioroi, Tsutomu

    2016-10-01

    Carbon-supported Rh porphyrins catalyze the oxidation of carbon monoxide by water-soluble electron acceptors. The rate of this reaction is plotted as a function of the redox potential of the electron acceptor. The rate increases with an increase in the redox potential until it reaches a plateau. This profile can be explained in terms of the electrocatalytic CO oxidation activity of the Rh porphyrin. The removal of CO from CO(2%)/H2 by a solution containing a carbon-supported Rh porphyrin and an electron acceptor is examined. The complete conversion of CO to CO2 is achieved with only a small amount of Rh porphyrin. Rh porphyrin on carbon black gives higher conversion than Rh porphyrin dissolved in solution. This reaction can be used not only to remove CO from the anode gas of stationary polymer electrolyte fuel cells but also to regenerate a reductant in indirect CO fuel cell systems.

  1. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. To make the idea of hybridization clear, it focuses primarily on introducing the different levels of hybridization and on the problems encountered when undertaking such projects. Data and knowledge incorporation in hybridization are treated first, followed by the still-growing area of classifier systems known as combined classifiers. The book covers these state-of-the-art topics along with the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature-space splitting, one-class classification, imbalanced data, and data stream classification.

  2. Effects of brushing in a classifying machine on the cuticles of Fuji and Gala apples

    Directory of Open Access Journals (Sweden)

    Renar João Bender

    2009-06-01

    Full Text Available The cuticle, a layer that covers the fruit epidermis, has a protective function against environmental stresses such as wind, temperature, chemicals and drought, not only when the fruit is attached to the plant, but also after harvest. Some postharvest procedures may affect the external layers of the fruit, such as the cuticle. The objective of this work was to evaluate the effects of brushing in a classifying machine on the cuticles of apples under scanning electron microscopy (SEM). Two experiments were conducted to test brushing on the cultivars Fuji and Gala using heavy and smooth brushes. The experiments consisted of three replicates of three apples each, with three samples taken from the equatorial area of each fruit for analysis under SEM. The brushes of the classifying machine altered the cuticular layer, dragging it, modifying its structure, removing crystalloids from the cuticular wax layer, and forming cracks. There were no differences between the effects of the two types of brushes tested on the cuticles of the apples. The classifying machine used commercially is capable of producing effects similar to those encountered in the brushing experiments conducted on the prototype in the laboratory, partially removing the protective wax content of the apple’s cuticle.

  3. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
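The two ingredients named above (a Gaussian model for the features and a prior over the class labels of the 6 nearest 3-D neighbours) can be illustrated with a deliberately simplified sketch. This is not the authors' simultaneous-Gaussian formulation: the 1-D likelihood, the Potts-style neighbour prior, and all parameter values below are assumptions made for the example.

```python
import math

def gaussian_loglik(x: float, mean: float, var: float) -> float:
    """Log-density of a 1-D Gaussian class-conditional model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def contextual_classify(x, neighbour_labels, params, beta=0.8):
    """Score each class by its Gaussian log-likelihood plus a contextual
    prior term that rewards agreement with the labels of the 6 nearest
    3-D neighbours (a simple Potts-style prior); return the best class."""
    scores = {}
    for cls, (mean, var) in params.items():
        agreement = sum(1 for lab in neighbour_labels if lab == cls)
        scores[cls] = gaussian_loglik(x, mean, var) + beta * agreement
    return max(scores, key=scores.get)

# Two illustrative classes; feature value 0.9, five of six neighbours labelled "B".
params = {"A": (0.0, 1.0), "B": (1.0, 1.0)}
print(contextual_classify(0.9, ["B", "B", "B", "B", "B", "A"], params))
```

Here the neighbour term pulls ambiguous voxels toward the locally dominant class, which is the essential effect of a contextual prior; the full model instead places a joint distribution over the feature vectors and class variables of the whole neighbourhood.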

  4. Removal of SO2 and NOx from flue gas by means of a spray dryer/electron beam combination: a feasibility study

    International Nuclear Information System (INIS)

    This study examines the feasibility of adding an electron beam between the spray dryer and the fabric filter of dry scrubber flue gas desulfurization (FGD) systems. The beam promises effective removal of nitrogen oxides (NOx) and sulfur dioxide (SO2), even at higher coal-sulfur levels than are usually economic for dry scrubbers. The beam excites gas molecules, promoting reactions that convert SO2 and NOx to acids that then react with calcium compounds and are removed by the filter. The concerns examined here are feasibility and waste disposal. The cost findings are promising for both manufacture and operation. The system uses commercially available components. The relatively low temperatures and high humidity downstream of the spray dryer favor economic beam operation. The beam removes SO2, so the dryer can be run for economy rather than high removal. The beam's incidental heating effect reduces reheat cost. Safe landfilling of the nitrate-rich waste appears practical, with leachate carrying no more nitrate than natural rain and dustfall. We expect natural pozzolanic reactions between alumina-silica compounds in the fly ash and lime compounds from the spray dryer to form an impermeable concrete-like material within 10 days after landfilling. A dry scrubber with an electron beam appears competitive with commercial FGD systems, and we recommend a pilot-scale operation

  5. A comparative evaluation of smear layer removal by using edta, etidronic acid, and maleic acid as root canal irrigants: An in vitro scanning electron microscopic study

    OpenAIRE

    Aby Kuruvilla; Bharath Makonahalli Jaganath; Sahadev Chickmagaravalli Krishnegowda; Praveen Kumar Makonahalli Ramachandra; Dexton Antony Johns; Aby Abraham

    2015-01-01

    Aim: The purpose of this study is to evaluate and compare the efficacy of 17% EDTA, 18% etidronic acid, and 7% maleic acid in smear layer removal using scanning electron microscopic image analysis. Materials and Methods: Thirty, freshly extracted mandibular premolars were used. The teeth were decoronated to obtain working length of 17mm and instrumentation up to 40 size (K file) with 2.5% NaOCl irrigation between each file. The samples were divided into Groups I (17% ethylenediaminetetraa...

  6. A comparative evaluation of smear layer removal by using edta, etidronic acid, and maleic acid as root canal irrigants: An in vitro scanning electron microscopic study

    OpenAIRE

    Kuruvilla, Aby; Jaganath, Bharath Makonahalli; Krishnegowda, Sahadev Chickmagaravalli; Ramachandra, Praveen Kumar Makonahalli; Johns, Dexton Antony; Abraham, Aby

    2015-01-01

    Aim: The purpose of this study is to evaluate and compare the efficacy of 17% EDTA, 18% etidronic acid, and 7% maleic acid in smear layer removal using scanning electron microscopic image analysis. Materials and Methods: Thirty, freshly extracted mandibular premolars were used. The teeth were decoronated to obtain working length of 17mm and instrumentation up to 40 size (K file) with 2.5% NaOCl irrigation between each file. The samples were divided into Groups I (17% ethylenediaminetetraaceti...

  7. Effect of residual chips on the material removal process of the bulk metallic glass studied by in situ scratch testing inside the scanning electron microscope

    Directory of Open Access Journals (Sweden)

    Hu Huang

    2012-12-01

    Full Text Available Research on the material removal mechanism is important for precision and ultra-precision manufacturing. In this paper, a novel scratch device is proposed that integrates a linear actuator based on the parasitic motion principle. The device has a compact structure and can be installed on the stage of a scanning electron microscope (SEM) to carry out in situ scratch testing. The effect of residual chips on the material removal process of bulk metallic glass (BMG) was studied by in situ scratch testing inside the SEM. The whole removal process of the BMG during the scratch was captured in real time. Formation and growth of lamellar chips on the rake face of the Cube-Corner indenter were observed dynamically. Experimental results indicate that when many chips accumulate on the rake face of the indenter and obstruct the forward flow of material, the material flows laterally and downward to find a new location and direction for the formation of new chips. Because the material removal processes are similar, in situ scratch testing has the potential to be a powerful research tool for studying the material removal mechanisms of single point diamond turning, single grit grinding, mechanical polishing and grating fabrication.

  8. Effect of different final irrigating solutions on smear layer removal in apical third of root canal: A scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Sayesh Vemuri

    2016-01-01

    Full Text Available Aim: The aim of this in vitro study was to compare the smear layer removal efficacy of different irrigating solutions at the apical third of the root canal. Materials and Methods: Forty human single-rooted mandibular premolar teeth were taken and decoronated to standardize the canal length to 14 mm. They were prepared with the ProTaper rotary system to an apical preparation of file size F3. Prepared teeth were randomly divided into four groups (n = 10): saline (Group 1; negative control), ethylenediaminetetraacetic acid (Group 2), BioPure MTAD (Group 3), and QMix 2 in 1 (Group 4). After final irrigation with the tested irrigants, the teeth were split into two halves longitudinally and observed under a scanning electron microscope (SEM) for removal of the smear layer. The SEM images were then analyzed for the amount of smear layer present using a three-score system. Statistical Analysis: Data were analyzed using the Kruskal-Wallis test and Mann-Whitney U-test. Results: Intergroup comparison showed a statistically significant difference in the smear layer removal efficacy of the tested irrigants. QMix 2 in 1 was the most effective in removing the smear layer compared to the other tested irrigants. Conclusion: QMix 2 in 1 is the most effective final irrigating solution for smear layer removal.

  9. Classifying unstructured text using structured training instances and ensemble classifiers

    OpenAIRE

    Lianos, Andreas; Yang, Yanyan

    2015-01-01

    Typical supervised classification techniques require training instances similar to the values that need to be classified. This research proposes a methodology that can utilize training instances found in a different format. The benefit of this approach is that it allows the use of traditional classification techniques, without the need to hand-tag training instances if the information exists in other data sources. The proposed approach is presented through a practical classification applicati...

  10. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows designing novel classifier models that are hardware-efficiently implementable. The...

  11. 75 FR 705 - Classified National Security Information

    Science.gov (United States)

    2010-01-05

    ... Executive Order 13526--Classified National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order ``Classified National Security Information'' Order of December 29, 2009... ] Executive Order 13526 of December 29, 2009 Classified National Security Information This order prescribes...

  12. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security......

  13. Classifying self-gravitating radiations

    CERN Document Server

    Kim, Hyeong-Chan

    2016-01-01

    We study static systems of self-gravitating radiations confined in a sphere by using numerical and analytic calculations. We classify and analyze the solutions systematically. Due to the scaling symmetry, any solution can be represented as a segment of a solution curve on a plane of two-dimensional scale-invariant variables. We find that a system can be conveniently parametrized by three parameters representing the solution curve, the scaling, and the system size, instead of the parameters defined at the outer boundary. The solution curves are classified into three types representing regular solutions, and conically singular solutions with and without an object that resembles an event horizon up to causal disconnectedness. For the last type, the behavior of a self-gravitating system is simple enough to allow analytic calculations.

  14. A comparative evaluation of smear layer removal by using edta, etidronic acid, and maleic acid as root canal irrigants: An in vitro scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Aby Kuruvilla

    2015-01-01

    Full Text Available Aim: The purpose of this study was to evaluate and compare the efficacy of 17% EDTA, 18% etidronic acid, and 7% maleic acid in smear layer removal using scanning electron microscopic image analysis. Materials and Methods: Thirty freshly extracted mandibular premolars were used. The teeth were decoronated to obtain a working length of 17 mm and instrumented up to size 40 (K file) with 2.5% NaOCl irrigation between each file. The samples were divided into Groups I (17% ethylenediaminetetraacetic acid (EDTA)), II (18% etidronic acid), and III (7% maleic acid), containing 10 samples each. Longitudinal sectioning of the samples was done. The samples were then observed under a scanning electron microscope (SEM) at apical, middle, and coronal levels. The images were scored according to the criteria: 1. no smear layer, 2. moderate smear layer, and 3. heavy smear layer. Statistical Analysis: Data were analyzed statistically using Kruskal-Wallis analysis of variance (ANOVA) followed by the Mann-Whitney U test for individual comparisons. The level of significance was set at 0.05. Results: The present study showed that all three experimental irrigants removed the smear layer from the different tooth levels (coronal, middle, and apical). Final irrigation with 7% maleic acid was more efficient than 17% EDTA and 18% etidronic acid in removing the smear layer from the apical third of the root canal.

  15. Comparative evaluation of 15% ethylenediamine tetra-acetic acid plus cetavlon and 5% chlorine dioxide in removal of smear layer: A scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Sandeep Singh

    2013-01-01

    Full Text Available Aims: The purpose of this study was to compare the efficacy of smear layer removal by 5% chlorine dioxide and 15% ethylenediamine tetra-acetic acid plus Cetavlon (EDTAC) from human root canal dentin. Materials and Methods: Fifty single-rooted human mandibular anterior teeth were divided into two groups of 20 teeth each and a control group of 10 teeth. The root canals were prepared up to ProTaper F3 and initially irrigated with 2% sodium hypochlorite, followed by 1 min of irrigation with 15% EDTAC or 5% chlorine dioxide, respectively. The control group was irrigated with saline. The teeth were longitudinally split and observed under a scanning electron microscope (SEM) at ×2000 magnification. Statistical Analysis Used: The statistical analysis was done using a General Linear Mixed Model. Results: At the coronal third, no statistically significant difference was found between 15% EDTAC and 5% chlorine dioxide in removing the smear layer. In the middle and apical third regions, 15% EDTAC showed better smear layer removal ability than 5% chlorine dioxide. Conclusion: Final irrigation with 15% EDTAC is superior to 5% chlorine dioxide in removing the smear layer in the middle and apical thirds of radicular dentin.

  16. Energy Efficient Removal of Volatile Organic Compounds (VOCs) and Organic Hazardous Air Pollutants (o-HAPs) from Industrial Waste Streams by Direct Electron Oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, A. L.

    2011-10-19

    This research program investigated and quantified the capability of direct electron beam destruction of volatile organic compounds and organic hazardous air pollutants in model industrial waste streams and calculated the energy savings that would be realized by the widespread adoption of the technology over traditional pollution control methods. Specifically, this research determined the quantity of electron beam dose required to remove 19 of the most important non-halogenated air pollutants from waste streams and constructed a technical and economic model for the implementation of the technology in key industries including petroleum refining, organic & solvent chemical production, food & beverage production, and forest & paper products manufacturing. Energy savings of 75 - 90% and greenhouse gas reductions of 66 - 95% were calculated for the target market segments.

  17. Effectiveness of four different final irrigation activation techniques on smear layer removal in curved root canals : a scanning electron microscopy study.

    Directory of Open Access Journals (Sweden)

    Puneet Ahuja

    2014-02-01

    Full Text Available The aim of this study was to assess the efficacy of apical negative pressure (ANP, manual dynamic agitation (MDA, passive ultrasonic irrigation (PUI and needle irrigation (NI as final irrigation activation techniques for smear layer removal in curved root canals.Mesiobuccal root canals of 80 freshly extracted maxillary first molars with curvatures ranging between 25° and 35° were used. A glide path with #08-15 K files was established before cleaning and shaping with Mtwo rotary instruments (VDW, Munich, Germany up to size 35/0.04 taper. During instrumentation, 1 ml of 2.5% NaOCl was used at each change of file. Samples were divided into 4 equal groups (n=20 according to the final irrigation activation technique: group 1, apical negative pressure (ANP (EndoVac; group 2, manual dynamic agitation (MDA; group 3, passive ultrasonic irrigation (PUI; and group 4, needle irrigation (NI. Root canals were split longitudinally and subjected to scanning electron microscopy. The presence of smear layer at coronal, middle and apical levels was evaluated by superimposing 300-μm square grid over the obtained photomicrographs using a four-score scale with X1,000 magnification.Amongst all the groups tested, ANP showed the overall best smear layer removal efficacy (p < 0.05. Removal of smear layer was least effective with the NI technique.ANP (EndoVac system can be used as the final irrigation activation technique for effective smear layer removal in curved root canals.

  18. Scanning electron microscopy analysis of the growth of dental plaque on the surfaces of removable orthodontic aligners after the use of different cleaning methods

    Directory of Open Access Journals (Sweden)

    Levrini L

    2015-12-01

    Full Text Available Luca Levrini, Francesca Novara, Silvia Margherini, Camilla Tenconi, Mario Raspanti Department of Surgical and Morphological Sciences, Dental Hygiene School, Research Centre Cranio Facial Disease and Medicine, University of Insubria, Varese, Italy Background: Advances in orthodontics are leading to the use of minimally invasive technologies, such as transparent removable aligners, which are able to meet high demands in terms of performance and esthetics. However, the most appropriate method of cleaning these appliances, in order to minimize the effects of microbial colonization, remains to be determined. Purpose: The aim of the present study was to identify the most effective method of cleaning removable orthodontic aligners, analyzing the growth of dental plaque as observed under scanning electron microscopy. Methods: Twelve subjects were selected for the study. All were free from caries and periodontal disease and were candidates for orthodontic therapy with invisible orthodontic aligners. The trial had a duration of 6 weeks, divided into three 2-week stages, during which three sets of aligners were used. In each stage, the subjects were asked to use a different method of cleaning their aligners: 1) running water (control condition); 2) effervescent tablets containing sodium carbonate and sulfate crystals, followed by brushing with a toothbrush; and 3) brushing alone (with a toothbrush and toothpaste). At the end of each 2-week stage, the surfaces of the aligners were analyzed under scanning electron microscopy. Results: The best results were obtained with brushing combined with the use of sodium carbonate and sulfate crystals; brushing alone gave slightly inferior results. Conclusion: On the basis of previous literature results relating to resin devices, studies evaluating the reliability of ultrasonic baths for domestic use should be encouraged. At present, pending the availability of experimental evidence, it can be suggested that dental

  19. Energy-efficient neuromorphic classifiers

    OpenAIRE

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2015-01-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumption, comparable to that of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering el...

  20. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  1. Use of information barriers to protect classified information

    International Nuclear Information System (INIS)

    This paper discusses the detailed requirements for an information barrier (IB) for use with verification systems that employ intrusive measurement technologies. The IB would protect classified information in a bilateral or multilateral inspection of classified fissile material. Such a barrier must strike a balance between providing the inspecting party the confidence necessary to accept the measurement while protecting the inspected party's classified information. The authors discuss the structure required of an IB as well as the implications of the IB on detector system maintenance. A defense-in-depth approach is proposed which would provide assurance to the inspected party that all sensitive information is protected and to the inspecting party that the measurements are being performed as expected. The barrier could include elements of physical protection (such as locks, surveillance systems, and tamper indicators), hardening of key hardware components, assurance of capabilities and limitations of hardware and software systems, administrative controls, validation and verification of the systems, and error detection and resolution. Finally, an unclassified interface could be used to display and, possibly, record measurement results. The introduction of an IB into an analysis system may result in many otherwise innocuous components (detectors, analyzers, etc.) becoming classified and unavailable for routine maintenance by uncleared personnel. System maintenance and updating will be significantly simplified if the classification status of as many components as possible can be made reversible (i.e. the component can become unclassified following the removal of classified objects)

  2. Electrons

    International Nuclear Information System (INIS)

    Fast electrons are used to produce isotopes for studying copper metabolism: Cu-64 in a cyclotron and Cu-67 in a linear accelerator. Localized electrons are responsible for the chemical and physiological characteristics of the trace elements. The elements studied are I, Cu, Co, Zn, Mo, Mn, Fe, Se, and Mg. The Cu/Mo and Cu/Zn interactions are investigated, and the levels of molybdenum, sulfate, and zinc in the food are analysed. The role of electrons in free radicals is discussed. The protective action of peroxidases and superoxidases against the harmful effects of electrons on normal physiology is also considered. Calculation of radiation damage and radiation protection is made. (author)

  3. Effectiveness of different irrigation techniques on smear layer removal in apical thirds of mesial root canals of permanent mandibular first molar: A scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Pranav Khaord

    2015-01-01

    Aim: The aim of this study was to compare smear layer removal after final irrigant activation with sonic irrigation (SI), manual dynamic agitation (MDA), passive ultrasonic irrigation (PUI), and conventional syringe irrigation (CI). Materials and Methods: Forty mesial canals of mandibular first molars (mesial roots) were cleaned and shaped using the ProTaper system to size F1, with 3% sodium hypochlorite and 17% ethylenediaminetetraacetic acid. The specimens were divided into 4 equal groups (n = 10) according to the final irrigation activation technique: group 1, PUI; group 2, MDA; group 3, SI; and group 4, control (simple irrigation). Samples were split longitudinally and examined under a scanning electron microscope for smear layer presence. Results: The control group had the highest smear scores, with the statistically significant highest mean score at P < 0.05. It was followed by ultrasonic, MDA, and finally sonic activation, with no significant differences among them. Conclusions: Final irrigant activation with SI and MDA resulted in better removal of the smear layer than CI.

  4. A comparative evaluation of different irrigation activation systems on smear layer removal from root canal: An in-vitro scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Nishi Singh

    2014-01-01

    Aim: The aim of the following study is to compare different irrigation activation systems, the F-File, CanalBrush (CB), and EndoActivator (EA), in removing the smear layer from the root canal. Materials and Methods: Root canals of eighty single-rooted, decoronated premolar teeth were instrumented using a crown-down technique and then equally divided into four groups on the basis of the irrigation activation method used: without activation (control group), and activation with the F-File, CB, and EA in Groups I, II, and III, respectively. Samples were then longitudinally sectioned and examined under a scanning electron microscope by three qualified observers using scores from 1 to 4. Data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 15.0 (SPSS Inc., Chicago, IL) at a significance level of P ≤ 0.05. Results: The minimum mean score was observed in Group II at the coronal and apical locations; Group III had the minimum score at the middle third. Differences in scores between groups were statistically significant for all three locations as well as for the overall assessment (P < 0.001). Conclusion: The CB removes the smear layer more efficiently from the root canal than the F-File and EA in the coronal and apical regions.

  5. Injector for CESAR (2 MeV electron storage ring): 2-beam, 2 MV van de Graaff generator; tank removed.

    CERN Multimedia

    1968-01-01

    The van de Graaff generator in its tank. For voltage-holding, the tank was filled with pressurized extra-dry nitrogen. Two beams emanated from two separate electron guns. The left beam, for injection into the CESAR ring, was pulsed at 50 Hz, with currents of up to 1 A for 400 ns. The right beam was sent to a spectrometer line. Its pulse length was also 400 ns, but the pulse current was 12 microA, at a rate variable from 50 kHz to 1 MHz. This allowed stabilization of the top-terminal voltage to an unprecedented stability of ±100 V, i.e. 6×10⁻⁵. Although built for a nominal voltage of 2 MV, the operational voltage was limited to 1.75 MV in order to minimize voltage breakdown events. CESAR was terminated at the end of 1967 and dismantled in 1968. R. Nettleton (left) and H. Burridge (right) are preparing the van de Graaff for shipment to the University of Swansea.

  6. The pilot plant experiment of electron beam irradiation process for removal of NOx and SOx from sinter plant exhaust gas in the iron and steel industry

    International Nuclear Information System (INIS)

    The air pollution problem has become more important with the progress of industry. Nitrogen oxides (NOx, mostly NO) and sulfur oxides (SOx, mostly SO2), which are contained in sinter plant exhaust gas, are known as serious air pollutants. In these circumstances, an attempt has been made to simultaneously remove NOx and SOx from the sinter plant exhaust gas by means of a new electron beam irradiation process. The process consists of adding a small amount of NH3 to the exhaust gas, irradiating the gas with an electron beam, forming ammonium salts by reactions of NOx and SOx with the NH3, and collecting the ammonium salts with a dry electrostatic precipitator (E.P.). Basic research on the present process had been performed using heavy-oil combustion gas. Based on those results, research was launched to study the applicability of the process to the treatment of sinter plant exhaust gas. A pilot plant capable of treating a gas flow of 3000 Nm³/h was set up, and experiments were performed from July 1977 to June 1978. The plant is described and the results are presented. (author)

  7. Improving hole injection and carrier distribution in InGaN light-emitting diodes by removing the electron blocking layer and including a unique last quantum barrier

    International Nuclear Information System (INIS)

    The effects of removing the AlGaN electron blocking layer (EBL), and using a last quantum barrier (LQB) with a unique design in conventional blue InGaN light-emitting diodes (LEDs), were investigated through simulations. Compared with the conventional LED design that contained a GaN LQB and an AlGaN EBL, the LED that contained an AlGaN LQB with a graded-composition and no EBL exhibited enhanced optical performance and less efficiency droop. This effect was caused by an enhanced electron confinement and hole injection efficiency. Furthermore, when the AlGaN LQB was replaced with a triangular graded-composition, the performance improved further and the efficiency droop was lowered. The simulation results indicated that the enhanced hole injection efficiency and uniform distribution of carriers observed in the quantum wells were caused by the smoothing and thinning of the potential barrier for the holes. This allowed a greater number of holes to tunnel into the quantum wells from the p-type regions in the proposed LED structure

  8. Pavement Crack Classifiers: A Comparative Study

    Directory of Open Access Journals (Sweden)

    S. Siddharth

    2012-12-01

    Non-Destructive Testing (NDT) is an analysis technique used to inspect metal sheets and components without harming the product. NDT causes no change to the article after inspection; the technique saves money and time in product evaluation, research, and troubleshooting. In this study, the objective is to perform NDT using soft computing techniques. Digital images are taken, and the Gray Level Co-occurrence Matrix (GLCM) extracts features from these images. The extracted features are then fed into classifiers, which separate the images into those with and without cracks. Three major classifiers are considered: neural networks, the Support Vector Machine (SVM), and linear classifiers. The performance of these classifiers is assessed and the best classifier for the given data is chosen.
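As a sketch of the pipeline this record describes, the fragment below computes a tiny horizontal-adjacency GLCM and three standard texture features from it. All function names, the feature set, and the window parameters are illustrative assumptions, not taken from the study itself.

```python
import numpy as np

def glcm(img, levels):
    """Normalized co-occurrence matrix for horizontally adjacent pixel pairs."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def texture_features(img, levels=4):
    """Contrast, homogeneity, and energy -- common GLCM texture features."""
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    energy = float(np.sum(p ** 2))
    return contrast, homogeneity, energy
```

A uniform (crack-free) patch gives zero contrast and maximal energy, so even a simple threshold on contrast separates it from a patch crossed by a dark crack line; the study instead feeds such features to trained classifiers.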

  9. Rotary fluidized dryer classifier for coal

    Energy Technology Data Exchange (ETDEWEB)

    Sakaba, M.; Ueki, S.; Matsumoto, T.

    1985-01-01

    The development of equipment is reported which uses a heat-transfer medium and hot air to dry metallurgical coal to a predetermined moisture level, and which simultaneously classifies out the dust-producing fine coal content. The integral construction of the drying and classifying zones results in a very compact configuration, with an installation area of 1/2 to 1/3 of that required for systems in which a separate dryer and classifier are combined. 6 references.

  10. 32 CFR 775.5 - Classified actions.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Classified actions. 775.5 Section 775.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a...

  11. Serefind: A Social Networking Website for Classifieds

    OpenAIRE

    Verma, Pramod

    2014-01-01

    This paper presents the design and implementation of a social networking website for classifieds, called Serefind. We designed search interfaces with a focus on security, privacy, usability, design, ranking, and communications. We deployed this site at the Johns Hopkins University, and the results show it can be used as a self-sustaining classifieds site for public or private communities.

  12. A review of learning vector quantization classifiers

    CERN Document Server

    Nova, David

    2015-01-01

    In this work we present a review of the state of the art of Learning Vector Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.

  13. Adaboost Ensemble Classifiers for Corporate Default Prediction

    Directory of Open Access Journals (Sweden)

    Suresh Ramakrishnan

    2015-01-01

    This study aims to present an alternative technique for corporate default prediction. Data mining techniques have been extensively applied to this task because of their ability to capture non-linear relationships and to perform well in the presence of noisy information, as usually happens in corporate default prediction problems. Although several advanced methods have been widely proposed, this area of research is not outdated and still needs further examination. In this study, the performance of multiple classifier systems is assessed in terms of their capability to appropriately classify default and non-default Malaysian firms listed on Bursa Malaysia. Multi-stage combinations of classifiers provided significant improvements over single classifiers. In addition, Adaboost shows an improvement in performance over single classifiers.
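To make the boosting idea concrete, here is a minimal AdaBoost over decision stumps in plain numpy. It is a generic illustration of the algorithm, not the feature set or configuration used in the study; all names and the toy data are assumptions.

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with decision stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # sample weights
    model = []
    for _ in range(rounds):
        best = None                   # (error, feature, threshold, sign)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)       # stump weight
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)              # upweight mistakes
        w /= w.sum()
        model.append((alpha, j, t, s))
    return model

def predict(model, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in model)
    return np.sign(score)
```

The weighted vote of many weak stumps is what gives the "multi-stage combination" its edge over any single classifier.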

  14. Designing Kernel Scheme for Classifiers Fusion

    CERN Document Server

    Haghighi, Mehdi Salkhordeh; Vahedian, Abedin; Modaghegh, Hamed

    2009-01-01

    In this paper, we propose a special fusion method for combining ensembles of base classifiers, utilizing new neural networks in order to improve the overall efficiency of classification. While ensembles are designed such that each classifier is trained independently and decision fusion is performed as a final procedure, in this method we are interested in making the fusion process more adaptive and efficient. This new combiner, called Neural Network Kernel Least Mean Square, attempts to fuse the outputs of the ensembles of classifiers. The proposed neural network has some special properties such as kernel abilities, least mean square features, easy learning over variants of patterns, and traditional neuron capabilities. Neural Network Kernel Least Mean Square is a special neuron which is trained with kernel least mean square properties. This new neuron is used as a classifier combiner to fuse the outputs of base neural network classifiers. The performance of this method is analyzed and compared with other fusion m...

  15. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  16. DFRFT: A Classified Review of Recent Methods with Its Application

    Directory of Open Access Journals (Sweden)

    Ashutosh Kumar Singh

    2013-01-01

    In the literature, various algorithms are available for computing the discrete fractional Fourier transform (DFRFT). In this paper, all the existing methods are reviewed, classified into four categories, and subsequently compared to find the best alternative from the viewpoint of minimal computational error, computational complexity, transform features, and additional features such as security. Subsequently, the correlation theorem of the FRFT has been utilized to significantly remove the Doppler shift caused by motion of the receiver in the DSB-SC AM signal. Finally, the role of the DFRFT has been investigated in the area of steganography.

  17. Effect of diode laser and ultrasonics with and without ethylenediaminetetraacetic acid on smear layer removal from the root canals: A scanning electron microscope study

    Science.gov (United States)

    Amin, Khalid; Masoodi, Ajaz; Nabi, Shahnaz; Ahmad, Parvaiz; Farooq, Riyaz; Purra, Aamir Rashid; Ahangar, Fayaz Ahmad

    2016-01-01

    Aim: To evaluate the effect of a diode laser and ultrasonics, with and without ethylenediaminetetraacetic acid (EDTA), on smear layer removal from root canals. Materials and Methods: A total of 120 mandibular premolars were decoronated to a working length of 12 mm and prepared with ProTaper rotary files up to size F3. Group A canals were irrigated with 1 ml of 3% sodium hypochlorite (NaOCl) followed by 3 ml of 3% NaOCl. Group B canals were irrigated with 1 ml of 17% EDTA followed by 3 ml of 3% NaOCl. Group C canals were lased with a diode laser. Group D canals were initially irrigated with 0.8 ml of 17% EDTA; the remaining 0.2 ml was used to fill the root canals, and diode laser application was then performed. Group E canals were irrigated with 1 ml of distilled water with passive ultrasonic activation, followed by 3 ml of 3% NaOCl. Group F canals were irrigated with 1 ml of EDTA with passive ultrasonic activation, followed by 3 ml of 3% NaOCl. Scanning electron microscope examination of the canals was done for the remaining smear layer at the coronal, middle, and apical third levels. Results: Ultrasonics with EDTA had the lowest smear layer scores. Conclusion: The diode laser alone performed significantly better than ultrasonics. PMID:27656060

  18. Text Classification and Classifiers:A Survey

    Directory of Open Access Journals (Sweden)

    Vandana Korde

    2012-03-01

    As most information (over 80%) is stored as text, text mining is believed to have high commercial value. Knowledge may be discovered from many sources of information; yet unstructured texts remain the largest readily available source of knowledge. Text classification assigns documents to predefined categories. In this paper we introduce text classification and its process, give an overview of classifiers, and compare some existing classifiers on the basis of a few criteria such as time complexity, principle, and performance.
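As one concrete instance of the process such surveys outline (train a classifier on labeled documents, then assign categories to new ones), here is a minimal multinomial naive Bayes text classifier with Laplace smoothing. The tokenization, function names, and toy documents are illustrative assumptions, not drawn from the paper.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns class counts, word counts, vocab."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_docs[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_docs, word_counts, vocab

def classify_nb(model, text):
    """Pick the class with the highest log posterior under the model."""
    class_docs, word_counts, vocab = model
    total = sum(class_docs.values())
    best, best_lp = None, -math.inf
    for label, n in class_docs.items():
        lp = math.log(n / total)                          # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Despite its simplicity, this kind of model is a common baseline against which the surveyed classifiers are compared.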

  19. Classifier Risk Estimation under Limited Labeling Resources

    OpenAIRE

    Kumar, Anurag; Raj, Bhiksha

    2016-01-01

    In this paper we propose strategies for estimating the performance of a classifier when labels cannot be obtained for the whole test set. The number of test instances which can be labeled is very small compared to the whole test data size. The goal, then, is to obtain a precise estimate of classifier performance using as little labeling resource as possible. Specifically, we try to answer how to select a subset of the large test set for labeling such that the performance of a classifier estimated ...

  20. Parallelism and programming in classifier systems

    CERN Document Server

    Forrest, Stephanie

    1990-01-01

    Parallelism and Programming in Classifier Systems deals with the computational properties of the underlying parallel machine, including computational completeness, programming and representation techniques, and efficiency of algorithms. In particular, efficient classifier system implementations of symbolic data structures and reasoning procedures are presented and analyzed in detail. The book shows how classifier systems can be used to implement a set of useful operations for the classification of knowledge in semantic networks. A subset of the KL-ONE language was chosen to demonstrate these operations.

  1. A Sequential Algorithm for Training Text Classifiers

    CERN Document Server

    Lewis, D D; Lewis, David D.; Gale, William A.

    1994-01-01

    The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
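A minimal sketch of the uncertainty-sampling idea: train a probabilistic model on the labeled set, score the unlabeled pool, and request labels where the predicted probability is closest to 0.5. The simple logistic model, function names, and data below are illustrative assumptions (the paper's setting is statistical text classifiers on newswire data).

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=500):
    """Logistic regression by gradient ascent; y in {0, 1}."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def most_uncertain(w, X_pool, k):
    """Indices of the k pool items whose predicted probability is nearest 0.5."""
    Xb = np.c_[np.ones(len(X_pool)), X_pool]
    p = 1 / (1 + np.exp(-Xb @ w))
    return np.argsort(np.abs(p - 0.5))[:k]
```

An active-learning loop then labels those k items, retrains, and repeats; concentrating labels near the decision boundary is what yields the large savings in manually classified training data.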

  2. Dengue—How Best to Classify It

    OpenAIRE

    Srikiatkhachorn, Anon; Rothman, Alan L.; Robert V Gibbons; Sittisombut, Nopporn; Malasit, Prida; Ennis, Francis A.; Nimmannitya, Suchitra; Kalayanarooj, Siripen

    2011-01-01

    Since the 1970s, dengue has been classified as dengue fever and dengue hemorrhagic fever. In 2009, the World Health Organization issued a new, severity-based clinical classification which differs greatly from the previous classification.

  3. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; safayani, Meharn

    2010-01-01

    The decision boundaries of the Bayes classifier are optimal because they lead to the maximum probability of correct decision. This means that if we knew the prior probabilities and the class-conditional densities, we could design a classifier giving the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as the window width. Moreover, these methods suffer from the curse of dimensionality of the feature space and the small-sample-size problem, which severely restrict their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and classifier parameters based on local information. The proposed method can classify the data with co...
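For illustration, the kind of Parzen-window classification the paper builds on can be sketched as follows, with a Gaussian window and a hand-picked width h. The sensitivity of the result to h is exactly the parameter-choice problem the paper addresses; the code and names here are an assumed generic sketch, not the paper's method.

```python
import numpy as np

def parzen_classify(X_train, y_train, x, h=0.5):
    """Pick the class whose Parzen-window density estimate at x,
    weighted by the class prior, is largest. Gaussian kernel of width h."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train)
    x = np.asarray(x, float)
    best_label, best_score = None, -np.inf
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        density = np.mean(np.exp(-d2 / (2 * h * h)))   # unnormalized KDE
        score = density * (len(Xc) / len(X_train))     # times class prior
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```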

  4. An Efficient and Effective Immune Based Classifier

    Directory of Open Access Journals (Sweden)

    Shahram Golzari

    2011-01-01

    Problem statement: The Artificial Immune Recognition System (AIRS) is the most popular and effective immune-inspired classifier. Resource competition is one stage of AIRS and is carried out based on the number of allocated resources. AIRS uses a linear method to allocate resources, and this linear resource allocation increases the training time of the classifier. Approach: In this study, a new nonlinear resource allocation method is proposed to make AIRS more efficient. The new algorithm, AIRS with the proposed nonlinear method, is tested on benchmark datasets from the UCI machine learning repository. Results: Based on the results of the experiments, using the proposed nonlinear resource allocation method decreases the training time and the number of memory cells and does not reduce the accuracy of AIRS. Conclusion: The proposed classifier is an efficient and effective classifier.

  5. Arabic Word Recognition by Classifiers and Context

    Institute of Scientific and Technical Information of China (English)

    Nadir Farah; Labiba Souici; Mokhtar Sellami

    2005-01-01

    Given the number and variety of methods used for handwriting recognition, it has been shown that there is no single method that can be called the "best". In recent years, the combination of different classifiers and the use of contextual information have become major areas of interest for improving recognition results. This paper addresses a case study on the combination of multiple classifiers and the integration of syntactic-level information for the recognition of handwritten Arabic literal amounts. To the best of our knowledge, this is the first time either of these methods has been applied to Arabic word recognition. Using three individual classifiers with high-level global features, we performed word recognition experiments. A parallel combination method was tested for all possible configuration cases of the three chosen classifiers. A syntactic analyzer makes a final decision on the candidate words generated by the best configuration scheme. The effectiveness of contextual knowledge integration in our application is confirmed by the obtained results.

  6. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  7. Nomograms for Visualization of Naive Bayesian Classifier

    OpenAIRE

    Možina, Martin; Demšar, Janez; Michael W Kattan; Zupan, Blaz

    2004-01-01

    Besides good predictive performance, the naive Bayesian classifier can also offer a valuable insight into the structure of the training data and effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute value...

  8. Classifying Genomic Sequences by Sequence Feature Analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Liu; Dian Jiao; Xiao Sun

    2005-01-01

    Traditional sequence analysis depends on sequence alignment. In this study, we analyzed various functional regions of the human genome based on sequence features, including word frequency, dinucleotide relative abundance, and base-base correlation. We analyzed human chromosome 22 and classified the upstream, exon, intron, downstream, and intergenic regions by principal component analysis and discriminant analysis of these features. The results show that the functional regions of the genome can be classified based on sequence features and discriminant analysis.

  9. Searching and Classifying non-textual information

    OpenAIRE

    Arentz, Will Archer

    2004-01-01

    This dissertation contains a set of contributions that deal with search or classification of non-textual information. Each contribution can be considered a solution to a specific problem, in an attempt to map out a common ground. The problems cover a wide range of research fields, including search in music, classifying digitally sampled music, visualization and navigation in search results, and classifying images and Internet sites. On classification of digitally sampled music, as a method for ex...

  10. Binary Classifier Calibration: Non-parametric approach

    OpenAIRE

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.; Hauskrecht, Milos

    2014-01-01

    Accurate calibration of learned probabilistic predictive models is critical for many practical prediction and decision-making tasks. There are two main categories of methods for building calibrated classifiers. One approach is to develop methods for learning probabilistic models that are well-calibrated ab initio. The other approach is to use post-processing methods for transforming the output of a classifier so that it is well calibrated, as for example histogram binning, Platt scaling, and is...
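Of the post-processing methods the abstract names, histogram binning is the simplest to sketch: split [0, 1] into bins by raw classifier score, then replace each score with the empirical positive rate of its bin on a held-out calibration set. The code below is a generic illustration of histogram binning, not the paper's non-parametric method; all names and parameters are assumptions.

```python
import numpy as np

def histogram_binning(scores, labels, bins=10):
    """Fit a calibration table: per-bin empirical frequency of the positive class."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, bins - 1)
    table = np.full(bins, 0.5)          # default for empty bins
    for b in range(bins):
        mask = idx == b
        if mask.any():
            table[b] = labels[mask].mean()
    return edges, table

def calibrate(scores, edges, table):
    """Map raw scores to calibrated probabilities via the fitted table."""
    idx = np.clip(np.digitize(scores, edges) - 1, 0, len(table) - 1)
    return table[idx]
```

Platt scaling instead fits a sigmoid to the scores; both transform outputs without retraining the underlying classifier.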

  11. Quality Classifiers for Open Source Software Repositories

    OpenAIRE

    Tsatsaronis, George; Halkidi, Maria; Giakoumakis, Emmanouel A.

    2009-01-01

    Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. The OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such data repositories. The classifiers learn to predict the successful continuation of an OSS project. The `successfulness' of projects is defined in terms of th...

  12. A multi-class large margin classifier

    Institute of Scientific and Technical Information of China (English)

    Liang TANG; Qi XUAN; Rong XIONG; Tie-jun WU; Jian CHU

    2009-01-01

    Currently there are two approaches to a multi-class support vector classifier (SVC). One is to construct and combine several binary classifiers, while the other is to directly consider all classes of data in one optimization formulation. For a K-class problem (K>2), the first approach has to construct at least K classifiers, and the second approach has to solve a much larger optimization problem proportional to K by the algorithms developed so far. In this paper, following the second approach, we present a novel multi-class large margin classifier (MLMC). This new machine can solve K-class problems in one optimization formulation without increasing the size of the quadratic programming (QP) problem proportional to K. This property allows us to construct just one classifier, with as few variables in the QP problem as possible, to classify multi-class data, and we gain the advantage of speed from it, especially when K is large. Our experiments indicate that MLMC works almost as well as (sometimes better than) many other multi-class SVCs on some benchmark data classification problems, and obtains a reasonable performance in a face recognition application on the AR face database.

  13. COMBINING CLASSIFIERS FOR CREDIT RISK PREDICTION

    Institute of Scientific and Technical Information of China (English)

    Bhekisipho TWALA

    2009-01-01

    Credit risk prediction models seek to predict quality factors such as whether an individual will default on a loan (bad applicant) or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems, including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensembles of individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predictive behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy can be improved by using pairs of classifier ensembles. Benchmarking results on five credit datasets, and comparisons with the performance of each individual classifier on predictive accuracy at various attribute noise levels, are presented. The experimental evaluation shows that the ensemble-of-classifiers technique has the potential to improve prediction accuracy.
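The voting scheme described ("having them vote for the most popular class") is plurality voting over the base classifiers' predictions. A minimal generic sketch, with illustrative names and toy classifiers:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Return the most popular class among the base classifiers' predictions.
    Each classifier is any callable mapping an input to a class label."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]
```

Any mix of models (trees, neural networks, SVMs) can be combined this way, which is why ensembles are robust to noise in individual members.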

  14. What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

    CERN Document Server

    Hu, Bao-Gang

    2011-01-01

    In this study, both Bayesian classifiers and mutual information classifiers are examined for binary classifications with or without a reject option. The general decision rules in terms of distinctions on error types and reject types are derived for Bayesian classifiers. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of "non-consistency" for interpreting cost terms. If no data is given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the theoretical differences, including the extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-info...

  15. Classifying prosthetic use via accelerometry in persons with transtibial amputations

    Directory of Open Access Journals (Sweden)

    Morgan T. Redfield, MSEE

    2013-12-01

    Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data were postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.

  16. Classifying prosthetic use via accelerometry in persons with transtibial amputations.

    Science.gov (United States)

    Redfield, Morgan T; Cagle, John C; Hafner, Brian J; Sanders, Joan E

    2013-01-01

    Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.
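A binary decision tree over windowed accelerometer features, as described above, can be sketched as follows. The feature choices and all thresholds here are illustrative assumptions for a generic pylon-mounted sensor, not the values published in the study.

```python
import math

def window_features(samples):
    """samples: list of (x, y, z) accelerations in g.
    Returns (mean magnitude, magnitude standard deviation)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return mean, math.sqrt(var)

def classify_window(samples, worn_thresh=0.05, motion_thresh=0.25,
                    upright_thresh=0.8):
    """Toy decision tree: 'not worn', 'movement', 'standing', or 'sitting'."""
    mean_mag, std_mag = window_features(samples)
    if std_mag < worn_thresh and mean_mag < 0.1:   # near-zero signal: doffed
        return "not worn"
    if std_mag > motion_thresh:                    # high variance: gait
        return "movement"
    # Low variance: use the gravity component along the pylon's long axis (z)
    mean_z = sum(z for _, _, z in samples) / len(samples)
    return "standing" if abs(mean_z) > upright_thresh else "sitting"
```

A real classifier would be trained against annotated observations rather than hand-set thresholds, but the cascade of cheap tests mirrors the worn/movement/posture splits described in the abstract.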

  17. Tick Removal

    Science.gov (United States)

... How to remove a tick: use fine-tipped tweezers to grasp the tick ...

  18. What are the differences between Bayesian classifiers and mutual-information classifiers?

    Science.gov (United States)

    Hu, Bao-Gang

    2014-02-01

    In this paper, both Bayesian and mutual-information classifiers are examined for binary classifications with or without a reject option. The general decision rules are derived for Bayesian classifiers with distinctions on error types and reject types. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of nonconsistency for interpreting cost terms. If no data are given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the differences, including the extremely class-imbalanced cases. Finally, we briefly summarize the Bayesian and mutual-information classifiers in terms of their application advantages and disadvantages, respectively. PMID:24807026
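The two viewpoints contrasted in the abstract can be illustrated with small sketches: a Chow-style Bayesian reject rule driven by a posterior threshold, and the empirical mutual information between true classes and decisions (including a reject column) that a mutual-information classifier would maximize. The threshold value is an illustrative assumption.

```python
import math

def bayes_decide(p_class1, reject_threshold=0.8):
    """Binary Bayesian rule with a reject option: abstain when the
    maximum posterior falls below the threshold (Chow-style)."""
    p_max = max(p_class1, 1.0 - p_class1)
    if p_max < reject_threshold:
        return "reject"
    return 1 if p_class1 >= 0.5 else 0

def mutual_information(confusion):
    """Empirical I(T;Y) in bits from a count matrix confusion[t][y]
    (true class t, decision y; columns may include a reject option)."""
    total = sum(sum(row) for row in confusion)
    pt = [sum(row) / total for row in confusion]
    py = [sum(confusion[t][y] for t in range(len(confusion))) / total
          for y in range(len(confusion[0]))]
    mi = 0.0
    for t, row in enumerate(confusion):
        for y, c in enumerate(row):
            if c:
                p = c / total
                mi += p * math.log2(p / (pt[t] * py[y]))
    return mi
```

Whereas the Bayesian rule needs cost terms (here folded into the threshold), the mutual-information objective is computed from the confusion counts alone, which is the sense in which the paper calls it an objective, data-driven balance.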

  19. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  20. Adapt Bagging to Nearest Neighbor Classifiers

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou; Yang Yu

    2005-01-01

    It is well-known that in order to build a strong ensemble, the component learners should be with high diversity as well as high accuracy. If perturbing the training set can cause significant changes in the component learners constructed, then Bagging can effectively improve accuracy. However, for stable learners such as nearest neighbor classifiers, perturbing the training set can hardly produce diverse component learners, therefore Bagging does not work well. This paper adapts Bagging to nearest neighbor classifiers through injecting randomness to distance metrics. In constructing the component learners, both the training set and the distance metric employed for identifying the neighbors are perturbed. A large scale empirical study reported in this paper shows that the proposed BagInRand algorithm can effectively improve the accuracy of nearest neighbor classifiers.
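The idea of perturbing both the training set and the distance metric can be sketched as below; a random Minkowski order p per component stands in for the paper's metric-injection scheme, and the range of p and the component count are assumptions, not the published BagInRand settings.

```python
import random

def minkowski(a, b, p):
    """Minkowski distance of order p between equal-length vectors."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_predict(train, x, p, k=1):
    """k-nearest-neighbor vote with a given Minkowski order p.
    train: list of (point, label) pairs."""
    neighbors = sorted(train, key=lambda xy: minkowski(xy[0], x, p))[:k]
    votes = {}
    for _, label in neighbors:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

def bag_in_rand_predict(train, x, n_components=15, k=1, rng=None):
    """Ensemble of kNN components, each built on a bootstrap sample
    AND a randomly chosen distance metric; majority vote decides."""
    rng = rng or random.Random(0)
    votes = {}
    for _ in range(n_components):
        sample = [rng.choice(train) for _ in train]  # perturb training set
        p = rng.choice([1, 2, 3])                    # perturb the metric
        label = knn_predict(sample, x, p, k)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Bootstrapping alone would leave the stable kNN components nearly identical; randomizing the metric is what injects the diversity the paper argues Bagging needs.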

  1. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon

    2012-01-01

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...

  2. Evolving Classifiers: Methods for Incremental Learning

    CERN Document Server

    Hulley, Greg

    2007-01-01

    The ability of a classifier to take on new information and classes by evolving the classifier without it having to be fully retrained is known as incremental learning. Incremental learning has been successfully applied to many classification problems, where the data is changing and is not all available at once. In this paper there is a comparison between Learn++, which is one of the most recent incremental learning algorithms, and the new proposed method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has shown good incremental learning capabilities on benchmark datasets on which the new ILUGA method has been tested. ILUGA has also shown good incremental learning ability using only a few classifiers and does not suffer from catastrophic forgetting. The results obtained for ILUGA on the Optical Character Recognition (OCR) and Wine datasets are good, with an overall accuracy of 93% and 94% respectively showing a 4% improvement over Learn++.MT for the difficult multi-class OCR dataset.

  3. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

Full Text Available Artificial immune systems are among the widely used methods for classification, which is a decision-making process. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach has many contributions compared with other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Benchmark data and remote image data are used for the experimental results. Comparative results with supervised/unsupervised artificial immune systems, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed method.

  4. A nonparametric classifier for unsegmented text

    Science.gov (United States)

    Nagy, George; Joshi, Ashutosh; Krishnamoorthy, Mukkai; Lin, Yu; Lopresti, Daniel P.; Mehta, Shashank; Seth, Sharad

    2003-12-01

    Symbolic Indirect Correlation (SIC) is a new classification method for unsegmented patterns. SIC requires two levels of comparisons. First, the feature sequences from an unknown query signal and a known multi-pattern reference signal are matched. Then, the order of the matched features is compared with the order of matches between every lexicon symbol-string and the reference string in the lexical domain. The query is classified according to the best matching lexicon string in the second comparison. Accuracy increases as classified feature-and-symbol strings are added to the reference string.

  5. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads;

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  6. Neural Classifier Construction using Regularization, Pruning

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan;

    1998-01-01

    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction...

  7. Design and evaluation of neural classifiers

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Pedersen, Morten With; Hansen, Lars Kai;

    1996-01-01

    In this paper we propose a method for the design of feedforward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme we derive a modified form of the entropy error measure and an algebraic estimate of the test error. In conjunction...

  8. 75 FR 37253 - Classified National Security Information

    Science.gov (United States)

    2010-06-28

... and Records Administration, Information Security Oversight Office, 32 CFR Parts 2001 and 2003, Classified National Security Information; Final Rule. Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010 / Rules and Regulations. NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Information...

  9. Adaptively robust filtering with classified adaptive factors

    Institute of Scientific and Technical Information of China (English)

    CUI Xianqiang; YANG Yuanxi

    2006-01-01

    The key problems in applying the adaptively robust filtering to navigation are to establish an equivalent weight matrix for the measurements and a suitable adaptive factor for balancing the contributions of the measurements and the predicted state information to the state parameter estimates. In this paper, an adaptively robust filtering with classified adaptive factors was proposed, based on the principles of the adaptively robust filtering and bi-factor robust estimation for correlated observations. According to the constant velocity model of Kalman filtering, the state parameter vector was divided into two groups, namely position and velocity. The estimator of the adaptively robust filtering with classified adaptive factors was derived, and the calculation expressions of the classified adaptive factors were presented. Test results show that the adaptively robust filtering with classified adaptive factors is not only robust in controlling the measurement outliers and the kinematic state disturbing but also reasonable in balancing the contributions of the predicted position and velocity, respectively, and its filtering accuracy is superior to the adaptively robust filter with single adaptive factor based on the discrepancy of the predicted position or the predicted velocity.
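A minimal sketch of the classified-adaptive-factor idea, assuming a 1-D constant-velocity model with a scalar position measurement: separate factors inflate (i.e., de-weight) the predicted position and velocity covariance blocks before the measurement update. The block-diagonal inflation and all parameter values are illustrative assumptions, not the paper's derivation of the factors.

```python
def kalman_step(x, P, z, dt=1.0, r=1.0, q=0.01, alpha_pos=1.0, alpha_vel=1.0):
    """One predict/update cycle for state x = [pos, vel] with 2x2
    covariance P and scalar position measurement z (H = [1, 0]).
    alpha_* >= 1 inflate the predicted covariance block-wise, reducing
    trust in the predicted position/velocity separately."""
    # Predict with the constant-velocity transition
    pos = x[0] + dt * x[1]
    vel = x[1]
    P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    P01 = P[0][1] + dt * P[1][1]
    P10 = P[1][0] + dt * P[1][1]
    P11 = P[1][1] + q
    # Classified adaptive factors: de-weight position and velocity separately
    P00 *= alpha_pos
    P11 *= alpha_vel
    # Measurement update
    s = P00 + r
    k0, k1 = P00 / s, P10 / s
    innov = z - pos
    x_new = [pos + k0 * innov, vel + k1 * innov]
    P_new = [[(1 - k0) * P00, (1 - k0) * P01],
             [P10 - k1 * P00, P11 - k1 * P01]]
    return x_new, P_new
```

Raising `alpha_pos` increases the Kalman gain on the position channel, so a disturbed predicted position is down-weighted without also discarding the predicted velocity, which is the balance the abstract describes.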

  10. Scanning electron microscopy (SEM) evaluation of sealing ability of MTA and EndoSequence as root-end filling materials with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents

    OpenAIRE

    Bolla Nagesh; Eppala Jeevani; Varri Sujana; Bharagavi Damaraju; Kaluvakolanu Sreeha; Penumaka Ramesh

    2016-01-01

    Aim: The purpose of this study was to evaluate the sealing ability of mineral trioxide aggregate (MTA) and EndoSequence with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents using scanning electron microscopy (SEM). Materials and Methods: Forty human single rooted teeth were taken. Crowns were decoronated and canals were obturated. Apically roots were resected and retrograde cavities were done. Based on the type of retrograde material placed and the typ...

  11. Enamel Surface Evaluation after Removal of Orthodontic Composite Remnants by Intraoral Sandblasting Technique and Carbide Bur Technique: A Three-Dimensional Surface Profilometry and Scanning Electron Microscopic Study

    OpenAIRE

    Mhatre, Amol C; Tandur, Arundhati P; Reddy, Sumitra S; Karunakara, B C; Baswaraj, H

    2015-01-01

    Background: The purpose of this thesis is to present a practical and efficient clinical method of returning enamel to as near its original condition as possible following removal of bonded orthodontic attachments. The main objective of this study is to evaluate and compare the iatrogenic enamel damage caused by use of two different remnant removal techniques – sandblasting technique and carbide bur technique. Materials and Methods: 40 extracted premolar teeth were selected as sample. Premolar...

  12. Disassembly and Sanitization of Classified Matter

    International Nuclear Information System (INIS)

The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DOD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components required long-term storage at Sandia and at many other locations throughout the DOE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options existed for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, and auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high quality materials, and once the material has been sanitized, the demand for the component metals for recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team, and the DSO is responding to these requests by expanding its scope to include Work-for-Other projects.

  13. Semantic Features for Classifying Referring Search Terms

    Energy Technology Data Exchange (ETDEWEB)

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.

    2012-05-11

When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP-lookup methods, which are susceptible to obfuscation by dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.

  14. Combining supervised classifiers with unlabeled data

    Institute of Scientific and Technical Information of China (English)

    刘雪艳; 张雪英; 李凤莲; 黄丽霞

    2016-01-01

Ensemble learning is a widely studied topic. Traditional ensemble techniques seek better results from labeled data and base classifiers; they fail to address the ensemble task where only unlabeled data are available. A label propagation based ensemble (LPBE) approach is proposed to further combine base classification results with unlabeled data. First, a graph is constructed by taking unlabeled data as vertexes, and the weights in the graph are calculated by a correntropy function. Average prediction results are obtained from the base classifiers and then propagated under a regularization framework and adaptively enhanced over the graph. The proposed approach is further enriched when a small amount of labeled data is available. The proposed algorithms are evaluated on several UCI benchmark data sets. Simulation results show that the proposed algorithms achieve satisfactory performance compared with existing ensemble methods.
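The propagation step can be sketched as below: averaged base-classifier scores are smoothed over a similarity graph under a regularization weight. The RBF similarity and the value of alpha are illustrative assumptions; the paper's correntropy-based weights and enhancement step are not reproduced here.

```python
import math

def propagate(points, base_scores, alpha=0.5, iters=50, gamma=1.0):
    """points: list of feature tuples; base_scores: initial class-1
    scores in [0, 1], averaged from base classifiers.
    Returns the scores after graph-regularized propagation."""
    n = len(points)
    # Similarity graph with RBF weights (zero self-weight)
    w = [[math.exp(-gamma * sum((a - b) ** 2
                                for a, b in zip(points[i], points[j])))
          if i != j else 0.0
          for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in w]
    f = list(base_scores)
    for _ in range(iters):
        # Blend each node's initial score with its neighbors' current scores
        f = [(1 - alpha) * base_scores[i]
             + alpha * sum(w[i][j] * f[j] for j in range(n)) / deg[i]
             for i in range(n)]
    return f
```

Points whose averaged base score disagrees with their neighborhood are pulled toward the consensus of nearby points, which is how unlabeled structure corrects the base classifiers.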

  15. Comparing cosmic web classifiers using information theory

    Science.gov (United States)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  16. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren

    2008-01-01

An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or to monitor illness and welfare can be foreseen ... This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three ...), which involves 30 min for each activity. The results show that feeding and lateral/sternal lying activities are best recognized; walking and rooting activities are mostly recognized by a specific axis corresponding to the direction of the sow's movement while performing the activity (horizontal sidewise ...)

  17. Classifying bed inclination using pressure images.

    Science.gov (United States)

    Baran Pouyan, M; Ostadabbas, S; Nourani, M; Pompeo, M

    2014-01-01

    Pressure ulcer is one of the most prevalent problems for bed-bound patients in hospitals and nursing homes. Pressure ulcers are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers. Specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system. Then, by applying a number of image processing and machine learning techniques, the approximate degree of bed is estimated and classified. The proposed algorithm was tested on 15 subjects with various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.

  18. Comparing cosmic web classifiers using information theory

    CERN Document Server

    Leclercq, Florent; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  19. Improving 2D Boosted Classifiers Using Depth LDA Classifier for Robust Face Detection

    Directory of Open Access Journals (Sweden)

    Mahmood Rahat

    2012-05-01

Full Text Available Face detection plays an important role in Human-Robot Interaction. Many of the services provided by robots depend on face detection. This paper presents a novel face detection algorithm which uses depth data to improve the efficiency of a boosted classifier on 2D data and reduce false positive alarms. The proposed method uses two levels of cascade classifiers. The classifiers of the first level deal with 2D data, and the classifiers of the second level use depth data captured by a stereo camera. The first level employs a conventional cascade of boosted classifiers, which eliminates many of the non-face sub-windows. The remaining sub-windows are used as input to the second level. After calculating the corresponding depth model of the sub-windows, a heuristic classifier along with a Linear Discriminant Analysis (LDA) classifier is applied to the depth data to reject the remaining non-face sub-windows. The experimental results of the proposed method, using a Bumblebee-2 stereo vision system on a mobile platform for real-time detection of human faces in natural cluttered environments, reveal a significant reduction in the false positive alarms of the 2D face detector.
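The two-level structure can be outlined schematically: a fast 2D stage filters most sub-windows, and survivors must also pass a depth-based stage. Both stage functions in the sketch are illustrative stand-ins, not the trained boosted or LDA classifiers.

```python
def cascade(windows, stage_2d, stage_depth):
    """Return the sub-windows accepted by both cascade levels.
    stage_2d and stage_depth are predicates (window -> bool)."""
    faces = []
    for win in windows:
        if not stage_2d(win):      # cheap 2D test rejects most candidates
            continue
        if stage_depth(win):       # depth model rejects the leftovers
            faces.append(win)
    return faces
```

The ordering matters for efficiency: the expensive depth model only runs on the small fraction of sub-windows that survive the 2D boosted stage.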

  20. Use Restricted - Classified information sharing, case NESA

    OpenAIRE

    El-Bash, Amira

    2015-01-01

This thesis is written for the Laurea University of Applied Sciences under the Bachelor's Degree in Security Management. The empirical research of the thesis was supported by the National Emergency Supply Agency as a case study in classified information sharing in the organization. The National Emergency Supply Agency was chosen for the research because of its social significance and distinctively wide field of operation. As one of the country's administrative actors, its range of tasks in ...

  1. Comparing cosmic web classifiers using information theory

    OpenAIRE

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative perf...

  2. Deterministic Pattern Classifier Based on Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    LI Jian-wu; LI Min-qiang; KOU Ji-song

    2001-01-01

This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. Compared and contrasted with traditional methods for deterministic pattern classifiers, the method applies to both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the result reveals that this system is effective and practical.

  3. COMBINED CLASSIFIER FOR WEBSITE MESSAGES FILTRATION

    OpenAIRE

    TARASOV VENIAMIN; MEZENCEVA EKATERINA; KARBAEV DANILA

    2015-01-01

The paper describes a new approach to website message filtration using a combined classifier. Information security standards for internet resources require user data protection; however, the increasing volume of spam messages in interactive sections of websites poses a special problem. Unlike many email filtering solutions, the proposed approach is based on an effective combination of the Bayes and Fisher methods, which allows us to build an accurate and stable spam filter. In this paper we conside...
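One common way to combine per-token Bayes probabilities with Fisher's method (as in Robinson-style filters) is sketched below. The token spam probabilities would come from a trained Bayes model; here they are assumed inputs in (0, 1), and this is a generic illustration rather than the paper's exact combination.

```python
import math

def fisher_combine(probs):
    """Fisher's method: combine p-values in (0, 1) into the chi-squared
    tail probability (small result = strong joint evidence)."""
    chi2 = -2.0 * sum(math.log(p) for p in probs)
    df = 2 * len(probs)
    # Exact chi-squared survival function for even degrees of freedom
    m = chi2 / 2.0
    term = math.exp(-m)
    total = term
    for i in range(1, df // 2):
        term *= m / i
        total += term
    return min(total, 1.0)

def spam_indicator(token_spam_probs):
    """Combined indicator in [0, 1]: 0.5 is neutral, near 1 is spammy."""
    s = fisher_combine(list(token_spam_probs))             # high if spammy
    h = fisher_combine([1.0 - p for p in token_spam_probs])  # high if hammy
    return (1.0 + s - h) / 2.0
```

Because Fisher's method reacts strongly to a few extreme tokens while staying near 0.5 for ambiguous ones, the combined score tends to be more stable than a plain product of naive Bayes probabilities.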

  4. Tattoo removal.

    Science.gov (United States)

    Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe

    2011-01-01

Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as well as the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The first evidence of tattoo removal attempts was found in Egyptian mummies, dated to have lived 4,000 years BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. In addition to their selective absorption by the pigment, the extremely short pulse duration of Q-switched lasers has made them the gold standard for tattoo removal. PMID:21865802

  6. Learnability of min-max pattern classifiers

    Science.gov (United States)

    Yang, Ping-Fai; Maragos, Petros

    1991-11-01

This paper introduces the class of thresholded min-max functions and studies their learning under the probably approximately correct (PAC) model introduced by Valiant. These functions can be used as pattern classifiers of both real-valued and binary-valued feature vectors. They are a lattice-theoretic generalization of Boolean functions and are also related to three-layer perceptrons and morphological signal operators. Several subclasses of the thresholded min-max functions are shown to be learnable under the PAC model.
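Evaluating a thresholded min-max function is straightforward: take the maximum over terms of the minimum over (possibly complemented) inputs, then threshold. On binary inputs this reduces to a DNF Boolean function, which is the lattice-theoretic generalization the abstract mentions. The particular terms and threshold below are made-up examples.

```python
def min_max(terms, x):
    """terms: list of terms, each a list of (index, complemented) literals.
    Value is the max over terms of the min over that term's literals."""
    def literal(i, comp):
        return 1.0 - x[i] if comp else x[i]
    return max(min(literal(i, c) for i, c in term) for term in terms)

def classify(terms, x, threshold=0.5):
    """Thresholded min-max function used as a binary classifier."""
    return 1 if min_max(terms, x) >= threshold else 0
```

With 0/1 inputs, min acts as AND, max as OR, and complementation as NOT; with real-valued features in [0, 1], the same expression behaves like a fuzzy-logic or morphological operator.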

  7. Classifying LEP Data with Support Vector Algorithms

    CERN Document Server

    Vannerem, P; Schölkopf, B; Smola, A J; Söldner-Rembold, S

    1999-01-01

    We have studied the application of different classification algorithms in the analysis of simulated high energy physics data. Whereas Neural Network algorithms have become a standard tool for data analysis, the performance of other classifiers such as Support Vector Machines has not yet been tested in this environment. We chose two different problems to compare the performance of a Support Vector Machine and a Neural Net trained with back-propagation: tagging events of the type e+e- -> ccbar and the identification of muons produced in multihadronic e+e- annihilation events.

  8. Support Vector classifiers for Land Cover Classification

    CERN Document Server

    Pal, Mahesh

    2008-01-01

    Support vector machines represent a promising development in machine learning research that is not widely used within the remote sensing community. This paper reports results on multispectral (Landsat-7 ETM+) and hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.
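
    The comparison can be sketched in scikit-learn with synthetic stand-ins, since the Landsat-7 ETM+ and DAIS scenes are not reproduced here; QuadraticDiscriminantAnalysis plays the role of the Gaussian maximum-likelihood classifier common in remote sensing (all data and scores below are illustrative):

```python
# Hedged sketch: synthetic "pixels" stand in for the paper's imagery, and QDA
# stands in for the Gaussian maximum-likelihood classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n, bands = 150, 6                        # 150 pixels per class, 6 spectral bands
means = [np.full(bands, m) for m in (0.2, 0.5, 0.8)]
X = np.vstack([rng.normal(m, 0.08, (n, bands)) for m in means])
y = np.repeat([0, 1, 2], n)

svm = SVC(kernel="rbf", C=10).fit(X, y)
mlc = QuadraticDiscriminantAnalysis().fit(X, y)
print(svm.score(X, y), mlc.score(X, y))  # both near 1.0 on this easy data
```

    On well-separated data both methods score highly; the paper's point is that the gap favors the SVM as training sets shrink and dimensionality grows.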

  9. Classifying spaces of degenerating polarized Hodge structures

    CERN Document Server

    Kato, Kazuya

    2009-01-01

    In 1970, Phillip Griffiths envisioned that points at infinity could be added to the classifying space D of polarized Hodge structures. In this book, Kazuya Kato and Sampei Usui realize this dream by creating a logarithmic Hodge theory. They use the logarithmic structures begun by Fontaine-Illusie to revive nilpotent orbits as a logarithmic Hodge structure. The book focuses on two principal topics. First, Kato and Usui construct the fine moduli space of polarized logarithmic Hodge structures with additional structures. Even for a Hermitian symmetric domain D, the present theory is a refinement...

  10. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  11. Accurately Classifying Data Races with Portend

    OpenAIRE

    Kasikci, Baris; Zamfir, Cristian; Candea, George

    2011-01-01

    Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Eliminating all data races from programs is impractical (e.g., system performance could suffer severely), yet spotting just the harmful ones is like finding a needle in a haystack: state-of-the-art data race detectors and classifiers suffer from high false positive rates of 37%–84%. We present Portend, a technique and system for automatically triaging suspect data races based on ...

  12. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    Science.gov (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  13. Efficacy of various root canal irrigants on removal of smear layer in the primary root canals after hand instrumentation: A scanning electron microscopy study

    OpenAIRE

    Hariharan V; Nandlal B; Srilatha K

    2010-01-01

    Aim: The purpose of this in-vitro study is to determine the efficacy of various irrigants in removing the smear layer in primary teeth root canals after hand instrumentation. Materials and Methods: The present study consisted of 30 human primary incisors which were sectioned at the cementoenamel junction horizontally. The specimens were divided randomly into four experimental and one control group having six teeth each and each group was treated with the specific irrigant. 5.25% NaOCl,...

  14. Objectively classifying Southern Hemisphere extratropical cyclones

    Science.gov (United States)

    Catto, Jennifer

    2016-04-01

    There has been a long tradition of attempting to separate extratropical cyclones into different classes depending on their cloud signatures, airflows, synoptic precursors, or upper-level flow features. Depending on these features, the cyclones may have different impacts, for example in their precipitation intensity. It is important, therefore, to understand how the distribution of different cyclone classes may change in the future. Many of the previous classifications have been performed manually. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850 hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first detected) has been used to objectively classify the identified cyclones. The results are compared to the manual classification of Sinclair and Revell (2000), and the four objectively identified classes shown in this presentation are found to match well. The relative importance of diabatic heating in the clusters is investigated, as well as the differing precipitation characteristics. The success of the objective classification shows its utility in climate model evaluation and climate change studies.

  15. Cross-classified occupational exposure data.

    Science.gov (United States)

    Jones, Rachael M; Burstyn, Igor

    2016-09-01

    We demonstrate the regression analysis of exposure determinants using cross-classified random effects in the context of lead exposures resulting from blasting surfaces in advance of painting. We had three specific objectives for analysis of the lead data, and observed: (1) high within-worker variability in personal lead exposures, explaining 79% of variability; (2) that the lead concentration outside of half-mask respirators was 2.4-fold higher than inside supplied-air blasting helmets, suggesting that the exposure reduction by blasting helmets may be lower than expected by the Assigned Protection Factor; and (3) that lead concentrations at fixed area locations in containment were not associated with personal lead exposures. In addition, we found that, on average, lead exposures among workers performing blasting and other activities were 40% lower than among workers performing only blasting. In the process of meeting these analysis objectives, we determined that the data were non-hierarchical: repeated exposure measurements were collected for a worker while the worker was a member of several groups, or cross-classified among groups. Since the worker is a member of multiple groups, the exposure data do not adhere to the traditionally assumed hierarchical structure. Forcing a hierarchical structure on these data led to similar within-group and between-group variability, but decreased precision in the estimate of the effect of work activity on lead exposure. We hope hygienists and exposure assessors will consider non-hierarchical models in the design and analysis of exposure assessments. PMID:27029937
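
    The variance split reported in objective (1) can be illustrated with a small simulation and a naive one-way decomposition (the numbers are invented, not the study's data, and the crossed-group structure itself is omitted):

```python
# Simulation sketch: repeated log-exposures per worker, with most variance
# placed within worker by construction; a simple moment estimate recovers
# the within-worker share of total variance.
import numpy as np

rng = np.random.default_rng(42)
n_workers, n_reps = 30, 20
worker_effect = rng.normal(0.0, 0.5, n_workers)          # between-worker sd 0.5
data = worker_effect[:, None] + rng.normal(0.0, 1.0, (n_workers, n_reps))

within = data.var(axis=1, ddof=1).mean()                 # avg within-worker variance
between = data.mean(axis=1).var(ddof=1) - within / n_reps
share = within / (within + between)
print(f"within-worker share of variance: {share:.0%}")   # close to 80% by design
```

    With true variances 1.0 (within) and 0.25 (between), the within share is 1.0/1.25 = 80%, mirroring the 79% reported in the abstract.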

  16. A systematic comparison of supervised classifiers.

    Directory of Open Access Journals (Sweden)

    Diego Raphael Amancio

    Full Text Available Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without an in-depth knowledge about the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently allowed the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to its default parameter configuration.
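
    The study's protocol translates naturally to scikit-learn (an assumed stand-in for Weka; the dataset and parameter grid are invented): score several classifiers with default parameters, then tune the SVM, the one method the authors found to benefit most from tuning:

```python
# Sketch of the comparison protocol: default-parameter classifiers first,
# then a grid search over C and gamma for the SVM.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, n_informative=4,
                           random_state=1)
for clf in (KNeighborsClassifier(), DecisionTreeClassifier(random_state=1),
            GaussianNB(), SVC()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))

tuned = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100],
                             "gamma": [0.001, 0.01, 0.1, 1]}, cv=5).fit(X, y)
print("tuned SVC", round(tuned.best_score_, 3))
```

    The relative rankings on any particular dataset will differ from the paper's Weka results; the point is the workflow, not the scores.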

  17. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by Codon Structure Factor (CSF) and by a method that we called Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
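
    Two of the five UFM ingredients, stop-codon scarcity and purine bias in the first codon position, are easy to show in isolation (toy thresholds and sequences; the full method combines all five terms with a calibrated score):

```python
# Toy illustration of two UFM ingredients; thresholds and sequences are
# invented for demonstration, not taken from the paper.
STOPS = {"TAA", "TAG", "TGA"}

def frame_stats(seq, frame=0):
    """Stop-codon frequency and purine fraction at codon position 1."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    stop_freq = sum(c in STOPS for c in codons) / len(codons)
    purine1 = sum(c[0] in "AG" for c in codons) / len(codons)
    return stop_freq, purine1

def looks_coding(seq, max_stop=0.01, min_purine1=0.55):
    stop_freq, purine1 = frame_stats(seq)
    return stop_freq <= max_stop and purine1 >= min_purine1

coding_like = "ATGGCTGAAGGTAAAGCTGGT" * 10   # purine-rich 1st positions, no stops
assert looks_coding(coding_like)
assert not looks_coding("TAATAGTGA" * 20)    # stop-rich junk sequence
```

    Real CDS/intron discrimination needs the remaining terms, in particular the GC3 vs. GC2 distance, to reach the error rates quoted above.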

  18. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on the NEFCLASS model, which was modified. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. Efficiency of modifications in the classifier was compared with methods used in the original NEFCLASS model (learning methods). Accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.

  19. Hair removal

    DEFF Research Database (Denmark)

    Haedersdal, Merete; Haak, Christina S

    2011-01-01

    Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths... suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair..., white skin, and a normal hormonal status. Currently, no method of lifelong permanent hair eradication is available, and it is important that patients have realistic expectations. Substantial evidence has been found for short-term hair removal efficacy of up to 6 months after treatment with the available...

  20. Quantum Hooke's law to classify pulse laser induced ultrafast melting.

    Science.gov (United States)

    Hu, Hao; Ding, Hepeng; Liu, Feng

    2015-02-03

    Ultrafast crystal-to-liquid phase transition induced by femtosecond pulse laser excitation is an interesting material's behavior manifesting the complexity of light-matter interaction. There exist two types of such phase transitions: one occurs at a time scale shorter than a picosecond via a nonthermal process mediated by electron-hole plasma formation; the other at a longer time scale via a thermal melting process mediated by electron-phonon interaction. However, it remains unclear what material would undergo which process and why. Here, by exploiting the property of quantum electronic stress (QES) governed by quantum Hooke's law, we classify the transitions by two distinct classes of materials: the faster nonthermal process can only occur in materials like ice having an anomalous phase diagram characterized with dTm/dP < 0, where Tm is the melting temperature and P is pressure, above a high threshold laser fluence; while the slower thermal process may occur in all materials. Especially, the nonthermal transition is shown to be induced by the QES, acting like a negative internal pressure, which drives the crystal into a "super pressing" state to spontaneously transform into a higher-density liquid phase. Our findings significantly advance fundamental understanding of ultrafast crystal-to-liquid phase transitions, enabling quantitative a priori predictions.

  1. Classifying antiarrhythmic actions: by facts or speculation.

    Science.gov (United States)

    Vaughan Williams, E M

    1992-11-01

    Classification of antiarrhythmic actions is reviewed in the context of the results of the Cardiac Arrhythmia Suppression Trials, CAST 1 and 2. Six criticisms of the classification recently published (The Sicilian Gambit) are discussed in detail. The alternative classification, when stripped of speculative elements, is shown to be similar to the original classification. Claims that the classification failed to predict the efficacy of antiarrhythmic drugs for the selection of appropriate therapy have been tested by an example. The antiarrhythmic actions of cibenzoline were classified in 1980. A detailed review of confirmatory experiments and clinical trials during the past decade shows that predictions made at the time agree with subsequent results. Classification of the effects drugs actually have on functioning cardiac tissues provides a rational basis for finding the preferred treatment for a particular arrhythmia in accordance with the diagnosis.

  2. Human Segmentation Using Haar-Classifier

    Directory of Open Access Journals (Sweden)

    Dharani S

    2014-07-01

    Full Text Available Segmentation is an important process in many aspects of multimedia applications. Fast and accurate segmentation of moving objects in video sequences is a basic task in many computer vision and video analysis applications. Human detection in particular is an active research area in computer vision. Segmentation is very useful for tracking and recognizing objects in a moving clip. The motion segmentation problem is studied and the most important techniques are reviewed. We illustrate some common methods for segmenting moving objects, including background subtraction, temporal segmentation and edge detection. Contour and threshold methods are also common for segmenting objects in a moving clip. These methods are widely exploited for moving object segmentation in many video surveillance applications, such as traffic monitoring and human motion capture. In this paper, a Haar classifier is used to detect humans in a moving video clip, using features such as face detection, eye detection, and full-body, upper-body and lower-body detection.
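
    The trained cascades themselves ship as XML files (e.g. with OpenCV), but the core mechanism is easy to show self-contained: via an integral image, any rectangle sum, and hence any Haar-like feature, costs a constant number of lookups (sketch with an invented test image):

```python
# Integral-image sketch of Haar-like features: a rectangle sum is four
# lookups, and a two-rectangle feature is two such sums.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r0, c0, h, w):
    """Left-minus-right two-rectangle Haar-like feature."""
    left = rect_sum(ii, r0, c0, r0 + h, c0 + w // 2)
    right = rect_sum(ii, r0, c0 + w // 2, r0 + h, c0 + w)
    return left - right

img = np.zeros((8, 8)); img[:, :4] = 1.0    # bright left half, dark right half
ii = integral_image(img)
assert rect_sum(ii, 0, 0, 8, 8) == 32.0
assert two_rect_feature(ii, 0, 0, 8, 8) == 32.0   # strong vertical-edge response
```

    A trained cascade thresholds thousands of such features in stages; in practice one loads a ready-made cascade (for example via OpenCV's `cv2.CascadeClassifier`) rather than computing features by hand.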

  3. A headband for classifying human postures.

    Science.gov (United States)

    Aloqlah, Mohammed; Lahiji, Rosa R; Loparo, Kenneth A; Mehregany, Mehran

    2010-01-01

    A real-time method using only accelerometer data is developed for classifying basic human static postures, namely sitting, standing, and lying, as well as dynamic transitions between them. The algorithm uses discrete wavelet transform (DWT) in combination with a fuzzy logic inference system (FIS). Data from a single three-axis accelerometer integrated into a wearable headband is transmitted wirelessly, collected and analyzed in real time on a laptop computer, to extract two sets of features for posture classification. The received acceleration signals are decomposed using the DWT to extract the dynamic features; changes in the smoothness of the signal that reflect a transition between postures are detected at finer DWT scales. FIS then uses the previous posture transition and DWT-extracted features to determine the static postures. PMID:21097190
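
    The role of the DWT detail coefficients can be sketched with a single-level Haar transform in plain numpy (a stand-in for the paper's multiscale DWT; the signal is synthetic): an abrupt posture change produces one large detail coefficient at the transition.

```python
# Single-level Haar DWT sketch: pairwise averages (approximation) and
# differences (detail); the detail band localizes abrupt changes.
import numpy as np

def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Synthetic vertical-axis acceleration: "lying" (~0 g), then "standing" (~1 g)
# starting at sample 31.
signal = np.concatenate([np.zeros(31), np.ones(33)])
_, detail = haar_dwt(signal)
k = int(np.argmax(np.abs(detail)))
print("posture transition near sample", 2 * k)   # the lone nonzero detail pair
```

    The fuzzy inference stage then maps such detail-band events, together with the previous posture, to a static posture label.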

  4. Classifying and ranking DMUs in interval DEA

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-peng; WU Yu-hua; LI Wen-hua

    2005-01-01

    During efficiency evaluation by DEA, the inputs and outputs of DMUs may be intervals because of insufficient information or measurement error. For this reason, interval DEA is proposed. To make the efficiency scores more discriminative, this paper builds an Interval Modified DEA (IMDEA) model based on MDEA. Furthermore, models for obtaining upper and lower bounds of the efficiency scores for each DMU are set up. Based on this, the DMUs are classified into three types. Next, a new order relation between intervals which can express the DM's preference for the three types is proposed. As a result, a full and more convincing ranking is made on all the DMUs. Finally an example is given.
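
    The three-type classification can be sketched once efficiency intervals are in hand (the type labels below are invented shorthand; the IMDEA model that produces the bounds is not reproduced):

```python
# Hedged sketch of the final stage: split DMUs by their efficiency
# interval [lo, hi] into three types, then rank within types.

def classify_dmu(lo, hi):
    """Classify a DMU from its efficiency interval (labels are invented)."""
    if lo >= 1.0:
        return "E+"    # efficient under every data realization
    if hi < 1.0:
        return "E-"    # inefficient under every data realization
    return "E?"        # efficiency depends on the realization

# A full ranking can then order DMUs, e.g. by interval midpoint within type.
assert classify_dmu(1.0, 1.0) == "E+"
assert classify_dmu(0.6, 0.9) == "E-"
assert classify_dmu(0.8, 1.1) == "E?"
```

    The paper's order relation on intervals refines this by encoding the decision maker's preference among the three types.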

  5. Combining Heterogeneous Classifiers for Relational Databases

    CERN Document Server

    Manjunatha, Geetha; Sitaram, Dinkar

    2012-01-01

    Most enterprise data is distributed in multiple relational databases with expert-designed schema. Using traditional single-table machine learning techniques over such data not only incurs a computational penalty for converting to a 'flat' form (mega-join), but also loses the human-specified semantic information present in the relations. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide and conquer approach. We propose a recursive, prediction aggregation technique over heterogeneous classifiers applied on individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely TPCH, PKDD and UCI benchmarks, and showed considerable reduction in classification time without any loss of prediction accuracy.
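
    The two-phase idea can be sketched with scikit-learn stand-ins and an invented two-table schema: phase one trains one classifier per table, phase two aggregates their per-row class probabilities (simple averaging here, in place of the paper's recursive aggregation):

```python
# Sketch under assumptions: two synthetic "tables" share row keys; one
# classifier per table, probabilities averaged for the final prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
table_a = y[:, None] + rng.normal(0, 1.0, (n, 3))   # features from table A
table_b = y[:, None] + rng.normal(0, 1.0, (n, 2))   # features from table B

# Phase 1: one classifier per table (they could be heterogeneous).
clf_a = LogisticRegression().fit(table_a, y)
clf_b = LogisticRegression().fit(table_b, y)

# Phase 2: aggregate per-table class probabilities instead of mega-joining.
proba = (clf_a.predict_proba(table_a) + clf_b.predict_proba(table_b)) / 2
pred = proba.argmax(axis=1)
print("combined accuracy:", (pred == y).mean())
```

    Because no join is materialized, each table's classifier sees only its own columns, which is where the claimed time savings come from.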

  6. A cognitive approach to classifying perceived behaviors

    Science.gov (United States)

    Benjamin, Dale Paul; Lyons, Damian

    2010-04-01

    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  7. Learning Vector Quantization for Classifying Astronomical Objects

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The sizes of astronomical surveys in different wavebands are increasing rapidly. Therefore, automatic classification of objects is becoming ever more important. We explore the performance of learning vector quantization (LVQ) in classifying multi-wavelength data. Our analysis concentrates on separating active sources from non-active ones. Different classes of X-ray emitters populate distinct regions of a multidimensional parameter space. In order to explore the distribution of various objects in a multidimensional parameter space, we positionally cross-correlate the data of quasars, BL Lacs, active galaxies, stars and normal galaxies in the optical, X-ray and infrared bands. We then apply LVQ to classify them with the obtained data. Our results show that LVQ is an effective method for separating AGNs from stars and normal galaxies with multi-wavelength data.
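
    The LVQ1 update at the heart of such a classifier is compact: the nearest prototype moves toward a sample of its own class and away from one of another class (sketch with invented two-band data):

```python
# Minimal LVQ1 sketch: the winning prototype is attracted to same-class
# samples and repelled by other-class samples (learning rate alpha).
import numpy as np

def lvq1_step(protos, proto_labels, x, label, alpha=0.2):
    j = int(np.argmin(((protos - x) ** 2).sum(axis=1)))   # nearest prototype
    sign = 1.0 if proto_labels[j] == label else -1.0
    protos[j] += sign * alpha * (x - protos[j])
    return j

protos = np.array([[0.0, 0.0], [1.0, 1.0]])   # one prototype per class
labels = ["star", "AGN"]
x = np.array([0.9, 0.8])                       # an "AGN-like" sample
before = np.linalg.norm(protos[1] - x)
lvq1_step(protos, labels, x, "AGN")
after = np.linalg.norm(protos[1] - x)
assert after < before    # winning same-class prototype moved closer
```

    After training, classification is just nearest-prototype lookup, which scales well to large survey catalogues.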

  8. A Spiking Neural Learning Classifier System

    CERN Document Server

    Howard, Gerard; Lanzi, Pier-Luca

    2012-01-01

    Learning Classifier Systems (LCS) are population-based reinforcement learners used in a wide variety of applications. This paper presents a LCS where each traditional rule is represented by a spiking neural network, a type of network with dynamic internal state. We employ a constructivist model of growth of both neurons and dendrites that realise flexible learning by evolving structures of sufficient complexity to solve a well-known problem involving continuous, real-valued inputs. Additionally, we extend the system to enable temporal state decomposition. By allowing our LCS to chain together sequences of heterogeneous actions into macro-actions, it is shown to perform optimally in a problem where traditional methods can fail to find a solution in a reasonable amount of time. Our final system is tested on a simulated robotics platform.

  9. Classifying prion and prion-like phenomena.

    Science.gov (United States)

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  10. Automatic Fracture Detection Using Classifiers- A Review

    Directory of Open Access Journals (Sweden)

    S.K.Mahendran

    2011-11-01

    Full Text Available X-ray is one of the oldest and most frequently used imaging devices, making images of any bone in the body, including the hand, wrist, arm, elbow, shoulder, foot, ankle, leg (shin), knee, thigh, hip, pelvis or spine. A typical bone ailment is the fracture, which occurs when the bone cannot withstand an outside force such as direct blows, twisting injuries and falls. Fractures are cracks in bones and are defined as a medical condition in which there is a break in the continuity of the bone. Detection and correct treatment of fractures are considered important, as a wrong diagnosis often leads to ineffective patient management, increased dissatisfaction and expensive litigation. The main focus of this paper is a review study that discusses various classification algorithms that can be used to classify x-ray images as normal or fractured.

  11. Classifying supernovae using only galaxy data

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Mandel, Kaisey [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-12-01

    We present a new method for probabilistically classifying supernovae (SNe) without using SN spectral or photometric data. Unlike all previous studies to classify SNe without spectra, this technique does not use any SN photometry. Instead, the method relies on host-galaxy data. We build upon the well-known correlations between SN classes and host-galaxy properties, specifically that core-collapse SNe rarely occur in red, luminous, or early-type galaxies. Using the nearly spectroscopically complete Lick Observatory Supernova Search sample of SNe, we determine SN fractions as a function of host-galaxy properties. Using these data as inputs, we construct a Bayesian method for determining the probability that an SN is of a particular class. This method improves a common classification figure of merit by a factor of >2, comparable to the best light-curve classification techniques. Of the galaxy properties examined, morphology provides the most discriminating information. We further validate this method using SN samples from the Sloan Digital Sky Survey and the Palomar Transient Factory. We demonstrate that this method has wide-ranging applications, including separating different subclasses of SNe and determining the probability that an SN is of a particular class before photometry or even spectra can be obtained. Since this method uses completely independent data from light-curve techniques, there is potential to further improve the overall purity and completeness of SN samples and to test systematic biases of the light-curve techniques. Further enhancements to the host-galaxy method, including additional host-galaxy properties, combination with light-curve methods, and hybrid methods, should further improve the quality of SN samples from past, current, and future transient surveys.
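
    The Bayesian step itself is a one-liner per class: P(class | host) is proportional to P(host properties | class) times P(class). A sketch with invented fractions (the paper measures these from the Lick sample):

```python
# Bayes-rule sketch with made-up numbers, not the paper's measured fractions.
def classify_sn(priors, likelihoods, host):
    """Posterior over SN classes given a host-galaxy property value."""
    post = {c: priors[c] * likelihoods[c][host] for c in priors}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

priors = {"Ia": 0.6, "CC": 0.4}                      # overall SN fractions
likelihoods = {                                       # P(host type | class)
    "Ia": {"early-type": 0.5, "late-type": 0.5},
    "CC": {"early-type": 0.05, "late-type": 0.95},
}
post = classify_sn(priors, likelihoods, "early-type")
assert post["Ia"] > 0.9   # core-collapse SNe rarely occur in early types
```

    With several host properties, the likelihood factors multiply (assuming independence), which is how morphology, color, and luminosity combine into one posterior.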

  12. Comparative evaluation of 15% ethylenediamine tetra-acetic acid plus cetavlon and 5% chlorine dioxide in removal of smear layer: A scanning electron microscope study

    OpenAIRE

    Sandeep Singh; Vimal Arora; Inderpal Majithia; Rakesh Kumar Dhiman; Dinesh Kumar; Amber Ather

    2013-01-01

    Aims: The purpose of this study was to compare the efficacy of smear layer removal by 5% chlorine dioxide and 15% Ethylenediamine Tetra-Acetic Acid plus Cetavlon (EDTAC) from the human root canal dentin. Materials and Methods: Fifty single-rooted human mandibular anterior teeth were divided into two groups of 20 teeth each and a control group of 10 teeth. The root canals were prepared till F3 ProTaper and initially irrigated with 2% sodium hypochlorite followed by 1 min irrigation with 15% ED...

  13. Effect of different final irrigating solutions on smear layer removal in apical third of root canal: A scanning electron microscope study

    OpenAIRE

    Sayesh Vemuri; Sreeha Kaluva Kolanu; Sujana Varri; Ravi Kumar Pabbati; Ramesh Penumaka; Nagesh Bolla

    2016-01-01

    Aim: The aim of this in vitro study is to compare the smear layer removal efficacy of different irrigating solutions at the apical third of the root canal. Materials and Methods: Forty human single-rooted mandibular premolar teeth were taken and decoronated to standardize the canal length to 14 mm. They were prepared by ProTaper rotary system to an apical preparation of file size F3. Prepared teeth were randomly divided into four groups (n = 10); saline (Group 1; negative control), ethyle...

  14. Nevus Removal

    Science.gov (United States)

    ... can be fundamental to improving a patient’s overall psychosocial state. Other reasons to remove a nevus may ... This is not commonly done and presents many risks and challenges.

  15. Comparison of removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid and citric acid in primary teeth: A scanning electron microscopic study

    Science.gov (United States)

    Hegde, Rahul J.; Bapna, Kavita

    2016-01-01

    Background: Root canal irrigants are considered momentous in their tissue dissolving property, eliminating microorganisms, and removing smear layer. The present study was aimed to compare the removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid (EGTA) and citric acid solutions with saline as a control in primary anterior teeth. Materials and Methods: Thirty primary anterior teeth were chosen for the study. The teeth were distributed into three groups having ten teeth each. Following instrumentation, root canals of the first group were treated with 17% EGTA and the second group with 6% citric acid. Only saline was used as an irrigant for the control group. Then, the teeth were subjected to scanning electron microscopy (SEM) study. The scale given by Rome et al. for the smear layer removal was used in the present study. Results: The pictures from the SEM showed that among the tested irrigants, 17% EGTA + 5% sodium hypochlorite (NaOCl) group showed the best results when compared to other groups. Conclusion: The results advocate that the sequential irrigation of the pulp canal walls with 17% EGTA followed by 5% NaOCl produced efficacious and smear-free root canal walls. PMID:27307670

  16. Comparison of removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid and citric acid in primary teeth: A scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Rahul J Hegde

    2016-01-01

    Full Text Available Background: Root canal irrigants are considered momentous in their tissue dissolving property, eliminating microorganisms, and removing smear layer. The present study was aimed to compare the removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid (EGTA) and citric acid solutions with saline as a control in primary anterior teeth. Materials and Methods: Thirty primary anterior teeth were chosen for the study. The teeth were distributed into three groups having ten teeth each. Following instrumentation, root canals of the first group were treated with 17% EGTA and the second group with 6% citric acid. Only saline was used as an irrigant for the control group. Then, the teeth were subjected to scanning electron microscopy (SEM) study. The scale given by Rome et al. for the smear layer removal was used in the present study. Results: The pictures from the SEM showed that among the tested irrigants, 17% EGTA + 5% sodium hypochlorite (NaOCl) group showed the best results when compared to other groups. Conclusion: The results advocate that the sequential irrigation of the pulp canal walls with 17% EGTA followed by 5% NaOCl produced efficacious and smear-free root canal walls.

  17. Gene-expression Classifier in Papillary Thyroid Carcinoma: Validation and Application of a Classifier for Prognostication

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    frozen tissue from 38 patients was collected between the years 1986 and 2009. Validation cohort: formalin-fixed paraffin-embedded tissues were collected from 183 consecutively treated patients. RESULTS: A 17-gene classifier was identified based on the expression values in patients with and without...

  18. Classifying gauge anomalies through SPT orders and classifying anomalies through topological orders

    CERN Document Server

    Wen, Xiao-Gang

    2013-01-01

    In this paper, we systematically study gauge anomalies in bosonic and fermionic weak-coupling gauge theories with gauge group G (which can be continuous or discrete). We argue that, in d space-time dimensions, the gauge anomalies are described by the elements in Free[H^{d+1}(G,R/Z)] \oplus H_\pi^{d+1}(BG,R/Z). The well known Adler-Bell-Jackiw anomalies are classified by the free part of the group cohomology class H^{d+1}(G,R/Z) of the gauge group G (denoted as Free[H^{d+1}(G,R/Z)]). We refer to other kinds of gauge anomalies beyond Adler-Bell-Jackiw anomalies as non-ABJ gauge anomalies, which include the Witten SU(2) global gauge anomaly. We introduce a notion of \pi-cohomology group, H_\pi^{d+1}(BG,R/Z), for the classifying space BG, which is an Abelian group and includes Tor[H^{d+1}(G,R/Z)] and the topological cohomology group H^{d+1}(BG,R/Z) as subgroups. We argue that H_\pi^{d+1}(BG,R/Z) classifies the bosonic non-ABJ gauge anomalies, and partially classifies fermionic non-ABJ anomalies. We also show a very close rel...

  19. Classifying gauge anomalies through symmetry-protected trivial orders and classifying gravitational anomalies through topological orders

    Science.gov (United States)

    Wen, Xiao-Gang

    2013-08-01

    In this paper, we systematically study gauge anomalies in bosonic and fermionic weak-coupling gauge theories with gauge group G (which can be continuous or discrete) in d space-time dimensions. We show a very close relation between gauge anomalies for gauge group G and symmetry-protected trivial (SPT) orders (also known as symmetry-protected topological (SPT) orders) with symmetry group G in one-higher dimension. The SPT phases are classified by group cohomology class Hd+1(G,R/Z). Through a more careful consideration, we argue that the gauge anomalies are described by the elements in Free[Hd+1(G,R/Z)]⊕Hπd+1(BG,R/Z). The well known Adler-Bell-Jackiw anomalies are classified by the free part of Hd+1(G,R/Z) (denoted as Free[Hd+1(G,R/Z)]). We refer to other kinds of gauge anomalies beyond Adler-Bell-Jackiw anomalies as non-ABJ gauge anomalies, which include Witten SU(2) global gauge anomalies. We introduce a notion of π-cohomology group, Hπd+1(BG,R/Z), for the classifying space BG, which is an Abelian group and includes Tor[Hd+1(G,R/Z)] and topological cohomology group Hd+1(BG,R/Z) as subgroups. We argue that Hπd+1(BG,R/Z) classifies the bosonic non-ABJ gauge anomalies and partially classifies fermionic non-ABJ anomalies. Using the same approach that shows gauge anomalies to be connected to SPT phases, we can also show that gravitational anomalies are connected to topological orders (i.e., patterns of long-range entanglement) in one-higher dimension.

  20. Making classifying selectors work for foam elimination in the activated-sludge process.

    Science.gov (United States)

    Parker, Denny; Geary, Steve; Jones, Garr; McIntyre, Lori; Oppenheim, Stuart; Pedregon, Vick; Pope, Rod; Richards, Tyler; Voigt, Christine; Volpe, Gary; Willis, John; Witzgall, Robert

    2003-01-01

    Classifying selectors are used to control the population of foam-causing organisms in activated-sludge plants to prevent the development of nuisance foams. The term, classifying selector, refers to the physical mechanism by which these organisms are selected against; foam-causing organisms are enriched into the solids in the foam and their rapid removal controls their population at low levels in the mixed liquor. Foam-causing organisms are wasted "first" rather than accumulating on the surface of tanks and thereby being wasted "last", which is typical of the process. This concept originated in South Africa, where pilot studies showed that placement of a flotation tank for foam removal prior to secondary clarifiers would eliminate foam-causing organisms. It was later simplified in the United States by using the aeration in aeration tanks or aerated channels coupled with simple baffling and adjustable weirs to make continuous separation of nuisance organisms from the mixed liquor. PMID:12683467

  1. Removal of

    OpenAIRE

    Roohan Rakhshaee; Zahra Zamiraee; Somaieh Baghipour; Mohammad Panahandeh

    2013-01-01

    Background and Objectives: Azolla filiculoides as a non-living fern was used in a batch system to remove "Basic Blue 3", which is a cationic dye and a carcinogenic agent. Materials and Methods: We used a batch system applying certain concentrations of the dye contaminant in the presence of a certain amount of adsorbent under optimum conditions. The main groups present in the Azolla cell wall were evaluated by acidification and alkalization of Azolla's media and then potentiometric titrat...

  2. Hair Removal

    DEFF Research Database (Denmark)

    Hædersdal, Merete

    2011-01-01

    Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair. Evidence has been found for long-term hair removal efficacy beyond 6 months after repetitive treatments with alexandrite, diode, and long-pulse Nd:YAG lasers, whereas the current long-term evidence is sparse for IPL devices. Treatment parameters must be adjusted to patient skin type, and treatment procedures are evolving. Consumer-based treatments with portable home devices are rapidly evolving, and presently include low-level diode lasers and IPL devices.

  3. Removal of

    Directory of Open Access Journals (Sweden)

    Roohan Rakhshaee

    2013-02-01

    Full Text Available Background and Objectives: Azolla filiculoides as a non-living fern was used in a batch system to remove "Basic Blue 3", which is a cationic dye and a carcinogenic agent. Materials and Methods: We used a batch system applying certain concentrations of the dye contaminant in the presence of a certain amount of adsorbent under optimum conditions. The main groups present in the Azolla cell wall were evaluated by acidification and alkalization of Azolla's media, followed by potentiometric titration with standard basic and acidic solutions. Results: It was observed that the removal efficiency of the dye using non-living Azolla, in accordance with the Langmuir isotherms, was 82% for an initial dye concentration of 200 mg/L under reaction conditions of contact time 6 h, pH 6, temperature 25 ˚C, and dose 5 g/L. Qmax (maximum uptake capacity) of the activated Azolla at the three temperatures 5, 25, and 50 ˚C was 0.732, 0.934, and 1.176 mmol/g, respectively. ΔG (Gibbs free energy change) for these temperatures was -0.457, -0.762, and -1.185 kJ/mol, respectively. Conclusion: Removal of Basic Blue 3 using Azolla is an economical and effective method.
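The Langmuir-type uptake reported above can be sketched numerically. The Qmax value is taken from the abstract; the affinity constant b is a purely illustrative assumption, since the paper does not report it.

```python
# Langmuir isotherm sketch: q = Qmax * b * C / (1 + b * C).
# Q_MAX_25C is the 25 degC value reported in the abstract (mmol/g);
# B_ASSUMED is a hypothetical affinity constant, NOT from the paper.

Q_MAX_25C = 0.934   # mmol/g (reported)
B_ASSUMED = 0.5     # L/mmol (illustrative assumption)

def langmuir_uptake(c_eq, q_max=Q_MAX_25C, b=B_ASSUMED):
    """Equilibrium uptake q (mmol/g) at equilibrium concentration c_eq."""
    return q_max * b * c_eq / (1.0 + b * c_eq)
```

Uptake rises monotonically with concentration and saturates at Qmax, which is the qualitative behaviour the reported Qmax values summarize.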

  4. A Neural Network Classifier of Volume Datasets

    CERN Document Server

    Zukić, Dženan; Kolb, Andreas

    2009-01-01

    Many state-of-the-art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets,...
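The 2D histogram feature described above (intensity vs. gradient magnitude) can be sketched as follows; the bin count and the 1-D toy signal are illustrative assumptions, not details from the paper.

```python
# Sketch: build a 2D histogram over (intensity, gradient magnitude)
# for a 1-D signal -- a stand-in for the paper's volume datasets.
# Bin counts and the intensity range are illustrative only.

def gradient_magnitude(samples):
    """Central-difference gradient magnitude for a 1-D signal."""
    g = []
    for i in range(len(samples)):
        lo = samples[max(i - 1, 0)]
        hi = samples[min(i + 1, len(samples) - 1)]
        g.append(abs(hi - lo) / 2.0)
    return g

def histogram_2d(samples, bins=8, max_val=255.0):
    """2D histogram over (intensity, gradient magnitude) bins."""
    grads = gradient_magnitude(samples)
    max_grad = max(grads) or 1.0  # avoid division by zero for flat signals
    hist = [[0] * bins for _ in range(bins)]
    for v, g in zip(samples, grads):
        i = min(int(v / max_val * bins), bins - 1)
        j = min(int(g / max_grad * bins), bins - 1)
        hist[i][j] += 1
    return hist  # flattened, this would be the network's input vector
```
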

  5. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M

    2012-02-01

    Thirty-five percent of all ischemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on information given in the medical notes. It was tested by applying the clinical information to the (TOAST) criteria. One hundred and five patients presented with acute stroke between Jan-Jun 2007. Data was collected on 90 patients. Male to female ratio was 39:51 with age range of 47-93 years. Sixty (67%) patients had total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients, and treatment was not tailored to the mechanism of stroke in some patients.

  6. Stress fracture development classified by bone scintigraphy

    International Nuclear Information System (INIS)

    There is no consensus on classifying stress fractures (SF) appearing on bone scans. The authors present a system of classification based on grading the severity and development of bone lesions by visual inspection, according to three main scintigraphic criteria: focality and size, intensity of uptake compared to adjacent bone, and local medullary extension. Four grades of development (I-IV) were ranked, ranging from ill defined slightly increased cortical uptake to well defined regions with markedly increased uptake extending transversely bicortically. 310 male subjects aged 19-2, suffering for several weeks from leg pain occurring during intensive physical training, underwent bone scans of the pelvis and lower extremities using Tc-99m-MDP. 76% of the scans were positive with 354 lesions, of which 88% were in the mild (I-II) grades and 12% in the moderate (III) and severe (IV) grades. Post-treatment scans were obtained in 65 cases having 78 lesions during 1- to 6-month intervals. Complete resolution was found after 1-2 months in 36% of the mild lesions but in only 12% of the moderate and severe ones, and after 3-6 months in 55% of the mild lesions and 15% of the severe ones. 75% of the moderate and severe lesions showed residual uptake in various stages throughout the follow-up period. Early recognition and treatment of mild SF lesions in this study prevented protracted disability and progression of the lesions and facilitated complete healing

  7. Colorization by classifying the prior knowledge

    Institute of Scientific and Technical Information of China (English)

    DU Weiwei

    2011-01-01

    When a one-dimensional luminance scalar is replaced by a multi-dimensional color vector for every pixel of a monochrome image, the process is called colorization. However, colorization is under-constrained, so prior knowledge must be supplied for the monochrome image. Colorization using an optimization algorithm is an effective approach to this problem, but it cannot handle some images well without repeated experiments to confirm the placement of scribbles. In this paper, a colorization algorithm is proposed that can automatically generate the prior knowledge. The idea is that, first, the prior knowledge is crystallized into a set of points that are automatically extracted by a downsampling-and-upsampling method. These points are then classified and assigned corresponding colors. Finally, the color image is obtained from the colored points of the prior knowledge. Experiments demonstrate that the proposal can not only effectively generate the prior knowledge but also colorize the monochrome image according to the requirements of the user.

  8. Classifying Unidentified Gamma-ray Sources

    CERN Document Server

    Salvetti, David

    2016-01-01

    During its first 2 years of mission, the Fermi-LAT instrument discovered more than 1,800 gamma-ray sources in the 100 MeV to 100 GeV range. Despite the application of advanced techniques to identify and associate the Fermi-LAT sources with counterparts at other wavelengths, about 40% of the LAT sources have no clear identification and remain "unassociated". The purpose of my Ph.D. work has been to pursue a statistical approach to identify the nature of each Fermi-LAT unassociated source. To this aim, we implemented advanced machine learning techniques, such as logistic regression and artificial neural networks, to classify these sources on the basis of all the available gamma-ray information about location, energy spectrum and time variability. These analyses have been used for selecting targets for AGN and pulsar searches and planning multi-wavelength follow-up observations. In particular, we have focused our attention on the search for possible radio-quiet millisecond pulsar (MSP) candidates in the sample of...
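The logistic-regression step mentioned above can be sketched minimally; the feature values, labels, and training hyperparameters below are invented for illustration (the actual analysis used Fermi-LAT spectral and variability parameters).

```python
import math

# Minimal logistic-regression sketch in the spirit of the source
# classification described above. The toy features stand in for
# quantities like spectral curvature (x[0]) and variability (x[1]);
# data, labels, and hyperparameters are illustrative assumptions.

def predict(weights, features):
    """Sigmoid of the affine score: P(class = 1 | features)."""
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], features))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=2000):
    """Plain stochastic gradient ascent on the log-likelihood."""
    weights = [0.0] * (len(data[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(data, labels):
            err = y - predict(weights, x)
            weights[0] += lr * err
            for i, xi in enumerate(x):
                weights[i + 1] += lr * err * xi
    return weights

# toy: "pulsar-like" sources (label 1) have curved spectra and low variability
data = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = [1, 1, 0, 0]
w = train(data, labels)
```
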

  9. MISR Level 2 TOA/Cloud Classifier parameters V003

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Regional Cloud Classifiers, Cloud Shadow Mask, and...

  10. Optimized Radial Basis Function Classifier for Multi Modal Biometrics

    Directory of Open Access Journals (Sweden)

    Anand Viswanathan

    2014-07-01

    Full Text Available Biometric systems can be used for the identification or verification of humans based on their physiological or behavioral features. In these systems, biometric characteristics such as fingerprints, palm-print, iris, or speech can be recorded and compared with stored samples for identification or verification. Multimodal biometrics is more accurate and more resistant to spoofing attacks than single-modal biometric systems. In this study, a multimodal biometric system using fingerprint images and finger-vein patterns is proposed, and an optimized Radial Basis Function (RBF) kernel classifier is proposed to identify authorized users. The features extracted from these modalities are selected by PCA and kernel PCA and combined for classification by the RBF classifier. The parameters of the RBF classifier are optimized using the BAT algorithm with local search. The performance of the proposed classifier is compared with the KNN classifier, naïve Bayesian classifier, and non-optimized RBF classifier.
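The RBF kernel underlying such a classifier can be sketched as a minimal kernel-similarity rule; gamma, the data points, and the nearest-total-similarity decision are illustrative assumptions, not the study's optimized pipeline.

```python
import math

# Sketch of the RBF kernel at the heart of the classifier described
# above: k(x, y) = exp(-gamma * ||x - y||^2). The decision rule below
# (largest total kernel similarity per label) is a toy stand-in for a
# trained RBF classifier; gamma and the points are assumptions.

def rbf_kernel(x, y, gamma=1.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def classify(sample, labelled_points, gamma=1.0):
    """Assign the label whose training points give the largest total
    kernel similarity (a minimal kernel-density style rule)."""
    totals = {}
    for point, label in labelled_points:
        totals[label] = totals.get(label, 0.0) + rbf_kernel(sample, point, gamma)
    return max(totals, key=totals.get)
```
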

  11. Comparative study of smear layer removal by different etching modalities and Er:YAG laser irradiation on the root surface: a scanning electron microscopy study

    International Nuclear Information System (INIS)

    The aim of this study was to compare, by SEM, the effects of citric acid, EDTA, citric acid with tetracycline, and Er:YAG laser irradiation on smear layer removal from the root surface after scaling with manual instruments. Thirty specimens (n=30) of root surface were divided into 6 groups (n=5). The Control Group (G1) was not treated; Group 2 (G2) was conditioned with 24% citric acid gel, pH 1, for 2 minutes; Group 3 (G3) was conditioned with 24% EDTA gel, pH 7, for 2 minutes; Group 4 (G4) was conditioned with 50% citric acid and tetracycline gel, pH 1, for 2 minutes; Group 5 (G5) was irradiated with an Er:YAG laser (2.94 μm), 47 mJ/10 Hz, focused, under water spray for 15 seconds at a fluence of 0.58 J/cm2; Group 6 (G6) was irradiated with an Er:YAG laser (2.94 μm), 83 mJ/10 Hz, focused, under water spray for 15 seconds at a fluence of 1.03 J/cm2. The micrographs were analyzed by scores, followed by statistical analysis with Kruskal-Wallis (p<0.05), H = 20.31. G1 was significantly different from all other groups (28.0); G2 (13.4), G3 (11.7), and G4 (13.6) showed no difference in relation to G5 (20.3) and G6 (6.0), but G6 was significantly different from G5. From the results, it can be concluded that: 1) an intense smear layer was present after scaling and root planing; 2) all treatments (G2-G6) were effective in removing the smear layer, differing significantly from the control, while G2, G3, and G4 were not statistically different from G5 and G6; 3) G6 was more effective than G5 in removing the smear layer, and both produced irregular root surfaces. (author)

  12. Method of generating features optimal to a dataset and classifier

    Energy Technology Data Exchange (ETDEWEB)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    2016-10-18

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  13. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages. PMID:17431300
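The Bayes-theory fusion step described above can be sketched under a conditional-independence assumption; the function below combines two per-classifier posteriors computed with a shared prior, and all numbers are illustrative rather than taken from the paper.

```python
# Naive-Bayes style fusion of two classifier outputs, in the spirit of
# the text/image fusion described above. Each input is a posterior in
# (0, 1) produced by one classifier using the same prior; the fused
# posterior follows from Bayes' rule under conditional independence.
# The prior value is an illustrative assumption.

def fuse(p_text, p_image, prior=0.5):
    """Combine two independent per-classifier posteriors into one."""
    # Convert each posterior back to a likelihood ratio (odds divided
    # by the prior odds), multiply, and re-normalise.
    odds_prior = prior / (1.0 - prior)
    odds_text = p_text / (1.0 - p_text)
    odds_image = p_image / (1.0 - p_image)
    fused_odds = (odds_text / odds_prior) * (odds_image / odds_prior) * odds_prior
    return fused_odds / (1.0 + fused_odds)
```

Two weakly confident agreeing classifiers reinforce each other (fused posterior above either input), while a confident classifier dominates an uncertain one.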

  14. Oxygen-Content-Controllable Graphene Oxide from Electron-Beam-Irradiated Graphite: Synthesis, Characterization, and Removal of Aqueous Lead [Pb(II)].

    Science.gov (United States)

    Bai, Jing; Sun, Huimin; Yin, Xiaojie; Yin, Xianqiang; Wang, Shengsen; Creamer, Anne Elise; Xu, Lijun; Qin, Zhi; He, Feng; Gao, Bin

    2016-09-28

    A high-energy electron beam was applied to irradiate graphite for the preparation of graphene oxide (GO) with a controllable oxygen content. The obtained GO sheets were analyzed with various characterization tools. The results revealed that the oxygen-containing groups of GO increased with increasing irradiation dosages. Hence, oxygen-content-controllable synthesis of GO can be realized by changing the irradiation dosages. The GO sheets with different irradiation dosages were then used to adsorb aqueous Pb(II). The effects of contact time, pH, initial lead ion concentration, and ionic strength on Pb(II) sorption onto different GO sheets were examined. The sorption process was found to be very fast (completed within 20 min) at pH 5.0. Except for ionic strength, which showed little or no effect on lead sorption, all of the other factors affected the sorption of aqueous Pb(II) onto GO. The maximum Pb(II) sorption capacities of GO increased with irradiation dosages, confirming that electron-beam irradiation was an effective way to increase the oxygen content of GO. These results suggested that irradiated GO with a controllable oxygen content is a promising nanomaterial for environmental cleanup, particularly for the treatment of cationic metal ions, such as Pb(II).

  15. Counting, Measuring And The Semantics Of Classifiers

    Directory of Open Access Journals (Sweden)

    Susan Rothstein

    2010-12-01

    Full Text Available This paper makes two central claims. The first is that there is an intimate and non-trivial relation between the mass/count distinction on the one hand and the measure/individuation distinction on the other: a (if not the) defining property of mass nouns is that they denote sets of entities which can be measured, while count nouns denote sets of entities which can be counted. Crucially, this is a difference in grammatical perspective and not in ontological status. The second claim is that the mass/count distinction between two types of nominals has its direct correlate at the level of classifier phrases: classifier phrases like two bottles of wine are ambiguous between a counting, or individuating, reading and a measure reading. On the counting reading, this phrase has count semantics; on the measure reading it has mass semantics.

  16. A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks

    Directory of Open Access Journals (Sweden)

    Tao Geng

    2008-01-01

    Full Text Available A novel 4-class single-trial brain computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, which is called a "parallel BCI." Unlike other BCIs where mental tasks are executed and classified in a serial way one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject's body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers only classifies the mental tasks executed on one side of the subject's body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data were recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can generate a higher accuracy than a conventional 4-class BCI, although both of them have used the same feature selection and classification algorithms.
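The combination step of the parallel BCI can be sketched directly: two binary decisions, one per body side, index one of four classes. The particular bit mapping below is an illustrative assumption.

```python
# Sketch of the "parallel BCI" combination step: two binary LDA
# outputs (one per body side) are merged into a single 4-class label.
# The mapping below (left decision as the high bit) is an illustrative
# assumption about how the two binary decisions index the four classes.

def combine(left_class, right_class):
    """left_class, right_class in {0, 1}; returns a class id in 0..3."""
    return left_class * 2 + right_class

# the four (left, right) decision pairs cover all four classes exactly once
assert sorted(combine(l, r) for l in (0, 1) for r in (0, 1)) == [0, 1, 2, 3]
```
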

  17. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at every time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers comprise a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those samples of gestures that do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  18. Scanning electron microscopy (SEM) evaluation of sealing ability of MTA and EndoSequence as root-end filling materials with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents

    Directory of Open Access Journals (Sweden)

    Bolla Nagesh

    2016-01-01

    Full Text Available Aim: The purpose of this study was to evaluate the sealing ability of mineral trioxide aggregate (MTA) and EndoSequence with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents using scanning electron microscopy (SEM). Materials and Methods: Forty human single-rooted teeth were taken. The crowns were removed and the canals obturated; the root apices were resected and retrograde cavities prepared. Based on the type of retrograde material placed and the type of smear layer removal agent used for the retrograde cavities, the teeth were divided into four groups (N = 10): Group I, chitosan with EndoSequence; Group II, chitosan with MTA; Group III, CMC with EndoSequence; and Group IV, CMC with MTA. All the samples were longitudinally sectioned, and SEM analysis was done for marginal adaptation. Statistical Analysis: Kruskal-Wallis and Mann-Whitney tests. Results: SEM images showed fewer gaps in Group III, i.e., CMC with EndoSequence, when compared to the other groups, with a statistically significant difference. Conclusion: Within the limited scope of this study, it was concluded that EndoSequence as a retrograde material showed better marginal sealing ability.

  19. Study on the Material Foundation of Stasis-removing Chinese Medicine by Electronic Nose Detection

    Institute of Scientific and Technical Information of China (English)

    王光耀; 盛良; 王兴华; 汪宇; Te Kian Keong; Teh Siew Hoon; Ooi Ciat Hui

    2015-01-01

    Objective: To investigate whether there is a common material basis among Chinese medicines with similar effects, and whether the electronic nose can be used to quantify the properties of Chinese medicines. Methods: Twelve kinds of Chinese medicinal herbs, which have the effect of promoting blood circulation and removing blood stasis, were tested by electronic nose. Principal component analysis (PCA) and the characteristic fingerprint were analysed together with the differential index and the discriminant index. Results: The 12 kinds of Chinese medicinal herbs with the function of promoting blood circulation and removing blood stasis had similar PCA maps and characteristic fingerprints. Conclusion: The 12 kinds of Chinese medicines have a common material basis.

  20. Arrhythmia management after device removal.

    Science.gov (United States)

    Nishii, Nobuhiro

    2016-08-01

    Arrhythmia management is needed after removal of cardiac implantable electronic devices (CIEDs). Patients completely dependent on CIEDs need temporary device back-up until new CIEDs are implanted. Various methods are available for device back-up, and the appropriate management varies among patients. The duration from CIED removal to implantation of a new CIED also differs among patients. Temporary pacing is needed for patients with bradycardia, a wearable cardioverter defibrillator (WCD) or catheter ablation is needed for patients with tachyarrhythmia, and sequential pacing is needed for patients dependent on cardiac resynchronization therapy. The present review focuses on arrhythmia management after CIED removal. PMID:27588151

  1. Image Classifying Registration and Dynamic Region Merging

    Directory of Open Access Journals (Sweden)

    Himadri Nath Moulick

    2013-07-01

    Full Text Available In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images locally modifies the intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weights the two mixture components. The registration problem is formulated both as an energy minimization problem and as a Maximum A Posteriori (MAP) estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated at the same time, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the usefulness of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into
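The registration-as-energy-minimization formulation can be sketched in one dimension: a translation is estimated by gradient descent on a sum-of-squared-differences energy. The toy signals and step size are assumptions; the paper's actual model additionally adapts the similarity criterion locally via classification.

```python
# Minimal 1-D "registration as energy minimisation" sketch: estimate a
# translation t aligning a fixed signal to a shifted copy by gradient
# descent on the sum-of-squared-differences (SSD) energy. Signals,
# learning rate, and step count are illustrative assumptions.

def sample(signal, x):
    """Linear interpolation with edge clamping."""
    if x <= 0:
        return signal[0]
    if x >= len(signal) - 1:
        return signal[-1]
    i = int(x)
    frac = x - i
    return signal[i] * (1 - frac) + signal[i + 1] * frac

def register(fixed, moving, lr=0.01, steps=500):
    """Gradient descent on E(t) = sum_i (moving(i + t) - fixed(i))^2."""
    t = 0.0
    for _ in range(steps):
        grad = 0.0
        for i in range(len(fixed)):
            diff = sample(moving, i + t) - fixed[i]
            # finite-difference estimate of the moving signal's slope
            slope = sample(moving, i + t + 0.5) - sample(moving, i + t - 0.5)
            grad += 2.0 * diff * slope
        t -= lr * grad / len(fixed)
    return t

# moving is the fixed triangle translated by +2 index units,
# so the recovered shift should be close to 2
fixed = [float(max(0, 10 - abs(i - 12))) for i in range(25)]
moving = [float(max(0, 10 - abs(i - 14))) for i in range(25)]
```
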

  2. Rule Based Ensembles Using Pair Wise Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    Moslem Mohammadi Jenghara

    2015-03-01

    Full Text Available In value estimation, the average of inexperienced people's estimates is a good approximation to the true value, provided that the answers of these individuals are independent. Classifier ensembles implement this principle in classification tasks, and are investigated in two aspects. In the first, the feature space is divided into several local regions and each region is assigned a highly competent classifier; in the second, the base classifiers are applied in parallel and equally weighted in some way to achieve a group consensus. In this paper a combination of the two methods is used. An important consideration in classifier combination is that much better results can be achieved if diverse classifiers, rather than similar classifiers, are combined. To achieve diversity in the classifier outputs, a symmetric pairwise weighted feature space is used and the outputs of the classifiers trained over the weighted feature space are combined to infer the final result. In this paper MLP classifiers are used as the base classifiers. The experimental results show that the applied method is promising.

  3. Affine Invariant Character Recognition by Progressive Removing

    Science.gov (United States)

    Iwamura, Masakazu; Horimatsu, Akira; Niwa, Ryo; Kise, Koichi; Uchida, Seiichi; Omachi, Shinichiro

    Recognizing characters in scene images suffering from perspective distortion is a challenge. Although there are some methods to overcome this difficulty, they are time-consuming. In this paper, we propose a set of affine invariant features and a new recognition scheme called “progressive removing” that can help reduce the processing time. Progressive removing gradually removes less feasible categories and skew angles by using multiple classifiers. We observed that progressive removing and the use of the affine invariant features reduced the processing time by about 60% in comparison to a trivial one without decreasing the recognition rate.

  4. To fuse or not to fuse: Fuser versus best classifier

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.

    1998-04-01

    A sample from a class defined on a finite-dimensional Euclidean space and distributed according to an unknown distribution is given. The authors are given a set of classifiers, each of which chooses a hypothesis with least misclassification error from a family of hypotheses. They address the question of choosing the classifier with the best performance guarantee versus combining the classifiers using a fuser. They first describe a fusion method based on an isolation property such that the performance guarantee of the fused system is at least as good as that of the best classifier. For the more restricted case of deterministic classes, they present a method based on error set estimation such that the performance guarantee of fusing all classifiers is at least as good as that of fusing any subset of classifiers.
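    The benefit of fusing over picking a single best classifier can be seen with a plain majority-vote fuser (a generic illustration only, not the isolation-property method of the paper): when three classifiers err on different samples, the vote corrects all of them.

```python
import numpy as np

def majority_fuse(predictions):
    """Fuse binary {0,1} predictions from several classifiers by majority vote.

    predictions: array of shape (n_classifiers, n_samples).
    """
    P = np.asarray(predictions)
    votes = P.sum(axis=0)
    return (votes > P.shape[0] / 2).astype(int)

# Three classifiers, each wrong on a different sample:
preds = np.array([
    [1, 1, 0, 1],   # wrong on sample 2
    [1, 0, 1, 1],   # wrong on sample 1
    [0, 1, 1, 1],   # wrong on sample 0
])
truth = np.array([1, 1, 1, 1])
fused = majority_fuse(preds)   # every individual is 75% accurate; the vote is 100%
```

    Here the fused system strictly beats the best single classifier, which is the kind of guarantee the paper formalizes.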

  5. Taxonomy grounded aggregation of classifiers with different label sets

    OpenAIRE

    SAHA, AMRITA; Indurthi, Sathish; Godbole, Shantanu; Rongali, Subendhu; Raykar, Vikas C.

    2015-01-01

    We describe the problem of aggregating the label predictions of diverse classifiers using a class taxonomy. Such a taxonomy may not have been available or referenced when the individual classifiers were designed and trained, yet mapping the output labels into the taxonomy is desirable to integrate the effort spent in training the constituent classifiers. A hierarchical taxonomy representing some domain knowledge may be different from, but partially mappable to, the label sets of the individua...

  6. Customer-Classified Algorithm Based on Fuzzy Clustering Analysis

    Institute of Scientific and Technical Information of China (English)

    郭蕴华; 祖巧红; 陈定方

    2004-01-01

    A customer-classified evaluation system is described with a customization-supporting tree of evaluation indexes, in which users can determine any evaluation index independently. Based on this system, a customer-classified algorithm based on fuzzy clustering analysis is proposed to implement customer-classified management. A numerical example is presented, which provides correct results, indicating that the algorithm can be used in the decision support system of CRM.

  7. The analysis of cross-classified categorical data

    CERN Document Server

    Fienberg, Stephen E

    2007-01-01

    A variety of biological and social science data come in the form of cross-classified tables of counts, commonly referred to as contingency tables. Until recent years the statistical and computational techniques available for the analysis of cross-classified data were quite limited. This book presents some of the recent work on the statistical analysis of cross-classified data using loglinear models, especially in the multidimensional situation.

  8. Construction of unsupervised sentiment classifier on idioms resources

    Institute of Scientific and Technical Information of China (English)

    谢松县; 王挺

    2014-01-01

    Sentiment analysis is the computational study of how opinions, attitudes, emotions, and perspectives are expressed in language, and has been an important task of natural language processing. Sentiment analysis is highly valuable for both research and practical applications. The focus was put on the difficulties in the construction of sentiment classifiers, which normally need tremendous labeled domain training data, and a novel unsupervised framework was proposed to make use of Chinese idiom resources to develop a general sentiment classifier. Furthermore, the domain adaptation of the general sentiment classifier was improved by taking the general classifier as the base of a self-training procedure to get a domain self-training sentiment classifier. To validate the effect of the unsupervised framework, several experiments were carried out on a publicly available dataset of Chinese online reviews. The experiments show that the proposed framework is effective and achieves encouraging results. Specifically, the general classifier outperforms two baselines (a naïve 50% baseline and a cross-domain classifier), and the bootstrapping self-training classifier approximates the upper-bound domain-specific classifier with a lowest accuracy of 81.5%, but its performance is more stable and the framework needs no labeled training dataset.
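    The self-training step used above can be sketched generically: each round, the current model pseudo-labels its most confident unlabeled samples and adds them to the training set. The toy numpy version below uses a nearest-centroid base learner as a stand-in for the sentiment classifier; all data and parameters are illustrative.

```python
import numpy as np

def centroid_fit(X, y):
    # Nearest-centroid base learner, a stand-in for the paper's classifier.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def centroid_predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    pred = np.array(classes)[d.argmin(axis=0)]
    margin = np.abs(d[0] - d[1])     # confidence of each prediction (binary case)
    return pred, margin

def self_train(X_lab, y_lab, X_unlab, rounds=5, k=2):
    """Each round, pseudo-label the k most confident unlabeled samples
    and move them into the training set."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        model = centroid_fit(X, y)
        pred, margin = centroid_predict(model, pool)
        take = np.argsort(margin)[-k:]           # most confident pseudo-labels
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, pred[take]])
        pool = np.delete(pool, take, axis=0)
    return centroid_fit(X, y)
```

    The general classifier in the paper plays the role of the initial model; the pseudo-labeled data then adapts it toward the target domain.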

  9. Facial expression recognition with facial parts based sparse representation classifier

    Science.gov (United States)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expression is a basic requirement in the development of next-generation human computer interaction systems. Research shows that the intrinsic facial features always hide in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with the constraint of a linear combination equation. Experimental results show that sparse representation is efficient for facial expression recognition and the sparse representation classifier obtains much higher recognition accuracies than other compared methods.
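    The core of a sparse representation classifier is to solve an l1-regularized reconstruction over a dictionary of training samples and assign the class whose atoms best explain the query. A minimal numpy sketch follows, using ISTA for the l1 problem; the tiny dictionary and the regularization parameter are illustrative assumptions, not the paper's facial data or solver.

```python
import numpy as np

def ista(A, b, lam=0.05, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(A, labels, b):
    """Pick the class whose dictionary atoms best reconstruct b from the sparse code."""
    x = ista(A, b)
    residual = lambda c: np.linalg.norm(b - A @ np.where(labels == c, x, 0.0))
    return min(np.unique(labels), key=residual)

# Toy dictionary: columns are training samples, two per class, unit-normalized.
A = np.array([[1.0, 0.9, 0.0, 0.00],
              [0.0, 0.1, 1.0, 0.95],
              [0.0, 0.3, 0.0, 0.20],
              [0.0, 0.0, 0.0, 0.10]])
A = A / np.linalg.norm(A, axis=0)
labels = np.array([0, 0, 1, 1])
b = np.array([1.0, 0.05, 0.0, 0.0])            # query close to class-0 atoms
```

    The class-wise residual test is what makes the sparse code discriminative: a query is explained compactly by atoms of its own class.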

  10. Unsupervised Supervised Learning II: Training Margin Based Classifiers without Labels

    CERN Document Server

    Donmez, Pinar; Lebanon, Guy

    2010-01-01

    Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing a margin-based risk function. Traditionally, these risk functions are computed based on a labeled dataset. We develop a novel technique for estimating such risks using only unlabeled data and p(y). We prove that the technique is consistent for high-dimensional linear classifiers and demonstrate it on synthetic and real-world data. In particular, we show how the estimate is used for evaluating classifiers in transfer learning, and for training classifiers with no labeled data whatsoever.

  11. Using Classifiers to Identify Binge Drinkers Based on Drinking Motives.

    Science.gov (United States)

    Crutzen, Rik; Giabbanelli, Philippe

    2013-08-21

    A representative sample of 2,844 Dutch adult drinkers completed a questionnaire on drinking motives and drinking behavior in January 2011. Results were classified using regressions, decision trees, and support vector machines (SVMs). Using SVMs, the mean absolute error was minimal, whereas performance on identifying binge drinkers was high. Moreover, when comparing the structure of the classifiers, there were differences in which drinking motives contribute to their performance. Thus, classifiers are worth using in research regarding (addictive) behaviors, because they contribute to explaining behavior and they can give different insights from more traditional data-analytical approaches. PMID:23964957

  12. Cooling system for electronic components

    Science.gov (United States)

    Anderl, William James; Colgan, Evan George; Gerken, James Dorance; Marroquin, Christopher Michael; Tian, Shurong

    2015-12-15

    Embodiments of the present invention provide for non-interruptive fluid cooling of an electronic enclosure. One or more electronic component packages may be removable from a circuit card having a fluid flow system. When installed, the electronic component packages are coincident to and in a thermal relationship with the fluid flow system. If a particular electronic component package becomes non-functional, it may be removed from the electronic enclosure without affecting either the fluid flow system or other neighboring electronic component packages.

  13. Cooling system for electronic components

    Energy Technology Data Exchange (ETDEWEB)

    Anderl, William James; Colgan, Evan George; Gerken, James Dorance; Marroquin, Christopher Michael; Tian, Shurong

    2016-05-17

    Embodiments of the present invention provide for non-interruptive fluid cooling of an electronic enclosure. One or more electronic component packages may be removable from a circuit card having a fluid flow system. When installed, the electronic component packages are coincident to and in a thermal relationship with the fluid flow system. If a particular electronic component package becomes non-functional, it may be removed from the electronic enclosure without affecting either the fluid flow system or other neighboring electronic component packages.

  14. Selection of effective EEG channels in brain computer interfaces based on inconsistencies of classifiers.

    Science.gov (United States)

    Yang, Huijuan; Guan, Cuntai; Ang, Kai Keng; Phua, Kok Soon; Wang, Chuanchu

    2014-01-01

    This paper proposes a novel method to select effective electroencephalography (EEG) channels for motor imagery tasks based on the inconsistencies among multiple classifiers. The inconsistency criterion for channel selection was designed based on the fluctuation of the classification accuracies among different classifiers when noisy channels were included. These noisy channels were then identified and removed until a required number of channels was selected or a predefined classification accuracy with reference to the baseline was obtained. Experiments conducted on a data set of 13 healthy subjects performing hand grasping and idle tasks revealed that the EEG channels from the motor area were most frequently selected. Furthermore, mean increases of 4.07%, 3.10% and 1.77% in the averaged accuracies, in comparison with four existing channel selection methods, were achieved for the non-feedback, feedback and calibration sessions, respectively, by selecting as few as seven channels. These results further validated the effectiveness of our proposed method.
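    The inconsistency idea can be illustrated with a toy sketch: for each channel, measure how much the accuracy fluctuates across classifiers when that channel is included, and rank channels so the most inconsistent (noisy) ones are removed first. The accuracy numbers below are hypothetical; the paper's actual criterion and removal schedule are more involved.

```python
import numpy as np

def rank_channels_by_inconsistency(acc):
    """acc[i, j]: accuracy of classifier i when channel j is included
    (all other channels held fixed). High variance across classifiers
    marks a noisy channel."""
    fluctuation = acc.std(axis=0)
    return np.argsort(fluctuation)     # most consistent channels first

# Hypothetical accuracies for 3 classifiers x 4 channels; channel 1 is noisy:
acc = np.array([
    [0.80, 0.60, 0.78, 0.79],
    [0.81, 0.75, 0.80, 0.80],
    [0.79, 0.50, 0.79, 0.78],
])
order = rank_channels_by_inconsistency(acc)   # channel 1 is ranked last (noisiest)
```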

  15. 21 CFR 1402.4 - Information classified by another agency.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Information classified by another agency. 1402.4 Section 1402.4 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY MANDATORY DECLASSIFICATION REVIEW § 1402.4 Information classified by another agency. When a request is received for information that...

  16. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen

    2015-01-01

    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  17. 40 CFR 152.175 - Pesticides classified for restricted use.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Pesticides classified for restricted...) PESTICIDE PROGRAMS PESTICIDE REGISTRATION AND CLASSIFICATION PROCEDURES Classification of Pesticides § 152.175 Pesticides classified for restricted use. The following uses of pesticide products containing...

  18. Quantum classifying spaces and universal quantum characteristic classes

    CERN Document Server

    Durdevic, M

    1996-01-01

    A construction of the noncommutative-geometric counterparts of classical classifying spaces is presented, for general compact matrix quantum structure groups. A quantum analogue of the classical concept of the classifying map is introduced and analyzed. Interrelations with the abstract algebraic theory of quantum characteristic classes are discussed. Various non-equivalent approaches to defining universal characteristic classes are outlined.

  19. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrosted) to obtain differences between sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  20. Algorithm for classifying multiple targets using acoustic signatures

    Science.gov (United States)

    Damarla, Thyagaraju; Pham, Tien; Lake, Douglas

    2004-08-01

    In this paper we discuss an algorithm for classification and identification of multiple targets using acoustic signatures. We use a Multi-Variate Gaussian (MVG) classifier for classifying individual targets based on the relative amplitudes of the extracted harmonic set of frequencies. The classifier is trained on high signal-to-noise ratio data for individual targets. In order to classify and further identify each target in a multi-target environment (e.g., a convoy), we first perform bearing tracking and data association. Once the bearings of the targets present are established, we next beamform in the direction of each individual target to spatially isolate it from the other targets (or interferers). Then, we further process and extract a harmonic feature set from each beamformed output. Finally, we apply the MVG classifier on each harmonic feature set for vehicle classification and identification. We present classification/identification results for convoys of three to five ground vehicles.
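    The Multi-Variate Gaussian classifier at the heart of the pipeline fits one Gaussian per class and assigns a sample to the class with the highest log-likelihood. A minimal numpy sketch follows; the synthetic blobs stand in for the harmonic-amplitude feature sets, which are not reproduced here.

```python
import numpy as np

class MVGClassifier:
    """Multi-variate Gaussian classifier: fit one Gaussian per class and
    assign a sample to the class with the highest log-likelihood."""

    def fit(self, X, y):
        self.params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
            self.params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        classes = sorted(self.params)
        scores = []
        for c in classes:
            mu, prec, logdet = self.params[c]
            d = X - mu
            # Log-likelihood up to a constant: -0.5 * (Mahalanobis + log|cov|)
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, prec, d) + logdet))
        return np.array(classes)[np.argmax(scores, axis=0)]
```

    In the paper this is applied per beamformed target track; here a two-blob toy problem suffices to show the mechanics.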

  1. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrosted) to obtain differences between sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity

  2. Construction of High-accuracy Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Hedieh Sajedi

    2014-04-01

    Full Text Available There have been several methods developed to construct ensembles. Some of these methods, such as Bagging and Boosting, are meta-learners, i.e. they can be applied to any base classifier. The combination of methods should be selected so that the classifiers cover each other's weaknesses. In an ensemble, the output of several classifiers is useful only when they disagree on some inputs. The degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in the performance of an ensemble is the accuracy of the base classifiers. It can be said that all procedures for constructing ensembles seek to achieve a balance between these two parameters, and successful methods reach a better balance. The diversity of the members of an ensemble is known as an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as the generators of base classifiers. Subsequently, the classifiers are partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble, and three different selection methods are proposed for this phase. In the first proposed selection method, a classifier is selected randomly from each cluster. The second method selects the most accurate classifier from each cluster, and the third selects the classifier nearest to the center of each cluster to construct the final ensemble. The results of experiments on well-known datasets demonstrate the strength of our proposed approach, especially when selecting the most accurate classifiers from clusters and employing the Bagging generator.

  3. Removal of failed crown and bridge.

    Science.gov (United States)

    Sharma, Ashu; Rahul, G R; Poduval, Soorya T; Shetty, Karunakar

    2012-07-01

    Crowns and bridges have a life span of many years, but they fail for a number of reasons. Over the years, many devices have been designed to remove crowns and bridges from abutment teeth. While the removal of temporary crowns and bridges is usually very straightforward, the removal of a definitive cast crown with unknown cement is more challenging. Removal is often by destructive means. There are a number of circumstances, however, in which conservative disassembly would aid the practitioner in completing restorative/endodontic procedures. There are different mechanisms available to remove a failed crown or bridge, but no information has been published on the classification of available systems for crown and bridge removal. So it is logical to classify these systems into different groups, which can help a clinician in choosing a particular type of system depending upon the clinical situation. The aim of this article is to provide a classification for various crown and bridge removal systems; describe how a number of systems work; and explain when and why they might be used. A PubMed search of the English literature was conducted up to January 2010 using the terms: crown and bridge removal, crown and bridge disassembly, crown and bridge failure. Additionally, the bibliographies of 3 previous reviews and their cross references, as well as articles published in journals such as the International Endodontic Journal and the Journal of Endodontics, were manually searched. Key words: Crown and bridge removal, Crown and bridge disassembly, Crown and bridge failure. PMID:24558549

  4. Intelligent Bayes Classifier (IBC) for ENT infection classification in hospital environment

    Directory of Open Access Journals (Sweden)

    Dutta Ritabrata

    2006-12-01

    Full Text Available Abstract Electronic-nose-based ENT bacteria identification in the hospital environment is a classical and challenging classification problem. In this paper an electronic nose (e-nose), comprising a hybrid array of 12 tin oxide sensors (SnO2) and 6 conducting polymer sensors, has been used to identify three species of bacteria, Escherichia coli (E. coli), Staphylococcus aureus (S. aureus), and Pseudomonas aeruginosa (P. aeruginosa), responsible for ear, nose and throat (ENT) infections, collected as swab samples from infected patients and kept in ISO agar solution in the hospital environment. In the next stage a sub-classification technique has been developed for the classification of two different strains of S. aureus, namely Methicillin-Resistant S. aureus (MRSA) and Methicillin-Susceptible S. aureus (MSSA). An innovative Intelligent Bayes Classifier (IBC) based on Bayes' theorem and the maximum probability rule was developed and investigated for these three main groups of ENT bacteria. Along with the IBC, three other supervised classifiers (namely, Multilayer Perceptron (MLP), Probabilistic Neural Network (PNN), and Radial Basis Function Network (RBFN)) were used to classify the three main bacteria classes. A comparative evaluation of the classifiers was conducted for this application. IBC outperformed MLP, PNN and RBFN. The best results suggest that we are able to identify and classify the three main bacteria classes with up to a 100% accuracy rate using IBC. We have also achieved 100% classification accuracy for the classification of MRSA and MSSA samples with IBC. We can conclude that this study proves that an IBC-based e-nose can provide a very strong and rapid solution for the identification of ENT infections in the hospital environment.
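    The "Bayes' theorem plus maximum probability rule" recipe can be illustrated with a Gaussian naive-Bayes sketch. The Gaussian density model and the toy sensor readings are assumptions for illustration; the paper's IBC is not necessarily Gaussian.

```python
import numpy as np

def fit_gnb(X, y):
    """Gaussian naive-Bayes fit: per-class feature means, variances and priors."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def predict_gnb(stats, x):
    """Maximum-probability rule: argmax_c P(c) * prod_j p(x_j | c)."""
    best, best_lp = None, -np.inf
    for c, (mu, var, prior) in stats.items():
        lp = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy "sensor response" data for two bacteria classes:
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.0],
              [5.0, 5.1], [4.9, 5.0], [5.1, 4.8]])
y = np.array([0, 0, 0, 1, 1, 1])
stats = fit_gnb(X, y)
```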

  5. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed

    2011-07-01

    Full Text Available Breast cancer detection and diagnosis is a critical and complex procedure that demands a high degree of accuracy. In computer-aided diagnostic systems, breast cancer detection is a two-stage procedure: first, malignant and benign mammograms are classified, while in the second stage the type of abnormality is detected. In this paper, we have developed a novel architecture to enhance the classification of malignant and benign mammograms using multi-classification of malignant mammograms into six abnormality classes. DWT (Discrete Wavelet Transform) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, the results generated by the various classifiers are ensembled. A genetic algorithm is used to find optimal weights, rather than assigning weights to the results of classifiers on the basis of heuristics. The mammograms declared malignant by the ensemble classifiers are divided into six classes. The ensemble classifiers are further used for multiclassification using the one-against-all technique. The output of all ensemble classifiers is combined by product, median and mean rules. It has been observed that the accuracy of classification of abnormalities is more than 97% in the case of the mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.

  6. Glycosylation site prediction using ensembles of Support Vector Machine classifiers

    Directory of Open Access Journals (Sweden)

    Silvescu Adrian

    2007-11-01

    Full Text Available Abstract Background Glycosylation is one of the most complex post-translational modifications (PTMs) of proteins in eukaryotic cells. Glycosylation plays an important role in biological processes ranging from protein folding and subcellular localization to ligand recognition and cell-cell interactions. Experimental identification of glycosylation sites is expensive and laborious. Hence, there is significant interest in the development of computational methods for reliable prediction of glycosylation sites from amino acid sequences. Results We explore machine learning methods for training classifiers to predict the amino acid residues that are likely to be glycosylated using information derived from the target amino acid residue and its sequence neighbors. We compare the performance of Support Vector Machine classifiers and ensembles of Support Vector Machine classifiers trained on a dataset of experimentally determined N-linked, O-linked, and C-linked glycosylation sites extracted from O-GlycBase version 6.00, a database of 242 proteins from several different species. The results of our experiments show that the ensembles of Support Vector Machine classifiers outperform single Support Vector Machine classifiers on the problem of predicting glycosylation sites in terms of a range of standard measures for comparing the performance of classifiers. The resulting methods have been implemented in EnsembleGly, a web server for glycosylation site prediction. Conclusion Ensembles of Support Vector Machine classifiers offer an accurate and reliable approach to automated identification of putative glycosylation sites in glycoprotein sequences.

  7. Representation of classifier distributions in terms of hypergeometric functions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper derives alternative analytical expressions for classifier product distributions in terms of the Gauss hypergeometric function, 2F1, by considering a feed distribution defined in terms of the Gates-Gaudin-Schumann function and an efficiency curve defined in terms of a logistic function. It is shown that classifier distributions under dispersed conditions of classification pivot at a common size and the distributions are difference similar. The paper also addresses an inverse problem of classifier distributions wherein the feed distribution and efficiency curve are identified from the measured product distributions without needing to know the solid flow split of particles to any of the product streams.
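    The two ingredients, a Gates-Gaudin-Schumann feed F(x) = (x/x_max)^m and a logistic efficiency curve, can be combined numerically to obtain the fine and coarse product distributions and the solid flow split. The parameter values below are illustrative assumptions, not taken from the paper, and the closed-form 2F1 expressions are replaced by simple quadrature.

```python
import numpy as np

def trapz(y, x):
    # Simple trapezoidal quadrature (avoids numpy version differences).
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

x_max, m = 100.0, 0.8        # Gates-Gaudin-Schumann feed parameters (assumed)
x50, k = 30.0, 0.2           # logistic efficiency-curve parameters (assumed)

x = np.linspace(0.5, x_max, 400)
feed = m * x ** (m - 1) / x_max ** m             # density of F(x) = (x/x_max)^m
eff = 1.0 / (1.0 + np.exp(-k * (x - x50)))       # fraction of size x sent to coarse

split = trapz(feed * eff, x)                     # solid flow split to coarse stream
coarse = feed * eff / trapz(feed * eff, x)       # normalized product densities
fine = feed * (1 - eff) / trapz(feed * (1 - eff), x)
```

    As expected, the fine product is shifted toward smaller sizes than the coarse product, and the split falls strictly between 0 and 1.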

  8. Classifying Regularized Sensor Covariance Matrices: An Alternative to CSP.

    Science.gov (United States)

    Roijendijk, Linsey; Gielen, Stan; Farquhar, Jason

    2016-08-01

    Common spatial patterns (CSP) is a commonly used technique for classifying imagined-movement-type brain-computer interface (BCI) datasets. It has been very successful, with many extensions and improvements on the basic technique. However, a drawback of CSP is that the signal processing pipeline contains two supervised learning stages: a first in which class-relevant spatial filters are learned and a second in which a classifier is used to classify the filtered variances. This may lead to potential overfitting issues, which are generally avoided by limiting CSP to only a few filters. PMID:26372428
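    The alternative the title points to, classifying regularized sensor covariance matrices directly, can be sketched minimally: shrink each trial's covariance toward a scaled identity, then assign it to the nearest class-mean covariance. The shrinkage form and Frobenius distance below are common choices assumed for illustration; the paper's actual classifier may differ.

```python
import numpy as np

def shrink_cov(X, lam=0.1):
    """Shrinkage-regularized covariance of one trial X (channels x samples):
    (1 - lam) * S + lam * (tr(S)/d) * I."""
    S = np.cov(X)
    d = S.shape[0]
    return (1 - lam) * S + lam * (np.trace(S) / d) * np.eye(d)

def fit_cov_means(trials, labels, lam=0.1):
    # Class prototypes: mean regularized covariance per class.
    covs = np.array([shrink_cov(X, lam) for X in trials])
    return {c: covs[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_cov(means, X, lam=0.1):
    # Nearest class-mean covariance under the Frobenius norm.
    C = shrink_cov(X, lam)
    return min(means, key=lambda c: np.linalg.norm(C - means[c], 'fro'))
```

    This skips the supervised spatial-filter stage entirely, which is the overfitting concern raised about CSP.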

  9. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    Directory of Open Access Journals (Sweden)

    Nowakowski Artur

    2015-10-01

    Full Text Available Boosting is a classification method which has been proven useful in non-satellite image processing but is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm in a new land cover classification scenario based on the utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively to data statistics.
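    AdaBoost.M1 over simple threshold classifiers, with thresholds searched from the data, can be sketched as follows. The 1-D toy data is illustrative; the paper applies the same scheme to spectral and contextual features.

```python
import numpy as np

def stump_predict(X, feat, thr, sign):
    # Threshold weak classifier: sign * (+1 if feature > thr else -1).
    return sign * np.where(X[:, feat] > thr, 1, -1)

def fit_adaboost(X, y, rounds=10):
    """AdaBoost.M1 over threshold (stump) weak classifiers; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # sample weights
    model = []
    for _ in range(rounds):
        best = None
        for feat in range(X.shape[1]):         # exhaustive search for best stump
            for thr in np.unique(X[:, feat]):
                for sign in (1, -1):
                    err = w[stump_predict(X, feat, thr, sign) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thr, sign)
        err, feat, thr, sign = best
        if err >= 0.5:                         # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = stump_predict(X, feat, thr, sign)
        w = w * np.exp(-alpha * y * pred)      # upweight misclassified samples
        w /= w.sum()
        model.append((alpha, feat, thr, sign))
    return model

def predict_adaboost(model, X):
    agg = sum(a * stump_predict(X, f, t, s) for a, f, t, s in model)
    return np.where(agg >= 0, 1, -1)

# Toy data: positive only inside the interval [2.5, 3.5] -- no single stump fits it.
X = np.array([[0.0], [1.0], [2.5], [3.0], [3.5], [5.0], [6.0]])
y = np.array([-1, -1, 1, 1, 1, -1, -1])
model = fit_adaboost(X, y, rounds=3)
```

    Three boosted stumps are enough here even though no single threshold separates the interval, which is exactly the strong-from-weak behavior the abstract describes.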

  10. Computer-aided diagnosis system for classifying benign and malignant thyroid nodules in multi-stained FNAB cytological images

    International Nuclear Information System (INIS)

    An automated computer-aided diagnosis system is developed to classify benign and malignant thyroid nodules using multi-stained fine needle aspiration biopsy (FNAB) cytological images. In the first phase, image segmentation is performed to remove the background staining information and retain the appropriate foreground cell objects in the cytological images using mathematical morphology and watershed transform segmentation methods. Subsequently, statistical features are extracted using two-level discrete wavelet transform (DWT) decomposition, gray level co-occurrence matrix (GLCM) and Gabor filter based methods. The classifiers k-nearest neighbor (k-NN), Elman neural network (ENN) and support vector machine (SVM) are tested for classifying benign and malignant thyroid nodules. The combination of watershed segmentation, GLCM features and the k-NN classifier yields the lowest diagnostic accuracy of 60%. The highest diagnostic accuracy of 93.33% is achieved by the ENN classifier trained with the statistical features extracted by the Gabor filter bank from the images segmented by the morphology and watershed transform segmentation methods. It is also observed that the SVM classifier achieves its highest diagnostic accuracy of 90% for DWT and Gabor filter based features along with the morphology and watershed transform segmentation methods. The experimental results suggest that the developed system with multi-stained thyroid FNAB images would be useful for identifying thyroid cancer irrespective of the staining protocol used.

  11. Study on the electron transfer activity of photosystem Ⅱ with the manganese cluster removed, using exogenous electron carriers

    Institute of Scientific and Technical Information of China (English)

    由万胜; 黄海丽; 康阳; 姚明东; 陈钧

    2015-01-01

    Photosystem Ⅱ (BBY) splits water at the manganese cluster (OEC) and transfers electrons through the thylakoid-membrane electron transport chain to the exogenous electron acceptor side. Whether photosystem Ⅱ with the manganese cluster removed (Tris-washed BBY) retains electron transfer capability is still a question worth studying. This article therefore introduces the exogenous electron donor 1,5-diphenylcarbazide (DPC), in place of the manganese cluster, to supply electrons to the acceptor side and measure the electron transfer activity of Tris-washed BBY. The results show that under illumination Tris-washed BBY can transfer electrons from DPC to the exogenous electron acceptor 2,6-dichlorophenolindophenol (DCPIP). The electron transfer activities of Tris-washed BBY and BBY were characterized by analyzing the amount of reduced DCPIP using UV-visible spectroscopy.

  12. 42 CFR 37.50 - Interpreting and classifying chest roentgenograms.

    Science.gov (United States)

    2010-10-01

    ... interpreted and classified in accordance with the ILO Classification system and recorded on a Roentgenographic... under the Act, shall have immediately available for reference a complete set of the ILO...

  13. A NON-PARAMETER BAYESIAN CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Liu Qingshan; Lu Hanqing; Ma Songde

    2003-01-01

    A non-parameter Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition, which can be regarded as a weighted Nearest Neighbor (NN) classifier in formation. The class conditional density is estimated by KDE, and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are respectively used to extract features, and the proposed method is compared with the Probabilistic Reasoning Models (PRM), Nearest Center (NC) and NN classifiers, which are widely used in face recognition systems. The experiments are performed on two benchmarks, and the results show that the KDE classifier outperforms the PRM, NC and NN classifiers.

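As a rough illustration of the core idea, class-conditional densities estimated by KDE plugged into a Bayes decision rule, consider the following one-dimensional sketch. The fixed bandwidth and the toy data are assumptions; the paper additionally estimates the bandwidth by EM and works in PCA/KPCA feature spaces.

```python
import math

def gaussian_kernel(u):
    # Standard normal density, used as the smoothing kernel.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, samples, bandwidth):
    # Kernel density estimate at x from 1-D training samples.
    return sum(gaussian_kernel((x - s) / bandwidth) for s in samples) \
        / (len(samples) * bandwidth)

def kde_bayes_classify(x, classes, bandwidth=1.0):
    # classes: dict mapping label -> list of 1-D training samples.
    # Posterior is proportional to the class prior (sample fraction)
    # times the KDE likelihood; pick the label with the highest score.
    total = sum(len(s) for s in classes.values())
    scores = {label: (len(samples) / total) * kde(x, samples, bandwidth)
              for label, samples in classes.items()}
    return max(scores, key=scores.get)

# Two well-separated 1-D classes.
training = {"A": [0.0, 0.2, -0.1, 0.1], "B": [5.0, 5.2, 4.9, 5.1]}
print(kde_bayes_classify(0.05, training))  # -> A
print(kde_bayes_classify(5.0, training))   # -> B
```

Because each training sample contributes a kernel weight that decays with distance, the rule behaves like the weighted nearest-neighbor classifier the abstract describes.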
  14. A semi-automated approach to building text summarisation classifiers

    Directory of Open Access Journals (Sweden)

    Matias Garcia-Constantino

    2012-12-01

    Full Text Available An investigation into the extraction of useful information from the free text element of questionnaires, using a semi-automated summarisation extraction technique, is described. The summarisation technique utilises the concept of classification but with the support of domain/human experts during classifier construction. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool, is presented and evaluated using real questionnaire data. The results of this evaluation are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

  15. NUMERICAL SIMULATION OF PARTICLE MOTION IN TURBO CLASSIFIER

    Institute of Scientific and Technical Information of China (English)

    Ning Xu; Guohua Li; Zhichu Huang

    2005-01-01

    Research on the flow field inside a turbo classifier is complicated though important. According to the stochastic trajectory model of particles in gas-solid two-phase flow, and adopting the PHOENICS code, numerical simulation is carried out on the flow field, including particle trajectory, in the inner cavity of a turbo classifier, using both straight and backward crooked elbow blades. Computation results show that when the backward crooked elbow blades are used, the mixed stream that passes through the two blades produces a vortex in the positive direction which counteracts the attached vortex in the opposite direction due to the high-speed turbo rotation, making the flow steadier, thus improving both the grade efficiency and precision of the turbo classifier. This research provides positive theoretical evidences for designing sub-micron particle classifiers with high efficiency and accuracy.

  16. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS - R code

    OpenAIRE

    Irawan, Dasapta Erwin; Gio, Prana Ugiana

    2016-01-01

    The following R code was used in this paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS" authors: Prihadi Sumintadireja1, Dasapta Erwin Irawan1, Yuano Rezky2, Prana Ugiana Gio3, Anggita Agustin1

  17. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS

    OpenAIRE

    Sumintadireja, Prihadi; Irawan, Dasapta Erwin; Rezky, Yuanno; Gio, Prana Ugiana; Agustin, Anggita

    2016-01-01

    This file is the dataset for the following paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS". Authors: Prihadi Sumintadireja1, Dasapta Erwin Irawan1, Yuano Rezky2, Prana Ugiana Gio3, Anggita Agustin1

  18. High speed intelligent classifier of tomatoes by colour, size and weight

    Energy Technology Data Exchange (ETDEWEB)

    Cement, J.; Novas, N.; Gazquez, J. A.; Manzano-Agugliaro, F.

    2012-11-01

    At present, most horticultural products are classified and marketed according to quality standards, which provide a common language for growers, packers, buyers and consumers. The standardisation of both product and packaging enables greater speed and efficiency in management and marketing. Of all the vegetables grown in greenhouses, tomatoes are predominant in both surface area and tons produced. This paper presents the development and evaluation of a low-investment classification system for tomatoes with two objectives: to put it at the service of producing farms and to classify for trading standards. An intelligent classifier of tomatoes by weight, diameter and colour has been developed. The system optimises the necessary data-processing algorithms for tomatoes, so that productivity is greatly increased while using less expensive, lower-performance electronics. The prototype achieves a very high classification speed, 12.5 classifications per second, using accessible and low-cost commercial equipment. It reduces manual sorting time fourfold and is not sensitive to the variety of tomato classified. This system facilitates the processes of standardisation and quality control, increases the competitiveness of tomato farms and impacts positively on profitability. The automatic classification system described in this work represents a contribution from the economic point of view, as it is profitable for a farm in the short term (less than six months), while existing systems can only be used in large trading centers. (Author) 36 refs.

  19. AUTO CLAIM FRAUD DETECTION USING MULTI CLASSIFIER SYSTEM

    Directory of Open Access Journals (Sweden)

    Luis Alexandre Rodrigues

    2014-06-01

    Full Text Available Through a cost matrix and a combination of classifiers, this work identifies the most economical model for detecting suspected cases of fraud in a dataset of automobile claims. The experiments performed in this work show that, by working more deeply with the sampled data in the training and test phases of each classifier, it is possible to obtain a more economical model than others presented in the literature.

  20. Mining housekeeping genes with a Naive Bayes classifier

    OpenAIRE

    Aitken Stuart; De Ferrari Luna

    2006-01-01

    Abstract Background Traditionally, housekeeping and tissue specific genes have been classified using direct assay of mRNA presence across different tissues, but these experiments are costly and the results not easy to compare and reproduce. Results In this work, a Naive Bayes classifier based only on physical and functional characteristics of genes already available in databases, like exon length and measures of chromatin compactness, has achieved a 97% success rate in classification of human...

  1. Mining housekeeping genes with a Naive Bayes classifier

    OpenAIRE

    Ferrari, Luna De; Aitken, Stuart

    2006-01-01

    BACKGROUND: Traditionally, housekeeping and tissue specific genes have been classified using direct assay of mRNA presence across different tissues, but these experiments are costly and the results not easy to compare and reproduce.RESULTS: In this work, a Naive Bayes classifier based only on physical and functional characteristics of genes already available in databases, like exon length and measures of chromatin compactness, has achieved a 97% success rate in classification of human houseke...

  2. Dealing with contaminated datasets: An approach to classifier training

    Science.gov (United States)

    Homenda, Wladyslaw; Jastrzebska, Agnieszka; Rybnik, Mariusz

    2016-06-01

    The paper presents a novel approach to classification reinforced with a rejection mechanism. The method is based on a two-tier set of classifiers: the first layer classifies elements, and the second layer separates native elements from foreign ones in each distinguished class. The key novelty presented here is the rejection mechanism training scheme, which follows the philosophy of "one-against-all-other-classes". The proposed method was tested in an empirical study of handwritten digit recognition.

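The two-tier structure can be caricatured in a few lines. In this sketch a nearest-centroid rule stands in for the paper's unspecified first-layer classifier, and a simple distance threshold stands in for the trained "one-against-all-other-classes" rejection layer; both are illustrative assumptions.

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def train_centroids(data):
    # data: dict label -> list of (x, y) samples; one centroid per class.
    cents = {}
    for label, pts in data.items():
        n = len(pts)
        cents[label] = (sum(p[0] for p in pts) / n,
                        sum(p[1] for p in pts) / n)
    return cents

def classify_with_rejection(p, centroids, radius):
    # Tier 1: assign the nearest-centroid class label.
    label = min(centroids, key=lambda c: dist(p, centroids[c]))
    # Tier 2: separate native from foreign elements -- a point farther
    # than `radius` from its winning centroid is rejected as foreign.
    return label if dist(p, centroids[label]) <= radius else "reject"

cents = train_centroids({"0": [(0, 0), (1, 0)], "1": [(10, 10), (11, 10)]})
print(classify_with_rejection((0.5, 0.1), cents, 2.0))  # -> 0
print(classify_with_rejection((5.0, 5.0), cents, 2.0))  # -> reject
```

The point of the second tier is exactly what the abstract emphasizes: without it, every input, however foreign, would be forced into one of the known classes.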
  3. Classifying pedestrian shopping behaviour according to implied heuristic choice rules

    OpenAIRE

    Shigeyuki Kurose; Aloys W J Borgers; Timmermans, Harry J. P.

    2001-01-01

    Our aim in this paper is to build and test a model which classifies and identifies pedestrian shopping behaviour in a shopping centre by using temporal and spatial choice heuristics. In particular, the temporal local-distance-minimising, total-distance-minimising, and global-distance-minimising heuristic choice rules and spatial nearest-destination-oriented, farthest-destination-oriented, and intermediate-destination-oriented choice rules are combined to classify and identify the stop sequenc...

  4. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time, which is a drawback. In this work, one pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the method handles these differences by defining two functions for smoothing parameter calculation, with thresholding applied to determine which function is used. One of these functions is defined for datasets having a wide range of values: it provides balanced smoothing parameters for these datasets through a logarithmic function and by shifting the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes whose standard deviation is smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One pass learning provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.

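The one-pass idea, deriving the smoothing parameter directly from each class's spread instead of optimizing it by gradient descent, can be sketched as below. The abstract does not give the exact functional forms, so the logarithmic form, the threshold value and the lower bound here are illustrative assumptions.

```python
import math
import statistics

def smoothing_parameter(class_samples, threshold=1.0):
    # One-pass choice of an RBF smoothing parameter from the spread of
    # one class, replacing iterative gradient-descent optimisation.
    sigma = statistics.pstdev(class_samples)
    if sigma < threshold:
        # Tightly clustered class: use its standard deviation directly,
        # floored to avoid a degenerate zero-width kernel.
        return max(sigma, 1e-6)
    # Wide-range class: compress with a logarithm so the parameter
    # stays balanced across differently scaled datasets.
    return 1.0 + math.log(sigma)

print(smoothing_parameter([1.0, 1.1, 1.2]))          # small spread
print(smoothing_parameter([0.0, 10.0, 20.0, 30.0]))  # wide range
```

Because this requires only one pass over the training data per class, the costly optimization loop disappears, which is the source of the speedup the abstract reports.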
  5. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin

    2010-02-01

    Full Text Available Abstract Background Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.

  6. LESS: a model-based classifier for sparse subspaces.

    Science.gov (United States)

    Veenman, Cor J; Tax, David M J

    2005-09-01

    In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.

  7. Low rank updated LS-SVM classifiers for fast variable selection.

    Science.gov (United States)

    Ojeda, Fabian; Suykens, Johan A K; De Moor, Bart

    2008-01-01

    Least squares support vector machine (LS-SVM) classifiers are a class of kernel methods whose solution follows from a set of linear equations. In this work we present low rank modifications to the LS-SVM classifiers that are useful for fast and efficient variable selection. The inclusion or removal of a candidate variable can be represented as a low rank modification to the kernel matrix (linear kernel) of the LS-SVM classifier. In this way, the LS-SVM solution can be updated rather than being recomputed, which improves the efficiency of the overall variable selection process. Relevant variables are selected according to a closed form of the leave-one-out (LOO) error estimator, which is obtained as a by-product of the low rank modifications. The proposed approach is applied to several benchmark data sets as well as two microarray data sets. When compared to other related algorithms used for variable selection, simulations applying our approach clearly show a lower computational complexity together with good stability on the generalization error.

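The abstract's key trick, updating the LS-SVM solution when a variable is added or removed rather than recomputing it, rests on rank-one modification of a matrix inverse. The following is a minimal sketch of the underlying Sherman-Morrison update in pure Python, not the paper's full LS-SVM machinery or its leave-one-out estimator.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def vecmat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def sherman_morrison(A_inv, u, v):
    # Inverse of (A + u v^T) obtained from A's known inverse in O(n^2)
    # time, versus O(n^3) for recomputing the inverse from scratch.
    Au = matvec(A_inv, u)
    vA = vecmat(v, A_inv)
    denom = 1.0 + sum(v[i] * Au[i] for i in range(len(v)))
    n = len(A_inv)
    return [[A_inv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# Adding u v^T = [[0,1],[0,0]] to the identity gives [[1,1],[0,1]],
# whose inverse is [[1,-1],[0,1]].
print(sherman_morrison([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0], [0.0, 1.0]))
# -> [[1.0, -1.0], [0.0, 1.0]]
```

For a linear-kernel LS-SVM, including or excluding one candidate feature changes the kernel matrix by exactly such a low-rank term, which is why the solution can be updated cheaply during variable selection.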
  8. A Lightweight Data Preprocessing Strategy with Fast Contradiction Analysis for Incremental Classifier Learning

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2015-01-01

    Full Text Available A prime objective in constructing data streaming mining models is to achieve good accuracy, fast learning, and robustness to noise. Although many techniques have been proposed in the past, efforts to improve the accuracy of classification models have been somewhat disparate. These techniques include, but are not limited to, feature selection, dimensionality reduction, and the removal of noise from training data. One limitation common to all of these techniques is the assumption that the full training dataset must be applied. Although this has been effective for traditional batch training, it may not be practical for incremental classifier learning, also known as data stream mining, where only a single pass of the data stream is seen at a time. Because data streams can amount to infinity and the so-called big data phenomenon, the data preprocessing time must be kept to a minimum. This paper introduces a new data preprocessing strategy suitable for the progressive purging of noisy data from the training dataset without the need to process the whole dataset at one time. This strategy is shown via a computer simulation to provide the significant benefit of allowing for the dynamic removal of bad records from the incremental classifier learning process.

  9. Classifying Emotion in News Sentences: When Machine Classification Meets Human Classification

    Directory of Open Access Journals (Sweden)

    Plaban Kumar Bhowmick

    2010-01-01

    Full Text Available Multiple emotions are often evoked in readers in response to text stimuli such as news articles. In this paper, we present a method for classifying news sentences into multiple emotion categories. The corpus consists of 1000 news sentences, and the emotion tags considered were anger, disgust, fear, happiness, sadness and surprise. We performed different experiments to compare machine classification with human classification of emotion. In both cases, it has been observed that combining the anger and disgust classes results in better classification, and that removing surprise, a highly ambiguous class in human classification, improves the performance. Words present in the sentences and the polarity of the subject, object and verb were used as features. The classifier performs better with the word and polarity feature combination than with a feature set consisting only of words. The best performance was achieved on the corpus where the anger and disgust classes are combined and the surprise class is removed. In this experiment, the average precision was computed to be 79.5% and the average class-wise micro F1 was found to be 59.52%.

  10. China's Electronic Information Product Energy Consumption Standard

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The electronic information industry of China is facing increasingly urgent ecological challenges. This year, China will study and advance an electronic information product energy consumption standard, and establish a key list of pollution controls and a classified framework system.

  11. Representative Vector Machines: A Unified Framework for Classical Classifiers.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu

    2016-08-01

    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theory or application motivations and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as the special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired in the framework of RVMs. For example, a robust pattern classification method called discriminant vector machine (DVM) is motivated from RVMs. Given a test example, DVM first finds its k -NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.

  12. Evaluating and classifying the readiness of technology specifications for national standardization.

    Science.gov (United States)

    Baker, Dixie B; Perlin, Jonathan B; Halamka, John

    2015-05-01

    The American Recovery and Reinvestment Act (ARRA) of 2009 clearly articulated the central role that health information technology (HIT) standards would play in improving healthcare quality, safety, and efficiency through the meaningful use of certified, standards-based electronic health record (EHR) technology. In 2012, the Office of the National Coordinator (ONC) asked the Nationwide Health Information Network (NwHIN) Power Team of the Health Information Technology Standards Committee (HITSC) to develop comprehensive, objective, and, to the extent practical, quantitative criteria for evaluating technical standards and implementation specifications and classifying their readiness for national adoption. The Power Team defined criteria, attributes, and metrics for evaluating and classifying technical standards and specifications as 'emerging,' 'pilot,' or 'ready for national standardization' based on their maturity and adoptability. The ONC and the HITSC are now using these metrics to assess the readiness of technical standards for national adoption. PMID:24872342

  13. Teaching with Crystal Structures: Helping Students Recognize and Classify the Smallest Repeating Particle in a Given Substance

    Science.gov (United States)

    Smithenry, Dennis W.

    2009-01-01

    Classifying a particle requires an understanding of the type of bonding that exists within and among the particles, which requires an understanding of atomic structure and electron configurations, which requires an understanding of the elements of periodic properties, and so on. Rather than getting tangled up in all of these concepts at the start…

  14. [Horticultural plant diseases multispectral classification using combined classified methods].

    Science.gov (United States)

    Feng, Jie; Li, Hong-Ning; Yang, Wei-Ping; Hou, De-Dong; Liao, Ning-Fang

    2010-02-01

    Research on multispectral data processing is receiving more and more attention with the development of multispectral techniques, data-capturing ability, and the application of multispectral techniques in agricultural practice. In the present paper, familiar diseases of the cultivated cucumber (Trichothecium roseum, Sphaerotheca fuliginea, Cladosporium cucumerinum, Corynespora cassiicola, Pseudoperonospora cubensis) are the research objects. Multispectral images of cucumber leaves in 14 visible-light channels, a near-infrared channel and a panchromatic channel were captured using a narrow-band multispectral imaging system under a standard observation and illumination environment, and 210 multispectral data samples, comprising the 16-band spectral reflectances of the different cucumber diseases, were obtained. The 210 samples were classified by distance, correlation and BP neural network classifiers to find an effective combination of classification methods for diagnosis. The results show that the combination of the distance and BP neural network classifiers performs better than either method alone, fully exploiting the advantages of each. The workflow for recognizing horticultural plant diseases using combined classification methods is presented. PMID:20384138

  15. What Does(n't) K-theory Classify?

    CERN Document Server

    Evslin, J

    2006-01-01

    We review various K-theory classification conjectures in string theory. Sen conjecture based proposals classify D-brane trajectories in backgrounds with no H flux, while Freed-Witten anomaly based proposals classify conserved RR charges and magnetic RR fluxes in topologically time-independent backgrounds. In exactly solvable CFTs a classification of well-defined boundary states implies that there are branes representing every twisted K-theory class. Some of these proposals fail to respect the self-duality of the RR fields in the democratic formulation of type II supergravity and none respect S-duality in type IIB string theory. We discuss two applications. The twisted K-theory classification has led to a conjecture for the topology of the T-dual of any configuration. In the Klebanov-Strassler geometry twisted K-theory classifies universality classes of baryonic vacua.

  16. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K;

    2011-01-01

    We develop a novel statistical approach for classifying generalists and specialists in two distinct habitats. Using a multinomial model based on estimated species relative abundance in two habitats, our method minimizes bias due to differences in sampling intensities between two habitat types ...: (1) generalist; (2) habitat A specialist; (3) habitat B specialist; and (4) too rare to classify with confidence. We illustrate our multinomial classification method using two contrasting data sets: (1) bird abundance in woodland and heath habitats in southeastern Australia and (2) tree abundance ... in second-growth (SG) and old-growth (OG) rain forests in the Caribbean lowlands of northeastern Costa Rica. We evaluate the multinomial model in detail for the tree data set. Our results for birds were highly concordant with a previous nonstatistical classification, but our method classified a higher

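The four-way decision structure described above can be caricatured as follows. The fixed two-thirds threshold and the minimum-count rule are stand-ins for the paper's multinomial model, which additionally corrects for unequal sampling intensities between habitats; treat the constants as illustrative assumptions.

```python
def classify_species(count_a, count_b, k=2.0 / 3.0, min_total=10):
    # Assign one of four groups from raw abundances in habitats A and B.
    total = count_a + count_b
    if total < min_total:
        # Too few observations to classify with any confidence.
        return "too rare to classify"
    p_a = count_a / total
    if p_a >= k:
        return "habitat A specialist"
    if p_a <= 1.0 - k:
        return "habitat B specialist"
    return "generalist"

print(classify_species(9, 1))  # -> habitat A specialist
print(classify_species(5, 5))  # -> generalist
print(classify_species(1, 2))  # -> too rare to classify
```

The "too rare" category is the part that distinguishes this scheme from naive threshold rules: withholding judgment on sparsely sampled species is what keeps the specialist/generalist labels statistically defensible.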
  17. A Film Classifier Based on Low-level Visual Features

    Directory of Open Access Journals (Sweden)

    Hui-Yu Huang

    2008-07-01

    Full Text Available We propose an approach to classifying films into genres using low-level and visual features. Our current domain of study is the movie preview. A movie preview often emphasizes the theme of a film and hence provides suitable information for the classification process. In our approach, we categorize films into three broad categories: action, drama, and thriller. Four computable video features (average shot length, color variance, motion content and lighting key) and visual features (slow and fast moving effects) are combined in our approach to provide the information needed to determine the movie category. The experimental results show that visual features are useful for film classification. Our approach can also be extended to other potential applications, including the browsing and retrieval of videos on the internet, video-on-demand, and video libraries.

  18. WORD SENSE DISAMBIGUATION BASED ON IMPROVED BAYESIAN CLASSIFIERS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Word Sense Disambiguation (WSD) is the task of deciding the sense of an ambiguous word in a particular context. Most current studies on WSD use only a few ambiguous words as test samples, which limits practical application. In this paper, we perform WSD on a large-scale real-world corpus using two unsupervised learning algorithms: a ±n-improved Bayesian model and a Dependency Grammar (DG)-improved Bayesian model. The ±n-improved classifiers reduce the context window size of ambiguous words with a close-distance feature extraction method and decrease the interference of useless features, thus clearly improving the accuracy, which reaches 83.18% in an open test. The DG-improved classifier can more effectively overcome the noise effect present in the Naive Bayesian classifier. Experimental results show that this approach performs well on Chinese WSD, and the open test achieved an accuracy of 86.27%.

  19. Iris Recognition Based on LBP and Combined LVQ Classifier

    CERN Document Server

    Shams, M Y; Nomir, O; El-Awady, R M; 10.5121/ijcsit.2011.3506

    2011-01-01

    Iris recognition is considered one of the best biometric methods for human identification and verification because of its unique features, which differ from one person to another, and its importance in the security field. This paper proposes an algorithm for iris recognition and classification using a system based on Local Binary Patterns (LBP) and histogram properties as statistical approaches for feature extraction, and a Combined Learning Vector Quantization (LVQ) Classifier as a neural network approach for classification, in order to build a hybrid model that depends on both features. The localization and segmentation techniques are presented using both Canny edge detection and the Hough circular transform in order to isolate the iris from the whole eye image and for noise detection. Feature vectors resulting from LBP are applied to a Combined LVQ classifier with different classes to determine the minimum acceptable performance, and the result is based on majority voting among several LVQ classifiers. Different iris da...

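The LBP feature extraction stage can be sketched compactly. This is the basic radius-1, 8-neighbour variant with an arbitrary bit ordering, offered as an illustrative assumption; the paper's exact sampling pattern and its LVQ classification stage are not reproduced here.

```python
def lbp_code(img, r, c):
    # 8-neighbour local binary pattern: threshold the neighbours at the
    # centre pixel's value and read the resulting bits as one byte.
    center = img[r][c]
    neigh = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
             img[r][c + 1], img[r + 1][c + 1], img[r + 1][c],
             img[r + 1][c - 1], img[r][c - 1]]
    return sum(1 << i for i, v in enumerate(neigh) if v >= center)

def lbp_histogram(img):
    # 256-bin histogram of LBP codes over all interior pixels; such
    # histograms serve as the statistical feature vectors for a classifier.
    h = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            h[lbp_code(img, r, c)] += 1
    return h

print(lbp_code([[5, 5, 5], [5, 5, 5], [5, 5, 5]], 1, 1))  # flat patch -> 255
print(lbp_code([[0, 0, 0], [0, 9, 0], [0, 0, 0]], 1, 1))  # bright spot -> 0
```

Because the codes depend only on the sign of local intensity differences, the resulting histogram is robust to monotonic illumination changes, which is part of why LBP works well on iris texture.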
  20. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge ... allows usage of such classifiers in large scale problems. We demonstrate its application for segmenting tibial articular cartilage in knee MRI scans, with the number of training voxels being more than 2 million. In the next phase of the study we apply the cascaded classifier to a similar but even more ... image, respectively, and this system is referred to as the triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for the cascaded classifier...

  1. A History of Classified Activities at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    2001-01-30

    The facilities that became Oak Ridge National Laboratory (ORNL) were created in 1943 during the United States' super-secret World War II project to construct an atomic bomb (the Manhattan Project). During World War II and for several years thereafter, essentially all ORNL activities were classified. Now, in 2000, essentially all ORNL activities are unclassified. The major purpose of this report is to provide a brief history of ORNL's major classified activities from 1943 until the present (September 2000). This report is expected to be useful to the ORNL Classification Officer and to ORNL's Authorized Derivative Classifiers and Authorized Derivative Declassifiers in their classification review of ORNL documents, especially those documents that date from the 1940s and 1950s.

  2. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

    Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as a part of preprocessing. For preprocessing, digitization, segmentation, normalization and thinning are done, assuming that the image has almost no noise. Further, an affine invariant moments based model is used for feature extraction, and finally Support Vector Machine (SVM) and Fuzzy classifiers are used for numeral classification. The comparison of the SVM and Fuzzy classifiers shows that SVM procured better results than the Fuzzy classifier.

  3. A non-parametric 2D deformable template classifier

    DEFF Research Database (Denmark)

    Schultz, Nette; Nielsen, Allan Aasbjerg; Conradsen, Knut;

    2005-01-01

    ... relaxation in a Bayesian scheme is used. In the Bayesian likelihood a class density function and its estimate hereof is introduced, which is designed to separate the feature space. The method is verified on data collected in Øresund, Scandinavia. The data come from four geographically different areas. Two areas, which are homogeneous with respect to bottom type, are used for training of the deformable template classifier, and the classifier is applied to two areas which are heterogeneous with respect to bottom type. The classification results are good, with a correct classification percentage above 94 percent for the bottom type classes, and show that the deformable template classifier can be used for interactive on-line sea floor segmentation of RoxAnn echo sounder data.

  4. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams’ plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  5. Examining the significance of fingerprint-based classifiers

    Directory of Open Access Journals (Sweden)

    Collins Jack R

    2008-12-01

    Full Text Available Abstract Background Experimental examinations of biofluids to measure concentrations of proteins or their fragments or metabolites are being explored as a means of early disease detection, distinguishing diseases with similar symptoms, and drug treatment efficacy. Many studies have produced classifiers with a high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to examine datasets designed to contain no information with classifiers used in many published studies. Results The classification accuracy of two fingerprint-based classifiers, a decision tree (DT) algorithm and a medoid classification algorithm (MCA), are examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or less, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. For even the largest dataset (300 Cases and 300 Controls), the MCA procedure finds several unique classifiers that have an average sensitivity and specificity above 88% using only six or seven features. Conclusion While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible and can produce good results from datasets that are specifically constructed to contain no biological information.
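
The paper's central point, that flexible classifiers can score well on information-free data, is easy to reproduce. The sketch below is an assumption-laden stand-in for the paper's decision trees: a single-split "stump" search over 300 pure-noise features already finds a high training accuracy on 30 Cases vs 30 Controls:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cases, n_controls, n_features = 30, 30, 300
X = rng.normal(size=(n_cases + n_controls, n_features))   # pure noise, no signal
y = np.array([1] * n_cases + [0] * n_controls)

def best_stump_accuracy(X, y):
    """Search every feature and every threshold for the best single split."""
    best = 0.0
    n = len(y)
    for j in range(X.shape[1]):
        ys = y[np.argsort(X[:, j])]
        left_pos = np.cumsum(ys)          # positives below each threshold
        total_pos = left_pos[-1]
        for i in range(n):
            # classify "left of threshold" as positive, or the reverse
            correct = left_pos[i] + (n - (i + 1)) - (total_pos - left_pos[i])
            acc = correct / n
            best = max(best, acc, 1 - acc)
    return best

acc = best_stump_accuracy(X, y)
print(f"training accuracy on random data: {acc:.2f}")
assert acc > 0.65   # far above the 0.5 chance level, despite zero signal
```

With enough features to search, apparent accuracy well above chance is expected even though the data contain no information, which is exactly the caution the abstract raises.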

  6. Online classifier adaptation for cost-sensitive learning

    OpenAIRE

    Zhang, Junlin; Garcia, Jose

    2015-01-01

    In this paper, we propose the problem of online cost-sensitive classifier adaptation and the first algorithm to solve it. We assume we have a base classifier for a cost-sensitive classification problem, but it is trained with respect to a cost setting different to the desired one. Moreover, we also have some training data samples streaming to the algorithm one by one. The problem is to adapt the given base classifier to the desired cost setting using the streaming training samples online. ...

  7. Learning Continuous Time Bayesian Network Classifiers Using MapReduce

    Directory of Open Access Journals (Sweden)

    Simone Villa

    2014-12-01

    Full Text Available Parameter and structural learning on continuous time Bayesian network classifiers are challenging tasks when you are dealing with big data. This paper describes an efficient scalable parallel algorithm for parameter and structural learning in the case of complete data using the MapReduce framework. Two popular instances of classifiers are analyzed, namely the continuous time naive Bayes and the continuous time tree augmented naive Bayes. Details of the proposed algorithm are presented using Hadoop, an open-source implementation of a distributed file system and the MapReduce framework for distributed data processing. Performance evaluation of the designed algorithm shows a robust parallel scaling.

  8. Classifying depth of anesthesia using EEG features, a comparison.

    Science.gov (United States)

    Esmaeili, Vahid; Shamsollahi, Mohammad Bagher; Arefian, Noor Mohammad; Assareh, Amin

    2007-01-01

    Various EEG features have been used in depth of anesthesia (DOA) studies. The objective of this study was to find the features, or combinations of them, that can best discriminate between different anesthesia states. Conducting a clinical study on 22 patients, we could define 4 distinct anesthetic states: awake, moderate, general anesthesia, and isoelectric. We examined features that have been used in earlier studies using a single-channel EEG signal processing method. The maximum accuracy (99.02%) was achieved using approximate entropy as the feature. Some other features could well discriminate a particular state of anesthesia. We could completely classify the patterns by means of 3 features and a Bayesian classifier.
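
Approximate entropy, the best-performing feature in this study, follows Pincus' standard definition ApEn = Φ(m) − Φ(m+1); the sketch below is a generic implementation, not the authors' code:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal (Pincus' definition)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()      # a common tolerance choice
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])            # embedded vectors
        # Chebyshev distance between every pair of embedded vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)                                 # match fractions
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # predictable signal
noisy = rng.normal(size=400)                        # irregular signal
print(approximate_entropy(regular) < approximate_entropy(noisy))  # True
```

Lower ApEn means more regularity, which is why it tracks deepening anesthesia as the EEG becomes more predictable.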

  9. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    Full Text Available The ever increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during single scanning. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories, which are both supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
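
The sliding-window-with-forgetting idea behind the DS-Gamma classifier can be illustrated with a much simpler stand-in: a 1-nearest-neighbour classifier over a bounded window, evaluated test-then-train on a stream whose concept flips halfway. This is a hypothetical sketch, not the Gamma classifier itself:

```python
from collections import deque

class SlidingWindowNN:
    """1-nearest-neighbour stream classifier over a fixed-size sliding window.

    The bounded window is the forgetting mechanism: old examples fall out,
    so the model can track concept drift in the stream.
    """
    def __init__(self, window_size=100):
        self.window = deque(maxlen=window_size)

    def predict(self, x):
        if not self.window:
            return None
        nearest = min(self.window,
                      key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], x)))
        return nearest[1]

    def learn(self, x, y):
        self.window.append((x, y))

clf = SlidingWindowNN(window_size=50)
correct = total = 0
# synthetic stream whose labelling rule flips at t = 200 (concept drift)
for t in range(400):
    x = (t % 10, (t * 7) % 10)
    y = int(x[0] > 5) if t < 200 else int(x[0] <= 5)
    p = clf.predict(x)          # test-then-train protocol
    if p is not None:
        total += 1
        correct += (p == y)
    clf.learn(x, y)
print(f"prequential accuracy: {correct / total:.2f}")
```

Accuracy dips only while stale pre-drift examples remain in the window, then recovers once they are forgotten.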

  10. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  11. Support vector machines classifiers of physical activities in preschoolers

    Science.gov (United States)

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children's physical activity data acquired from an accelerometer. In this study, 69 children aged 3-5 years old were asked to participate in a s...

  12. Building an automated SOAP classifier for emergency department reports.

    Science.gov (United States)

    Mowery, Danielle; Wiebe, Janyce; Visweswaran, Shyam; Harkema, Henk; Chapman, Wendy W

    2012-02-01

    Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step to evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance with F(1) scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks.
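
A minimal illustration of the task above: a keyword-rule SOAP sentence tagger (the rules and keywords here are invented for illustration, not the paper's features) plus the F1 metric used to report performance:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# toy keyword-based SOAP sentence tagger (illustrative rules only)
RULES = {
    "subjective": {"complains", "reports", "denies"},
    "objective": {"bp", "temp", "exam"},
    "assessment": {"likely", "consistent", "diagnosis"},
    "plan": {"discharge", "prescribe", "follow-up"},
}

def tag_sentence(sentence):
    """Assign the SOAP class whose keyword set overlaps the sentence most."""
    words = set(sentence.lower().split())
    scores = {label: len(words & kws) for label, kws in RULES.items()}
    return max(scores, key=scores.get)

print(tag_sentence("Patient complains of chest pain"))   # subjective
print(f"{f1_score(90, 5, 7):.3f}")
```

The paper's actual classifiers use richer linguistic features, but the evaluation shape (per-class F1 over sentence-level labels) is the same.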

  13. Recognition of Characters by Adaptive Combination of Classifiers

    Institute of Scientific and Technical Information of China (English)

    WANG Fei; LI Zai-ming

    2004-01-01

    In this paper, the visual feature space based on the long horizontals, the long verticals, and the radicals is given. An adaptive combination of classifiers, whose coefficients vary with the input pattern, is also proposed. Experiments show that the approach is promising for character recognition in video sequences.

  14. Diagnostic value of perfusion MRI in classifying stroke

    International Nuclear Information System (INIS)

    Our study was designed to determine whether supplementary information obtained with perfusion MRI can enhance the accuracy of stroke classification. We used delayed perfusion, as represented by the time-to-peak map on perfusion MRI, to classify strokes in 39 patients. Strokes were classified as hemodynamic if delayed perfusion extended to the whole territory of the occluded arterial trunk; as embolic if delayed perfusion was absent or restricted to infarcts; as arteriosclerotic if infarcts were small, multiple, and located mainly in the basal ganglia; or as unclassified if the pathophysiology was unclear. We compared these findings with vascular lesions on cerebral angiography, neurological signs, infarction on MRI, ischemia on xenon-enhanced CT (Xe/CT), and collateral pathway development. Delayed perfusion clearly indicated the area of arterial occlusion. Strokes were classified as hemodynamic in 13 patients, embolic in 14 patients, arteriosclerotic in 6 patients, and unclassified in 6 patients. Hemodynamic infarcts were seen only in deep white-matter areas such as the centrum semiovale or corona radiata, whereas embolic infarcts were in the cortex, cortex and subjacent white matter, and lenticulo-striatum. Embolic and arteriosclerotic infarcts occurred even in hemodynamically compromised hemispheres. Our findings indicate that perfusion MRI, in association with a detailed analysis of T2-weighted MRI of cerebral infarcts in the axial and coronal planes, can accurately classify stroke as hemodynamic, embolic or arteriosclerotic. (author)

  15. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Science.gov (United States)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
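
The abstract stresses that polynomial classifiers need no iterative training. One common reading of that claim is a polynomial feature expansion fitted by closed-form least squares; the sketch below assumes that formulation (degree 2, one-hot targets) rather than reproducing the paper's exact model:

```python
import numpy as np

def poly_features(X):
    """Degree-2 polynomial expansion: [1, x_i, x_i * x_j]."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_polynomial_classifier(X, labels, n_classes):
    """One least-squares problem per class: no iterative training needed."""
    P = poly_features(X)
    T = np.eye(n_classes)[labels]               # one-hot targets
    W, *_ = np.linalg.lstsq(P, T, rcond=None)   # closed-form solution
    return W

def predict(W, X):
    return np.argmax(poly_features(X) @ W, axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # a nonlinear (circular) boundary
W = fit_polynomial_classifier(X, y, n_classes=2)
acc = (predict(W, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because the expansion contains the squared terms, the circular boundary becomes linearly separable in feature space, and training is a single linear solve, which is what makes the approach scale with the number of classes.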

  17. Gene-expression Classifier in Papillary Thyroid Carcinoma

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    BACKGROUND: No reliable biomarker for metastatic potential in the risk stratification of papillary thyroid carcinoma exists. We aimed to develop a gene-expression classifier for metastatic potential. MATERIALS AND METHODS: Genome-wide expression analyses were used. Development cohort: freshly...

  18. Packet Payload Inspection Classifier in the Network Flow Level

    Directory of Open Access Journals (Sweden)

    N.Kannaiya Raja

    2012-06-01

    Full Text Available Networks today have highly congested channels and dynamically created topologies with high risk. In this setting we need a flow classifier to find the packet movement in the network. In this paper we develop and evaluate a classifier for TCP/UDP/FTP/ICMP based on payload information, port numbers, and the number of flags in the packet, for high packet flow in the network. The primary motivation of this paper is that all the valuable protocols are used legally to identify the end user by using payload packet inspection, and an evaluation hypothesis-testing approach is also used. The effective use of a tamper-resistant flow classifier has been shown in one network context domain and developed in different Berkeley and Cambridge domains; the classification accuracy was easily found through packet inspection by using different flags in the packets. While supervised classifier training specific to the new domain results in much better classification accuracy, we also formed a new approach to determine malicious packets, find a packet flow classifier, and send correct packets to the destination address.

  20. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    Science.gov (United States)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper is related to attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is done and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are verified by performing empirical analysis. For the empirical analysis, a random forest is generated using each of the five selected split measures, chosen one at a time, i.e. random forest using information gain, random forest using gain ratio, etc. Next, based on this theoretical and empirical analysis, a new approach of a hybrid decision tree model for the random forest classifier is proposed. In this model, each individual decision tree in the Random Forest is generated using a different split measure. This model is augmented by weighted voting based on the strength of the individual trees. The new approach has shown a notable increase in the accuracy of random forest.
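
Weighted voting by individual tree strength can be sketched as follows. Weighting each vote by the tree's log-odds of being correct is one standard choice (the paper's exact weighting may differ); it lets one strong tree outvote several near-random ones, which plain majority voting cannot do:

```python
import numpy as np

def weighted_vote(tree_predictions, tree_accuracies, n_classes):
    """Combine per-tree class predictions, weighting each vote by the
    tree's log-odds of being correct rather than counting votes equally."""
    weights = np.log(np.array(tree_accuracies) / (1 - np.array(tree_accuracies)))
    scores = np.zeros(n_classes)
    for pred, w in zip(tree_predictions, weights):
        scores[pred] += w
    return int(np.argmax(scores))

preds = [0, 0, 1]            # two weak trees vs one strong tree
accs = [0.55, 0.52, 0.90]    # assumed per-tree strengths (e.g. OOB accuracy)
print(weighted_vote(preds, accs, n_classes=2))   # 1: the strong tree prevails
```

Here plain majority voting would return 0; the strength weighting flips the decision to the reliable tree's prediction.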

  1. Localizing genes to cerebellar layers by classifying ISH images.

    Directory of Open Access Journals (Sweden)

    Lior Kirsch

    Full Text Available Gene expression controls how the brain develops and functions. Understanding control processes in the brain is particularly hard since they involve numerous types of neurons and glia, and very little is known about which genes are expressed in which cells and brain layers. Here we describe an approach to detect genes whose expression is primarily localized to a specific brain layer and apply it to the mouse cerebellum. We learn typical spatial patterns of expression from a few markers that are known to be localized to specific layers, and use these patterns to predict localization for new genes. We analyze images of in-situ hybridization (ISH) experiments, which we represent using histograms of local binary patterns (LBP), and train image classifiers and gene classifiers for four layers of the cerebellum: the Purkinje, granular, molecular and white matter layer. On held-out data, the layer classifiers achieve accuracy above 94% (AUC) by representing each image at multiple scales and by combining multiple image scores into a single gene-level decision. When applied to the full mouse genome, the classifiers predict specific layer localization for hundreds of new genes in the Purkinje and granular layers. Many genes localized to the Purkinje layer are likely to be expressed in astrocytes, and many others are involved in lipid metabolism, possibly due to the unusual size of Purkinje cells.

  2. Subtractive fuzzy classifier based driver distraction levels classification using EEG.

    Science.gov (United States)

    Wali, Mousa Kadhim; Murugappan, Murugappan; Ahmad, Badlishah

    2013-09-01

    [Purpose] In earlier studies of driver distraction, researchers classified distraction into two levels (not distracted, and distracted). This study classified four levels of distraction (neutral, low, medium, high). [Subjects and Methods] Fifty Asian subjects (n=50, 43 males, 7 females), age range 20-35 years, who were free from any disease, participated in this study. Wireless EEG signals were recorded by 14 electrodes during four types of distraction stimuli (Global Position Systems (GPS), music player, short message service (SMS), and mental tasks). We derived the amplitude spectrum of three different frequency bands of EEG: theta, alpha, and beta. Then, based on a fusion of discrete wavelet packet transforms and the fast Fourier transform, we extracted two features (power spectral density, spectral centroid frequency) of different wavelets (db4, db8, sym8, and coif5). Mean ± SD was calculated and analysis of variance (ANOVA) was performed. A fuzzy inference system classifier was applied to the different wavelets using the two extracted features. [Results] The results indicate that the two features of sym8 possess highly significant discrimination across the four levels of distraction, and the best average accuracy achieved by the subtractive fuzzy classifier was 79.21% using the power spectral density feature extracted using the sym8 wavelet. [Conclusion] These findings suggest that EEG signals can be used to monitor distraction level intensity in order to alert drivers to high levels of distraction.
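
The two features extracted per wavelet band, power spectral density and spectral centroid frequency, can be computed from a plain FFT periodogram as a rough sketch (the paper fuses wavelet packet transforms first; that step is omitted here):

```python
import numpy as np

def spectral_features(signal, fs):
    """Periodogram power spectral density and the spectral centroid frequency,
    i.e. the power-weighted mean frequency of the signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    centroid = (freqs * psd).sum() / psd.sum()
    return psd, centroid

fs = 128                                  # a typical EEG sampling rate
t = np.arange(0, 2, 1 / fs)
alpha_like = np.sin(2 * np.pi * 10 * t)   # a pure 10 Hz (alpha-band) component
_, centroid = spectral_features(alpha_like, fs)
print(round(centroid, 1))   # ≈ 10.0
```

For a pure 10 Hz tone the centroid sits at 10 Hz; for real EEG it summarises where band power is concentrated, which is what the classifier consumes.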

  3. 18 CFR 367.18 - Criteria for classifying leases.

    Science.gov (United States)

    2010-04-01

    ... classification of the lease under the criteria in paragraph (a) of this section had the changed terms been in... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... ACT General Instructions § 367.18 Criteria for classifying leases. (a) If, at its inception, a...

  4. Classifying aquatic macrophytes as indicators of eutrophication in European lakes

    NARCIS (Netherlands)

    Penning, W.E.; Mjelde, M.; Dudley, B.; Hellsten, S.; Hanganu, J.; Kolada, A.; van den Berg, Marcel S.; Poikane, S.; Phillips, G.; Willby, N.; Ecke, F.

    2008-01-01

    Aquatic macrophytes are one of the biological quality elements in the Water Framework Directive (WFD) for which status assessments must be defined. We tested two methods to classify macrophyte species and their response to eutrophication pressure: one based on percentiles of occurrence along a phosp

  5. Scoring and Classifying Examinees Using Measurement Decision Theory

    Science.gov (United States)

    Rudner, Lawrence M.

    2009-01-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
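
The MDT model's core step, turning per-item conditional probabilities into a mastery-state posterior under local independence, can be sketched as follows (the probabilities below are illustrative, not from the paper):

```python
import numpy as np

def classify_examinee(responses, p_correct, priors):
    """Posterior P(state | responses) via Bayes' rule under local independence:
    prior times the product over items of P(response_i | state), normalised."""
    p_correct = np.asarray(p_correct, dtype=float)   # shape (n_states, n_items)
    responses = np.asarray(responses)                # 1 = correct, 0 = incorrect
    likelihood = np.prod(np.where(responses == 1, p_correct, 1 - p_correct), axis=1)
    posterior = priors * likelihood
    return posterior / posterior.sum()

# two states (non-master, master) and three items; values are illustrative
p_correct = [[0.3, 0.2, 0.4],    # non-master: low chance of answering correctly
             [0.8, 0.9, 0.7]]    # master: high chance
priors = np.array([0.5, 0.5])
post = classify_examinee([1, 1, 0], p_correct, priors)
print(post.argmax())   # 1: classified as a master
```

The examinee is assigned to the state with the highest posterior; sequential testing variants simply update this posterior item by item.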

  6. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality as state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  7. Comparison of removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid and citric acid in primary teeth: A scanning electron microscopic study

    OpenAIRE

    Hegde, Rahul J.; Kavita Bapna

    2016-01-01

    Background: Root canal irrigants are considered important for their tissue-dissolving property, elimination of microorganisms, and removal of the smear layer. The present study aimed to compare the removal of the endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid (EGTA) and citric acid solutions, with saline as a control, in primary anterior teeth. Materials and Methods: Thirty primary anterior teeth were chosen for the study. The teeth were distribute...

  8. Laser Hair Removal

    Science.gov (United States)

    AFTER: Two laser hair removal treatments were performed. Procedure Overview: With just the right type of laser or Intense Pulsed Light (IPL) technology, suitable hairs ...

  9. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports.

    Science.gov (United States)

    López Pineda, Arturo; Ye, Ye; Visweswaran, Shyam; Cooper, Gregory F; Wagner, Michael M; Tsui, Fuchiang Rich

    2015-12-01

    Influenza is a yearly recurrent disease that has the potential to become a pandemic. An effective biosurveillance system is required for early detection of the disease. In our previous studies, we have shown that electronic Emergency Department (ED) free-text reports can be of value to improve influenza detection in real time. This paper studies seven machine learning (ML) classifiers for influenza detection, compares their diagnostic capabilities against an expert-built influenza Bayesian classifier, and evaluates different ways of handling missing clinical information from the free-text reports. We identified 31,268 ED reports from 4 hospitals between 2008 and 2011 to form two different datasets: training (468 cases, 29,004 controls) and test (176 cases and 1620 controls). We employed Topaz, a natural language processing (NLP) tool, to extract influenza-related findings and to encode them into one of three values: Acute, Non-acute, and Missing. Results show that all ML classifiers had areas under the ROC curve (AUC) ranging from 0.88 to 0.93, and performed significantly better than the expert-built Bayesian model. Explicitly encoding missing clinical information with a value of Missing (not missing at random) consistently improved performance for 3 out of 4 ML classifiers compared with the configuration that did not assign a value of Missing (missing completely at random). The case/control ratios did not affect the classification performance given the large number of training cases. Our study demonstrates that ED reports, in conjunction with the use of ML and NLP and the handling of missing value information, have great potential for the detection of infectious diseases.
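
The paper's missing-value handling, keeping "Missing" as an explicit third value rather than imputing it away, amounts to an encoding choice like this hypothetical one:

```python
def encode_finding(value):
    """One-hot encode a clinical finding as Acute / Non-acute / Missing.
    Keeping 'Missing' as its own category preserves the information that the
    report said nothing about the finding, instead of imputing a value."""
    categories = ["Acute", "Non-acute", "Missing"]
    if value not in categories:
        raise ValueError(f"unknown finding value: {value!r}")
    return [int(value == c) for c in categories]

print(encode_finding("Missing"))   # [0, 0, 1]
print(encode_finding("Acute"))     # [1, 0, 0]
```

Under a not-missing-at-random assumption, the absence of a finding is itself informative, which is why the explicit encoding helped most of the classifiers in the study.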

  10. PROGRAMMING OFFICE REMOVALS

    CERN Multimedia

    Groupe ST-HM

    2000-01-01

    The Removals Service recommends that you plan your removals well in advance, taking into account the fact that the Transport and Handling Group’s main priority remains the dismantling of LEP and the installation of the LHC. Requests can be made via: http://st.web.cern.ch/st/hm/removal/DEMEE.HTM Thank you for your cooperation.

  11. The electronic couplings in electron transfer and excitation energy transfer.

    Science.gov (United States)

    Hsu, Chao-Ping

    2009-04-21

    The transport of charge via electrons and the transport of excitation energy via excitons are two processes of fundamental importance in diverse areas of research. Characterization of electron transfer (ET) and excitation energy transfer (EET) rates is essential for a full understanding of, for instance, biological systems (such as respiration and photosynthesis) and opto-electronic devices (which interconvert electric and light energy). In this Account, we examine one of the parameters, the electronic coupling factor, for which reliable values are critical in determining transfer rates. Although ET and EET are different processes, many strategies for calculating the couplings share common themes. We emphasize the similarities in basic assumptions between the computational methods for the ET and EET couplings, examine the differences, and summarize the properties, advantages, and limits of the different computational methods. The electronic coupling factor is an off-diagonal Hamiltonian matrix element between the initial and final diabatic states in the transport processes. ET coupling is essentially the interaction of the two molecular orbitals (MOs) where the electron occupancy is changed. Singlet excitation energy transfer (SEET), however, contains a Förster dipole-dipole coupling as its most important constituent. Triplet excitation energy transfer (TEET) involves an exchange of two electrons of different spin and energy; thus, it is like an overlap interaction of two pairs of MOs. Strategies for calculating ET and EET couplings can be classified as (1) energy-gap-based approaches, (2) direct calculation of the off-diagonal matrix elements, or (3) use of an additional operator to describe the extent of charge or excitation localization and to calculate the coupling value. Some of the difficulties in calculating the couplings were recently resolved. Methods were developed to remove the nondynamical correlation problem from the highly precise coupled cluster

  12. Boosting 2-Thresholded Weak Classifiers over Scattered Rectangle Features for Object Detection

    Directory of Open Access Journals (Sweden)

    Weize Zhang

    2009-12-01

    Full Text Available In this paper, we extend Viola and Jones’ detection framework in two aspects. Firstly, by removing the restriction of the geometry adjacency rule over Haar-like features, we get a richer representation called the scattered rectangle feature, which explores many more orientations than horizontal, vertical, and diagonal, as well as misaligned, detached, and non-rectangular shape information that is unreachable to Haar-like features. Secondly, we strengthen the discriminating power of the weak classifiers by expanding them into 2-thresholded ones, which guarantees a better classification with smaller error, by the simple motivation that the bound on the accuracy of the final hypothesis improves when any of the weak hypotheses is improved. An optimal linear online algorithm is also proposed to determine the two thresholds. Comparison experiments on the MIT+CMU upright face test set under an objective detection criterion show that the extended method outperforms the original one.
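
A 2-thresholded weak learner of the kind described above can be sketched as an exhaustive search over threshold pairs on a single feature. This is a simplified stand-in (the paper's optimal linear online algorithm is not reproduced); `fit_two_threshold_stump` and the toy data are invented for illustration:

```python
import numpy as np

def fit_two_threshold_stump(feature, labels, weights):
    """Pick two thresholds t1 <= t2 on one feature so that predicting
    `polarity` inside [t1, t2] and -polarity outside minimizes the
    weighted error. Brute-force sketch of a 2-thresholded weak classifier."""
    candidates = np.unique(feature)
    best = (None, None, np.inf, 1)          # (t1, t2, error, polarity)
    for i, t1 in enumerate(candidates):
        for t2 in candidates[i:]:
            inside = (feature >= t1) & (feature <= t2)
            for polarity in (1, -1):
                pred = np.where(inside, polarity, -polarity)
                err = weights[pred != labels].sum()
                if err < best[2]:
                    best = (t1, t2, err, polarity)
    return best

# toy data: positives cluster in the middle of the feature range,
# which a single-threshold stump cannot separate but two thresholds can
x = np.array([0.1, 0.2, 0.45, 0.5, 0.55, 0.8, 0.9])
y = np.array([-1, -1, 1, 1, 1, -1, -1])
w = np.full(7, 1 / 7)                        # uniform boosting weights
t1, t2, err, pol = fit_two_threshold_stump(x, y, w)
```

In an AdaBoost-style loop, `weights` would be the boosting distribution and the returned weighted error would determine the weak classifier's vote.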

  13. Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition

    Energy Technology Data Exchange (ETDEWEB)

    Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)

    2009-11-30

    This paper proposes a method for cardiac arrhythmias recognition using the nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of P-wave, QRS-complexes, and T-wave. Iterated function system (IFS) uses the nonlinear interpolation in the map and uses similarity maps to construct various data sequences including the fractal patterns of supraventricular ectopic beat, bundle branch ectopic beat, and ventricular ectopic beat. Grey relational analysis (GRA) is proposed to recognize normal heartbeat and cardiac arrhythmias. The nonlinear interpolation terms produce family functions with fractal dimension (FD), the so-called nonlinear interpolation function (NIF), and make the fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid methods demonstrate greater efficiency and higher accuracy in recognizing ECG signals.
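
The grey relational analysis step mentioned above can be sketched in a few lines. This is the textbook GRA formulation with distinguishing coefficient rho = 0.5, not the paper's exact ECG pipeline; the reference and candidate sequences are invented and assumed already normalized:

```python
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence w.r.t. a reference.
    Higher grade = candidate pattern is closer to the reference pattern."""
    reference = np.asarray(reference, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    delta = np.abs(candidates - reference)            # pointwise deviations
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                         # grade = mean coefficient

# the candidate closest to the reference pattern gets the highest grade
ref = [0.2, 0.5, 0.9, 0.4]
cands = [[0.2, 0.5, 0.8, 0.4],    # near-identical to the reference
         [0.9, 0.1, 0.2, 0.7]]    # dissimilar
grades = grey_relational_grade(ref, cands)
```

A recognizer would hold one reference fractal pattern per beat class and assign a new beat to the class with the largest grade.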

  14. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized forecast method of unascertained classification, an unascertained measurement classifying model (UMC) to predict mining-induced goaf collapse was established. The discriminating factors of the model are the overburden layer type, overburden layer thickness, complexity of the geologic structure, inclination angle of the coal bed, volume rate of the cavity region, vertical goaf depth from the surface, and spatial superposition layers of the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification grade of each sample to be forecast was determined by the UM distance between the synthesis index of the sample and the index of each classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, seven samples to be forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  15. Using Syntactic-Based Kernels for Classifying Temporal Relations

    Institute of Scientific and Technical Information of China (English)

    Seyed Abolghasem Mirroshandel; Gholamreza Ghassem-Sani; Mahdy Khayyamian

    2011-01-01

    Temporal relation classification is one of the demanding contemporary tasks of natural language processing. This task can be used in various applications such as question answering, summarization, and language-specific information retrieval. In this paper, we propose an improved algorithm for classifying temporal relations, between events or between events and time, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method aims at exploiting some useful automatically generated syntactic features to improve the accuracy of classification. Accordingly, a number of novel kernel functions are introduced and evaluated. Our evaluations clearly demonstrate that adding syntactic features results in a considerable improvement over the state-of-the-art method of classifying temporal relations.
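
Custom kernels such as the syntactic ones described above are typically plugged into an SVM as a precomputed Gram matrix. The sketch below (assuming scikit-learn) uses a simple feature-overlap kernel on invented binary feature vectors as a stand-in for a tree kernel:

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

def overlap_kernel(A, B):
    # count of shared active features between every pair of vectors;
    # a real syntactic kernel would compare parse-tree fragments instead
    return A @ B.T

# toy corpus: all 16 binary vectors over 4 features; the label depends
# only on the first two features, so the task is linearly separable
X = np.array(list(product([0.0, 1.0], repeat=4)))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

gram = overlap_kernel(X, X)                       # (n_train, n_train)
clf = SVC(kernel="precomputed").fit(gram, y)
train_acc = clf.score(overlap_kernel(X, X), y)    # kernel vs. training set
```

At prediction time the Gram matrix is computed between the new examples and the training examples, with shape `(n_test, n_train)`.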

  16. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier the detection of abnormalities, the better the improvement in survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD) systems help the radiologists in taking reliable decisions. The proposed CAD system extracts wavelet features and spectral features for the better classification of mammograms. The Support Vector Machines classifier is used to analyze 206 mammogram images from the MIAS database pertaining to the severity of abnormality, i.e., benign and malignant. The proposed system gives 93.14% accuracy for discrimination between normal-malignant samples, 87.25% accuracy for normal-benign samples, and 89.22% accuracy for benign-malignant samples. The study reveals that features extracted in a hybrid transform domain with an SVM classifier prove to be a promising tool for the analysis of mammograms.

  17. The fuzzy gene filter: A classifier performance assessment

    CERN Document Server

    Perez, Meir

    2011-01-01

    The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed to rank genes in order of differential expression, based on expression data generated in a microarray experiment. This paper examines the effectiveness of the FGF for feature selection using various classification architectures. The FGF is compared to three of the most common gene ranking algorithms: t-test, Wilcoxon test and ROC curve analysis. Four classification schemes are used to compare the performance of the FGF vis-a-vis the standard approaches: K Nearest Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC) and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross Validation scheme is used to identify the optimal number of top-ranking genes, as well as the optimal classifier parameters. Two microarray data sets are used for the comparison: a prostate cancer data set and a lymphoma data set.
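
One of the baseline rankers the FGF is compared against, the two-sample t-test, can be sketched with SciPy; the expression matrices below are synthetic stand-ins for microarray data:

```python
import numpy as np
from scipy import stats

def rank_genes_by_ttest(expr_a, expr_b):
    """Rank genes by two-sample t-test p-value (smallest first).
    expr_a, expr_b are (samples x genes) arrays for the two conditions."""
    t, p = stats.ttest_ind(expr_a, expr_b, axis=0)
    return np.argsort(p)            # most differentially expressed first

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(20, 5))   # condition A, 5 "genes"
b = rng.normal(0.0, 1.0, size=(20, 5))   # condition B
b[:, 2] += 3.0                           # gene 2 is truly differential
order = rank_genes_by_ttest(a, b)
```

In the paper's setup, the top-k genes from such a ranking would then feed a KNN/SVM/NBC/ANN classifier inside the nested cross-validation loop.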

  18. Efficient iris recognition via ICA feature and SVM classifier

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Xu Luping

    2007-01-01

    To improve the flexibility and reliability of an iris recognition algorithm while maintaining the recognition success rate, an iris recognition approach combining SVM with an ICA feature extraction model is presented. SVM is a classifier that has demonstrated high generalization capability in object recognition problems, and ICA is a feature extraction technique that can be considered a generalization of principal component analysis. In this paper, ICA is used to generate a set of subsequences of feature vectors for iris feature extraction. Each subsequence is then classified using support vector machine sequence kernels. Experiments on the CASIA iris database indicate that the combination of SVM and ICA can improve the flexibility and reliability of iris recognition while maintaining the recognition success rate.
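
The ICA-features-plus-SVM pattern can be sketched with scikit-learn. Since the CASIA iris database is not freely bundled, the digits dataset stands in here, and the component count is an arbitrary illustrative choice, not the paper's setting:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# ICA extracts statistically independent components as features,
# then an RBF-kernel SVM classifies in the reduced space
X, y = load_digits(return_X_y=True)
clf = make_pipeline(
    FastICA(n_components=20, random_state=0, max_iter=1000),
    SVC(kernel="rbf", gamma="scale"),
)
clf.fit(X[:1000], y[:1000])          # fit ICA and SVM on the training split
acc = clf.score(X[1000:], y[1000:])  # evaluate on held-out samples
```

Because both steps live in one pipeline, the ICA unmixing matrix is estimated only on training data, avoiding leakage into the test split.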

  19. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    Science.gov (United States)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-12-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding the user's intentions, it is necessary to improve the feature extraction and classification techniques. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. A system based on an ensemble can theoretically achieve better classification results than its individual members, depending on the algorithm used to generate the individual classifiers and the procedure for combining their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
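
A bagging ensemble of LDA classifiers, one of the classic schemes discussed above, takes a few lines in scikit-learn. The synthetic data and ensemble size are illustrative, not the authors' EEG setup:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# stand-in feature matrix; a BCI would use features extracted from EEG epochs
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

# bagging: each LDA member is trained on a bootstrap resample,
# and predictions are combined by majority vote
ensemble = BaggingClassifier(LinearDiscriminantAnalysis(),
                             n_estimators=15, random_state=0)
mean_acc = cross_val_score(ensemble, X, y, cv=5).mean()
```

The base estimator is passed positionally because its keyword name changed between scikit-learn versions (`base_estimator` before 1.2, `estimator` after).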

  20. Dendritic spine detection using curvilinear structure detector and LDA classifier.

    Science.gov (United States)

    Zhang, Yong; Zhou, Xiaobo; Witt, Rochelle M; Sabatini, Bernardo L; Adjeroh, Donald; Wong, Stephen T C

    2007-06-01

    Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying the biochemical pathways by examining the morphological and statistical changes of the dendritic spines at the intracellular level. In this paper, a novel approach is presented for automated detection of dendritic spines in neuron images. The dendritic spines are recognized as small objects of variable shape attached to or detached from multiple dendritic backbones in the 2D projection of the image stack along the optical direction. We extend the curvilinear structure detector to extract the boundaries as well as the centerlines for the dendritic backbones and spines. We further build a classifier using Linear Discriminant Analysis (LDA) to classify the attached spines into valid and invalid types to improve the accuracy of the spine detection. We evaluate the proposed approach by comparing with the manual results in terms of backbone length, spine number, spine length, and spine density.

  1. Feasibility study for banking loan using association rule mining classifier

    Directory of Open Access Journals (Sweden)

    Agus Sasmito Aribowo

    2015-03-01

    Full Text Available The problem of bad loans in a koperasi (credit cooperative) can be reduced if the koperasi can detect whether a member will pay off the loan or default. The method used to identify the characteristic patterns of prospective borrowers in this study is called the Association Rule Mining Classifier. The credit patterns of members are converted into knowledge and used to classify other borrowers. The classification process separates borrowers into two groups: a good-credit group and a bad-credit group. The research used prototyping to implement the design as an application, built with a programming language and development tools. The association rule mining process uses the Weighted Itemset Tidset (WIT-tree) method. The results show that the method can predict the credit of prospective customers. The training data set comprised 120 customers whose credit history was already known. The test data comprised 61 customers who applied for credit. The results concluded that 42 customers will pay off their loans and 19 clients will default.
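
The classify-by-association-rules idea can be sketched with a naive frequent-itemset miner. This toy stands in for the paper's WIT-tree mining, and the member attributes and support threshold are invented:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=2):
    """Toy class-association-rule miner: count attribute subsets (size 1-2)
    per class label and keep those meeting the minimum support."""
    counts = Counter()
    for attrs, label in transactions:
        for r in (1, 2):
            for subset in combinations(sorted(attrs), r):
                counts[(subset, label)] += 1
    return {k: v for k, v in counts.items() if v >= min_support}

def classify(rules, attrs):
    """Vote for the class whose matching rules carry the most support."""
    votes = Counter()
    for (subset, label), support in rules.items():
        if set(subset) <= set(attrs):
            votes[label] += support
    return votes.most_common(1)[0][0] if votes else None

# invented credit histories: (member attributes, outcome)
history = [({"salaried", "low_debt"}, "good"),
           ({"salaried", "homeowner"}, "good"),
           ({"unemployed", "high_debt"}, "bad"),
           ({"unemployed", "low_debt"}, "bad")]
rules = mine_rules(history)
prediction = classify(rules, {"salaried", "low_debt"})
```

A real system would mine much longer itemsets with confidence thresholds; the voting step is the part that turns mined rules into a classifier.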

  2. The three-dimensional origin of the classifying algebra

    OpenAIRE

    Fuchs, Jurgen; Schweigert, Christoph; Stigner, Carl

    2009-01-01

    It is known that reflection coefficients for bulk fields of a rational conformal field theory in the presence of an elementary boundary condition can be obtained as representation matrices of irreducible representations of the classifying algebra, a semisimple commutative associative complex algebra. We show how this algebra arises naturally from the three-dimensional geometry of factorization of correlators of bulk fields on the disk. This allows us to derive explicit expressions for the str...

  3. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    OpenAIRE

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-01-01

    This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student, argumentative essays. The paragraphs selected were initial, middle, and final paragraphs and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion ...

  4. Application of dispersion analysis for determining classifying separation size

    OpenAIRE

    Golomeova, Mirjana; Golomeov, Blagoj; Krstev, Boris; Zendelska, Afrodita; Krstev, Aleksandar

    2009-01-01

    The paper presents the procedure of mathematically modelling the cut point of copper ore classification by a laboratory hydrocyclone. The application of dispersion analysis and planning with a Latin square makes possible a significant reduction in the number of tests. Tests were carried out with a D-100 mm hydrocyclone. The variable parameters are as follows: content of solids in the pulp, underflow diameter, overflow diameter and inlet pressure. The cut point is determined by the partition curve. The obtained mathemat...

  5. Mathematical Modeling and Analysis of Classified Marketing of Agricultural Products

    Institute of Scientific and Technical Information of China (English)

    Fengying; WANG

    2014-01-01

    Classified marketing of agricultural products was analyzed using the logistic regression model. This method can take full advantage of the information in an agricultural product database to find the factors influencing how well agricultural products sell, and to make a quantitative analysis accordingly. Using this model, it is also possible to predict sales of agricultural products and to provide a reference for mapping out individualized sales strategies for popularizing agricultural products.
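
A minimal logistic-regression sketch of this idea, with invented factors (a price index and a freshness score) standing in for the database attributes described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# invented training records: [price index, freshness score] -> best-selling?
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.3, 0.2],
              [0.2, 0.4], [0.4, 0.1], [0.85, 0.75], [0.25, 0.3]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])          # 1 = best-selling

model = LogisticRegression().fit(X, y)
# predicted probability that a new product with these factors sells well
prob_good = model.predict_proba([[0.8, 0.8]])[0, 1]
```

The fitted coefficients (`model.coef_`) quantify each factor's influence, which is the "find factors influencing sales" use the abstract describes.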

  6. Classifying and identifying servers for biomedical information retrieval.

    OpenAIRE

    Patrick, T. B.; Springer, G K

    1994-01-01

    Useful retrieval of biomedical information from network information sources requires methods for organized access to those information sources. This access must be organized in terms of the information content of information sources and in terms of the discovery of the network location of those information sources. We have developed an approach to providing organized access to information sources based on a scheme of hierarchical classifiers and identifiers of the servers providing access to ...

  7. Higher operations in string topology of classifying spaces

    OpenAIRE

    Lahtinen, Anssi

    2015-01-01

    Examples of non-trivial higher string topology operations have been regrettably rare in the literature. In this paper, working in the context of string topology of classifying spaces, we provide explicit calculations of a wealth of non-trivial higher string topology operations associated to a number of different Lie groups. As an application of these calculations, we obtain an abundance of interesting homology classes in the twisted homology groups of automorphism groups of free groups, the o...

  8. Management Education: Classifying Business Curricula and Conceptualizing Transfers and Bridges

    OpenAIRE

    Davar Rezania; Mike Henry

    2010-01-01

    Traditionally, higher academic education has favoured acquisition of individualized conceptual knowledge over context-independent procedural knowledge. Applied degrees, on the other hand, favour procedural knowledge. We present a conceptual model for classifying a business curriculum. This classification can inform discussion around difficulties associated with issues such as assessment of prior learning, as well as transfers and bridges from applied degrees to baccalaureate degrees in busine...

  9. Controlled self-organisation using learning classifier systems

    OpenAIRE

    Richter, Urban Maximilian

    2009-01-01

    As the complexity of technical systems increases, breakdowns occur more often. The mission of organic computing is to tame these challenges by providing degrees of freedom for self-organised behaviour. To achieve these goals, new methods have to be developed. The proposed observer/controller architecture constitutes one way to achieve controlled self-organisation. To improve its design, multi-agent scenarios are investigated. In particular, learning using learning classifier systems is addressed.

  10. Learning Classifier Systems: A Complete Introduction, Review, and Roadmap

    OpenAIRE

    Urbanowicz, Ryan J; Jason H Moore

    2009-01-01

    If complexity is your problem, learning classifier systems (LCSs) may offer a solution. These rule-based, multifaceted, machine learning algorithms originated and have evolved in the cradle of evolutionary biology and artificial intelligence. The LCS concept has inspired a multitude of implementations adapted to manage the different problem domains to which it has been applied (e.g., autonomous robotics, classification, knowledge discovery, and modeling). One field that is taking increasing n...

  11. Learning Rates for ${l}^{1}$ -Regularized Kernel Classifiers

    OpenAIRE

    Hongzhi Tong; Di-Rong Chen; Fenghong Yang

    2013-01-01

    We consider a family of classification algorithms generated from a regularization kernel scheme associated with ${l}^{1}$ -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derive...

  12. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    OpenAIRE

    Sang-Hoon Hong; Hyun-Ok Kim; Shimon Wdowinski; Emanuelle Feliciano

    2015-01-01

    The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We ...

  13. Learning Classifiers from Synthetic Data Using a Multichannel Autoencoder

    OpenAIRE

    Zhang, Xi; Fu, Yanwei; Zang, Andi; Sigal, Leonid; Agam, Gady

    2015-01-01

    We propose a method for using synthetic data to help learning classifiers. Synthetic data, even if generated based on real data, normally exhibits a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and to jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by using MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed a...

  14. Classifying and Visualizing Motion Capture Sequences using Deep Neural Networks

    OpenAIRE

    Cho, Kyunghyun; Chen, Xi

    2013-01-01

    Gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently, most systems only classify datasets with a couple of dozen different actions. Moreover, feature extraction from the data is often computationally complex. In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative...

  15. Classifying Floating Potential Measurement Unit Data Products as Science Data

    Science.gov (United States)

    Coffey, Victoria; Minow, Joseph

    2015-01-01

    We are Co-Investigators for the Floating Potential Measurement Unit (FPMU) on the International Space Station (ISS) and members of the FPMU operations and data analysis team. We are providing this memo for the purpose of classifying raw and processed FPMU data products and ancillary data as NASA science data with unrestricted, public availability in order to best support science uses of the data.

  16. Image Replica Detection based on Binary Support Vector Classifier

    OpenAIRE

    Maret, Y.; Dufaux, F.; Ebrahimi, T.

    2005-01-01

    In this paper, we present a system for image replica detection. More specifically, the technique is based on the extraction of 162 features corresponding to texture, color and gray-level characteristics. These features are then weighted and statistically normalized. To improve training and performance, the feature space dimensionality is reduced. Lastly, a decision function is generated to classify the test image as a replica or non-replica of a given reference image. Experimental results sho...

  17. Classifying racist texts using a support vector machine

    OpenAIRE

    Greevy, Edel; Alan F. SMEATON

    2004-01-01

    In this poster we present an overview of the techniques we used to develop and evaluate a text categorisation system to automatically classify racist texts. Detecting racism is difficult because the presence of indicator words is insufficient to indicate racist texts, unlike some other text classification tasks. Support Vector Machines (SVM) are used to automatically categorise web pages based on whether or not they are racist. Different interpretations of what constitutes a term are taken, a...

  18. VIRTUAL MINING MODEL FOR CLASSIFYING TEXT USING UNSUPERVISED LEARNING

    OpenAIRE

    S. Koteeswaran; E. Kannan; P. Visu

    2014-01-01

    In the real world, data mining is emerging in various areas; some of its most outstanding performances are seen in research fields such as big data, multimedia mining, and text mining. Each researcher demonstrates their contribution with tremendous improvements in their proposal by means of mathematical representation. Solutions to each problem are classified into mathematical and implementation models. The mathematical model relates to the straightforward rules and formulas that are re...

  19. An alternative educational indicator for classifying Secondary Schools in Portugal

    OpenAIRE

    Gonçalves, A. Manuela; Costa, Marco; De Oliveira, Mário,

    2015-01-01

    This paper carries out a study in the area of statistics for classifying Portuguese secondary schools (both mainland and islands: “Azores” and “Madeira”), taking into account the results achieved by their students in both national examinations and internal assessment. The main aim is to identify groups of schools with different performance levels by considering the sub-national public and private education systems as well as their respective geographic lo...

  20. Face Recognition Combining Eigen Features with a Parzen Classifier

    Institute of Scientific and Technical Information of China (English)

    SUN Xin; LIU Bing; LIU Ben-yong

    2005-01-01

    A face recognition scheme is proposed, wherein a face image is preprocessed by pixel averaging and energy normalizing to reduce data dimension and brightness variation effects, followed by a Fourier transform to estimate the spectrum of the preprocessed image. Principal component analysis is conducted on the spectra of a face image to obtain eigen features. Combining eigen features with a Parzen classifier, experiments are conducted on the ORL face database.

  1. A new method for classifying different phenotypes of kidney transplantation.

    Science.gov (United States)

    Zhu, Dong; Liu, Zexian; Pan, Zhicheng; Qian, Mengjia; Wang, Linyan; Zhu, Tongyu; Xue, Yu; Wu, Duojiao

    2016-08-01

    For end-stage renal diseases, kidney transplantation is the most efficient treatment. However, the unexpected rejection caused by inflammation usually leads to allograft failure. Thus, a systems-level characterization of inflammation factors can provide potentially diagnostic biomarkers for predicting renal allograft rejection. Serum from kidney transplant patients with different immune statuses was collected, and the patients were classified as having stable renal function (ST), impaired renal function with negative biopsy pathology (UNST), acute rejection (AR), or chronic rejection (CR). The expression profiles of 40 inflammatory proteins were measured by quantitative protein microarrays and reduced to a lower dimensional space by the partial least squares (PLS) model. The determined principal components (PCs) were then trained by the support vector machines (SVMs) algorithm for classifying different phenotypes of kidney transplantation. There were 30, 16, and 13 inflammation proteins that showed statistically significant differences between CR and ST, CR and AR, and CR and UNST patients, respectively. Further analysis revealed a protein-protein interaction (PPI) network among 33 inflammatory proteins and proposed a potential role of intercellular adhesion molecule-1 (ICAM-1) in CR. Based on the network analysis and protein expression information, two PCs were determined as the major contributors and trained by the PLS-SVMs method, with a promising accuracy of 77.5% for classification of chronic rejection after kidney transplantation. For convenience, we also developed software packages of GPS-CKT (Classification phenotype of Kidney Transplantation Predictor) for classifying phenotypes. By confirming a strong correlation between inflammation and kidney transplantation, our results suggested that the network biomarker, but not single factors, can potentially classify different phenotypes in kidney transplantation. PMID:27278387

  2. College students classified with ADHD and the foreign language requirement.

    Science.gov (United States)

    Sparks, Richard L; Javorsky, James; Philips, Lois

    2004-01-01

    The conventional assumption of most disability service providers is that students classified as having attention-deficit/hyperactivity disorder (ADHD) will experience difficulties in foreign language (FL) courses. However, the evidence in support of this assumption is anecdotal. In this empirical investigation, the demographic profiles, overall academic performance, college entrance scores, and FL classroom performance of 68 college students classified as having ADHD were examined. All students had graduated from the same university over a 5-year period. The findings showed that all 68 students had completed the university's FL requirement by passing FL courses. The students' college entrance scores were similar to the middle 50% of freshmen at this university, and their graduating grade point average was similar to the typical graduating senior at the university. The students had participated in both lower (100) and upper (200, 300, 400) level FL courses and had achieved mostly average and above-average grades (A, B, C) in these courses. One student had majored and eight students had minored in an FL. Two thirds of the students passed all of their FL courses without the use of instructional accommodations. In this study, the classification of ADHD did not appear to interfere with participants' performance in FL courses. The findings suggest that students classified as having ADHD should enroll in and fulfill the FL requirement by passing FL courses. PMID:15493238

  3. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Science.gov (United States)

    Shin, Seung Yeon; Lee, Soochahn; Yun, Il Dong; Jung, Ho Yub; Heo, Yong Seok; Kim, Sun Mi; Lee, Kyoung Mu

    2015-01-01

    In this paper, we present a novel cascaded classification framework for automatic detection of individual and clusters of microcalcifications (μC). Our framework comprises three classification stages: i) a random forest (RF) classifier for simple features capturing the second order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and free-response receiver operating characteristic (FROC) curve for detection of clustered μCs. PMID:26630496
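
The cascade structure above (a cheap gate that discards easy negatives, then a stronger classifier on the survivors) can be sketched generically. Logistic regression stands in for the DRBM stage, which scikit-learn does not provide; all data and the 0.2 gate threshold are invented:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# stage 1: a cheap, shallow random-forest gate
stage1 = RandomForestClassifier(n_estimators=30, max_depth=2,
                                random_state=0).fit(X, y)
keep_train = stage1.predict_proba(X)[:, 1] > 0.2
# stage 2: a stronger classifier trained only on stage-1 survivors
stage2 = LogisticRegression().fit(X[keep_train], y[keep_train])

def cascade_predict(Xnew):
    out = np.zeros(len(Xnew), dtype=int)        # gate rejection -> negative
    keep = stage1.predict_proba(Xnew)[:, 1] > 0.2
    if keep.any():
        out[keep] = stage2.predict(Xnew[keep])
    return out

train_acc = (cascade_predict(X) == y).mean()
```

The gate threshold trades recall for speed: lowering it passes more candidates to the expensive stage, which is the design knob the paper tunes per stage.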

  4. Analysis of classifiers performance for classification of potential microcalcification

    Science.gov (United States)

    M. N., Arun K.; Sheshadri, H. S.

    2013-07-01

    Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis. Mammography is a screening tool used for early detection of breast cancer. About 10-30% of cases are missed during routine checks, as it is difficult for radiologists to make accurate analyses due to the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer. It has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms. Histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny; they vary in size, shape, and distribution, and may be closely connected to surrounding tissues. There is a major challenge in using traditional classifiers for the classification of individual potential MCs, as the processing of mammograms at the appropriate stage generates data sets with an unequal amount of information for the two classes (i.e., MC and not-MC). Most of the existing state-of-the-art classification approaches are developed by assuming the underlying training set is evenly distributed. However, they are faced with a severe bias problem when the training set is highly imbalanced in distribution. This paper addresses this issue by using classifiers which handle imbalanced data sets. In this paper, we also compare the performance of classifiers which are used in the classification of potential MCs.
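
One standard remedy for the imbalance problem described here is to reweight classes inversely to their frequencies. A sketch using scikit-learn's `class_weight="balanced"` on synthetic imbalanced data (the 95:5 ratio and Gaussian features are invented, loosely echoing the MC vs. not-MC imbalance):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n_neg, n_pos = 950, 50                       # 95:5 imbalance
Xn = rng.normal(0.0, 1.0, size=(n_neg, 2))   # majority (not-MC-like) class
Xp = rng.normal(1.5, 1.0, size=(n_pos, 2))   # minority (MC-like) class
X = np.vstack([Xn, Xp])
y = np.array([0] * n_neg + [1] * n_pos)

plain = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

# recall on the minority class is what imbalance hurts most
recall_plain = recall_score(y, plain.predict(X))
recall_balanced = recall_score(y, balanced.predict(X))
```

The reweighting shifts the decision boundary toward the majority class, raising minority-class recall at the cost of more false positives, the usual trade-off in MC detection.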

  5. Image classifiers for the cell transformation assay: a progress report

    Science.gov (United States)

    Urani, Chiara; Crosta, Giovanni F.; Procaccianti, Claudio; Melchioretto, Pasquale; Stefanini, Federico M.

    2010-02-01

    The Cell Transformation Assay (CTA) is one of the promising in vitro methods used to predict human carcinogenicity. The neoplastic phenotype is monitored in suitable cells by the formation of foci and observed by light microscopy after staining. Foci exhibit three types of morphological alterations: Type I, characterized by partially transformed cells, and Types II and III, considered to have undergone neoplastic transformation. Foci recognition and scoring have always been carried out visually by a trained human expert. In order to classify foci images automatically, one needs to implement an image understanding algorithm. Herewith, two such algorithms are described and compared by performance. The supervised classifier (as described in previous articles) relies on principal components analysis embedded in a training feedback loop to process the morphological descriptors extracted by "spectrum enhancement" (SE). The unsupervised classifier architecture is based on "partitioning around medoids" and is applied to image descriptors taken from histogram moments (HM). Preliminary results suggest the inadequacy of the HMs as image descriptors compared to those from SE. A justification derived from elementary arguments of real analysis is provided in the Appendix.

  6. Automative Multi Classifier Framework for Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    R. Edbert Rajan

    2015-04-01

    Full Text Available Medical image processing is the technique of creating images of the human body for medical purposes. Nowadays, medical image processing plays a major role in, and poses challenging problems for, critical stages of medical care. Several studies have sought to enhance techniques for medical image processing; however, owing to shortcomings of some advanced technologies, many aspects still need further development. An existing study evaluates the efficacy of medical image analysis with level-set shape, fractal texture, and intensity features to discriminate PF (Posterior Fossa) tumor from other tissues in brain images. To advance medical image analysis and disease diagnosis, we devise an automated subjective-optimality model for segmentation of images based on different sets of features selected from an unsupervised learning model of extracted features. After segmentation, the images are classified. The classification adapts the multiple-classifier framework of previous work, based on the mutual information coefficient of the features selected for the image segmentation procedures. In this study, to enhance the classification strategy, we plan to implement an enhanced multi-classifier framework for the analysis of medical images and disease diagnosis. The performance parameters used for the analysis of the proposed enhanced multi-classifier framework for medical image analysis are multiple-class intensity, image quality, and time consumption.

  7. Exploiting Language Models to Classify Events from Twitter.

    Science.gov (United States)

    Vo, Duc-Thuan; Hai, Vo Thuan; Ock, Cheol-Young

    2015-01-01

    Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with much noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied on large text corpora within computational linguistics. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on the tweets' features, including common term words and relationships among their distinguishing term words. This makes it explicit and convenient to apply k-nearest-neighbour techniques for classification. Experiments on the Edinburgh Twitter Corpus show that our method achieves competitive results for classifying events. PMID:26451139
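The similarity-then-k-NN idea can be sketched in a few lines. The toy below uses only shared-term overlap (Jaccard) as the similarity; the term relationships that the paper derives from ConceptNet/LDA-SP are omitted, and the corpus is invented.

```python
# Toy sketch of tweet classification by similarity + k-NN. Real tweet
# similarity in the paper also uses term relationships; here only common
# term words (Jaccard overlap) are shown.
from collections import Counter

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def knn_label(tweet, labelled, k=3):
    ranked = sorted(labelled, key=lambda t: jaccard(tweet, t[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

corpus = [
    ("earthquake hits the city centre", "disaster"),
    ("magnitude 6 earthquake reported", "disaster"),
    ("team wins the cup final", "sport"),
    ("fans celebrate cup victory", "sport"),
    ("strong earthquake shakes buildings", "disaster"),
]
print(knn_label("another earthquake reported downtown", corpus))   # → disaster
```

The three most similar labelled tweets all share event terms with the query, so the vote is unanimous.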

  8. Exploiting Language Models to Classify Events from Twitter

    Directory of Open Access Journals (Sweden)

    Duc-Thuan Vo

    2015-01-01

    Full Text Available Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with much noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied on large text corpora within computational linguistics. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on the tweets' features, including common term words and relationships among their distinguishing term words. This makes it explicit and convenient to apply k-nearest-neighbour techniques for classification. Experiments on the Edinburgh Twitter Corpus show that our method achieves competitive results for classifying events.

  9. A space-based radio frequency transient event classifier

    Energy Technology Data Exchange (ETDEWEB)

    Moore, K.R.; Blain, C.P.; Caffrey, M.P.; Franz, R.C.; Henneke, K.M.; Jones, R.G.

    1998-03-01

    The Department of Energy is currently investigating economical and reliable techniques for space-based nuclear weapon treaty verification. Nuclear weapon detonations produce RF transients that are signatures of illegal nuclear weapons tests. However, there are many other sources of RF signals, both natural and man-made. Direct digitization of RF signals requires rates of 300 MSamples per second and produces 10^13 samples per day of data to analyze. It is impractical to store and downlink all digitized RF data from such a satellite without a prohibitively expensive increase in the number and capacities of ground stations. Reliable and robust data processing and information extraction must be performed onboard the spacecraft in order to reduce downlinked data to a reasonable volume. The FORTE (Fast On-Orbit Recording of Transient Events) satellite records RF transients in space. These transients will be classified onboard the spacecraft with an Event Classifier: specialized hardware that performs signal preprocessing and neural network classification. The authors describe the Event Classifier requirements, scientific constraints, design, and implementation.

  10. Image Classifying Registration for Gaussian & Bayesian Techniques: A Review

    Directory of Open Access Journals (Sweden)

    Rahul Godghate,

    2014-04-01

    Full Text Available A Bayesian technique for image classifying registration performs image registration and pixel classification simultaneously. Medical image registration is critical for the fusion of complementary information about patient anatomy and physiology, for the longitudinal study of a human organ over time and the monitoring of disease development or treatment effect, for the statistical analysis of population variation in comparison to a so-called digital atlas, for image-guided therapy, etc. The Bayesian technique for image classifying registration is well suited to image pairs that contain two classes of pixels with different inter-image intensity relationships. We show through different experiments that the model can be applied in many different ways. For instance, if the class map is known, it can be used for template-based segmentation; if the full model is used, it can be applied to lesion detection by image comparison. Experiments have been conducted on both real and simulated data. They show that in the presence of an extra class, the classifying registration improves both the registration and the detection, especially when the deformations are small. The proposed model is defined using only two classes, but it is straightforward to extend it to an arbitrary number of classes.

  11. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  12. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
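The filtering idea can be sketched with a naive Bayes classifier standing in for the report's Bayesian network: score each discrete candidate design against previously evaluated designs, and only send high-posterior candidates to the expensive objective function. The feature names, history, and 0.5 threshold below are all invented for illustration.

```python
# Hypothetical sketch of classifier-guided filtering: a Laplace-smoothed
# naive Bayes model (a stand-in for the report's Bayesian network) scores
# discrete designs; only promising ones reach the expensive evaluation.
import math
from collections import defaultdict

def train_nb(designs, labels, alpha=1.0):
    """Count class priors and per-(feature, value) frequencies."""
    priors = defaultdict(float)
    cond = defaultdict(lambda: defaultdict(float))
    for d, y in zip(designs, labels):
        priors[y] += 1
        for i, v in enumerate(d):
            cond[y][(i, v)] += 1
    return priors, cond, alpha

def p_good(design, model):
    """Posterior probability that a design belongs to the 'good' class."""
    priors, cond, alpha = model
    score = {}
    for y, n in priors.items():
        logp = math.log(n)
        for i, v in enumerate(design):
            # Laplace-smoothed conditional probability of this feature value.
            logp += math.log(cond[y][(i, v)] + alpha) - math.log(n + 2 * alpha)
        score[y] = logp
    m = max(score.values())
    z = sum(math.exp(s - m) for s in score.values())
    return math.exp(score["good"] - m) / z

# Toy history of evaluated designs: (generator type, storage size).
hist = [("diesel", "small"), ("diesel", "large"), ("pv", "large"), ("pv", "small")]
lab = ["bad", "good", "good", "bad"]
model = train_nb(hist, lab)

# Filter candidates: only designs with posterior > 0.5 get the expensive call.
candidates = [("pv", "large"), ("diesel", "small")]
promising = [c for c in candidates if p_good(c, model) > 0.5]
print(promising)   # → [('pv', 'large')]
```

The saving comes from skipping the objective evaluation for designs the classifier rules out, at the cost of occasionally discarding a good design the model misjudges.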

  13. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using Self-Organizing Maps. Most people exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire departments operate in environments typified by an increased number of stressful situations, where personnel in action are directed from a control center. Control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, which means that stress affects speech. A system for recognizing stress in speech is therefore needed by the security forces. One possible classifier, popular for its flexibility, is the self-organizing map (SOM), a type of artificial neural network. Here, flexibility means that the classifier is independent of the character of the input data, a feature well suited to speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosodic features were selected as input data because of their sensitivity to emotional changes. The parameters were calculated from speech recordings divided into two classes, namely stressed-state recordings and normal-state recordings. The contribution of the experiment is a method using a SOM classifier for stressed speech detection. Results showed the advantage of this method: its flexibility with respect to the input data.
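The core of SOM training is a single update rule: find the best matching unit (BMU) and pull nearby codebook vectors toward the input. A bare-bones sketch, with random 4-D vectors standing in for the MFCC/LPC/prosody feature vectors (feature extraction is omitted):

```python
# Bare-bones sketch of one Self-Organizing Map training step. The 4-D inputs
# are random stand-ins for speech feature vectors; map size, learning rate
# and neighbourhood width are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((5, 5, 4))          # 5x5 map of 4-D codebook vectors

def train_step(x, weights, lr=0.5, sigma=1.0):
    # Best matching unit (BMU): node whose codebook vector is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Pull every node toward x, scaled by a Gaussian neighbourhood of the BMU.
    ii, jj = np.indices(d.shape)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)   # in-place update
    return bi, bj

x = rng.random(4)                        # stand-in for one feature vector
d_before = np.linalg.norm(weights - x, axis=2).min()
bmu = train_step(x, weights)
d_after = np.linalg.norm(weights[bmu] - x)
print(d_before > d_after)   # → True: the BMU moved toward the input
```

After training on many inputs, map regions specialize; classification then amounts to noting which region an utterance's features activate (here, which class of training data dominated that region).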

  14. ASYMBOOST-BASED FISHER LINEAR CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Wang Xianji; Ye Xueyi; Li Bin; Li Xin; Zhuang Zhenquan

    2008-01-01

    When using AdaBoost to select discriminant features from some feature space (e.g. Gabor feature space) for face recognition, a cascade structure is usually adopted to leverage the asymmetry in the distribution of positive and negative samples. Each node in the cascade structure is a classifier trained by AdaBoost with an asymmetric learning goal of a high recognition rate but only a moderately low false positive rate. One limitation of AdaBoost arises in the context of skewed example distributions and cascade classifiers: AdaBoost minimizes the classification error, which is not guaranteed to achieve the asymmetric node learning goal. In this paper, we propose to use asymmetric AdaBoost (AsymBoost) as a mechanism to address the asymmetric node learning goal. Moreover, feature selection and ensemble-classifier formation, which occur simultaneously in AsymBoost and AdaBoost, are decoupled. Fisher Linear Discriminant Analysis (FLDA) is used on the selected features to learn a linear discriminant function that maximizes the separability of data among the different classes, which we believe can improve recognition performance. The proposed algorithm is demonstrated on face recognition using a Gabor-based representation on the FERET database. Experimental results show that the proposed algorithm yields better recognition performance than AdaBoost itself.

  15. Comparison of artificial intelligence classifiers for SIP attack data

    Science.gov (United States)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server, which classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier which lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms, and compare their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.

  16. Early Detection of Breast Cancer using SVM Classifier Technique

    Directory of Open Access Journals (Sweden)

    Y.Ireaneus Anna Rejani

    2009-11-01

    Full Text Available This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on two problems: how to detect tumors as suspicious regions with very weak contrast to their background, and how to extract features which categorize tumors. The tumor detection method follows the scheme of (a) mammogram enhancement, (b) segmentation of the tumor area, (c) extraction of features from the segmented tumor area, and (d) use of an SVM classifier. Enhancement can be defined as conversion of the image quality to a better and more understandable level. The mammogram enhancement procedure includes filtering, a top-hat operation, and the DWT; contrast stretching is then used to increase the contrast of the image. Segmentation of mammogram images plays an important role in improving the detection and diagnosis of breast cancer; the most common segmentation method used is thresholding. Features are extracted from the segmented breast area, and the final stage classifies the regions using the SVM classifier. The method was tested on 75 mammographic images from the mini-MIAS database. The methodology achieved a sensitivity of 88.75%.
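Thresholding, the segmentation step named above, can be made automatic by searching for the cut that maximizes between-class variance (Otsu's criterion). A generic sketch on a made-up 1-D slice of grey levels, not this paper's full pipeline:

```python
# Generic sketch of automatic threshold selection (Otsu-style exhaustive
# search maximizing between-class variance) on an invented row of grey
# levels; a real mammogram would be a 2-D image.
def otsu_threshold(pixels):
    best_t, best_var = None, -1.0
    for t in sorted(set(pixels))[:-1]:       # keep both sides non-empty
        fg = [p for p in pixels if p > t]
        bg = [p for p in pixels if p <= t]
        wf, wb = len(fg) / len(pixels), len(bg) / len(pixels)
        mf = sum(fg) / len(fg)
        mb = sum(bg) / len(bg)
        between = wf * wb * (mf - mb) ** 2   # between-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

row = [10, 12, 11, 13, 10, 180, 185, 190, 12, 11]   # bright suspected region
t = otsu_threshold(row)
mask = [p > t for p in row]
print(t, mask)   # threshold 13 isolates the three bright pixels
```

The segmented mask is what the feature-extraction stage would then operate on.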

  17. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Directory of Open Access Journals (Sweden)

    Seung Yeon Shin

    Full Text Available In this paper, we present a novel cascaded classification framework for automatic detection of individual and clusters of microcalcifications (μC). Our framework comprises three classification stages: (i) a random forest (RF) classifier for simple features capturing the second order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; (ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and (iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and free-response receiver operating characteristic (FROC) curve for detection of clustered μCs.
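The economics of such a cascade are easy to illustrate: a cheap first stage discards most negatives so that only survivors pay for the expensive second stage. Both stage rules below are invented toy stand-ins, not the paper's RF and DRBM models.

```python
# Illustrative two-stage cascade in the spirit of the RF→DRBM pipeline.
# Stage 1 is a cheap brightness gate; stage 2 is a costlier contrast test
# standing in for a learned model. All thresholds and data are made up.
def stage1_cheap(pixel_value):
    return pixel_value > 100          # most pixels exit here, cheaply

def stage2_costly(neighbourhood):
    # Stand-in for a learned classifier: centre contrast vs. border pixels.
    centre = neighbourhood[len(neighbourhood) // 2]
    border = neighbourhood[:1] + neighbourhood[-1:]
    return centre - sum(border) / len(border) > 50

def cascade(samples):
    detections = []
    for value, neighbourhood in samples:
        if not stage1_cheap(value):
            continue                  # rejected without the expensive test
        if stage2_costly(neighbourhood):
            detections.append((value, neighbourhood))
    return detections

samples = [
    (30,  [28, 30, 31]),     # dark pixel: rejected by stage 1
    (120, [118, 120, 119]),  # bright but flat: rejected by stage 2
    (200, [20, 200, 25]),    # bright and high-contrast: detected
]
print(len(cascade(samples)))   # → 1
```

A clustering pass over the surviving detections (stage iii in the paper) would then group nearby hits into candidate μC clusters.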

  18. Simultaneous removal of NOX and SO2 from flue gases by energizing gases with electrons having energy in the range from 5 eV to 20 eV

    International Nuclear Information System (INIS)

    These notes report the results obtained with an experimental installation able to treat 100 Nm3/h of flue gases, installed at the Thermoelectrical Power Plant at Marghera. The experimental installation, operating on the principle of gas energizing, is able to simultaneously remove 40 to 50% of the NOX and about 100% of the SO2 contained in the flue gases. Better NOX removal efficiency is expected from including in the system a bag filter, which should favour heterogeneous-phase NOX removal reactions. Particulate concentration at the output is between 2 and 5 mg/Nm3. A pulse generator designed and built by Enel was tested; the results were excellent, so work has begun on the preliminary planning of a 200 kW pulse generator that operates on the same principle. (author)

  19. Freeman Chain Code (FCC) Representation in Signature Fraud Detection Based On Nearest Neighbour and Artificial Neural Network (ANN) Classifiers

    Directory of Open Access Journals (Sweden)

    Aini Najwa Azmi

    2014-12-01

    Full Text Available This paper presents a signature verification system that uses Freeman Chain Code (FCC) as a directional feature and data representation. Forty-seven features, derived from six global features, were extracted from the signature images. Before feature extraction, the raw images underwent pre-processing stages of binarization, noise removal with a median filter, cropping, and thinning to produce a Thinned Binary Image (TBI). Euclidean distance is measured and matched between nearest neighbours to find the result. The MCYT-SignatureOff-75 database was used. In our experiments, the lowest FRR achieved is 6.67% and the lowest FAR is 12.44%, with only 1.12 seconds of computational time, using the nearest neighbour classifier. The results are compared with an Artificial Neural Network (ANN) classifier.
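Freeman Chain Code itself is simple to state: each step between consecutive 8-connected boundary pixels is encoded as one of eight direction digits. A minimal sketch on an invented stroke:

```python
# Minimal Freeman Chain Code extraction: each move between consecutive
# 8-connected pixels becomes one digit (0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW,
# 6=S, 7=SE). The stroke below is an invented example path.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(path):
    """Encode an 8-connected pixel path of (x, y) tuples as FCC digits."""
    code = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return code

# A small L-shaped stroke: two steps east, then two steps north.
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(freeman_chain_code(stroke))   # → [0, 0, 2, 2]
```

In a system like the one above, such digit strings (and statistics over them) would be computed from the thinned binary image and fed to the nearest-neighbour or ANN classifier.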

  20. Removal of heavy metals using waste eggshell

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The removal capacity of reused eggshell for toxic heavy metals was studied. As a pretreatment process for the preparation of reused material from waste eggshell, calcination was performed in a furnace at 800℃ for 2 h after crushing the dried waste eggshell. The calcination behavior, qualitative and quantitative elemental information, mineral type and surface characteristics of the eggshell before and after calcination were examined by thermal gravimetric analysis (TGA), X-ray fluorescence (XRF), X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. After calcination, the major inorganic component was identified as Ca (lime, 99.63%), while K, P and Sr were identified as minor components. When calcined eggshell was applied in the treatment of synthetic wastewater containing heavy metals, complete removal of Cd as well as above 99% removal of Cr was observed after 10 min. Although the natural eggshell had some removal capacity for Cd and Cr, complete removal was not accomplished even after 60 min due to a much slower removal rate. In contrast to Cd and Cr, however, more efficient removal of Pb was observed with the natural eggshell than with the calcined eggshell. In the application of the calcined eggshell to the treatment of real electroplating wastewater, the calcined eggshell showed a promising removal capacity for heavy metal ions and a good neutralization capacity in the treatment of strongly acidic wastewater.

  1. Understanding and classifying metabolite space and metabolite-likeness.

    Directory of Open Access Journals (Sweden)

    Julio E Peironcely

    Full Text Available While the entirety of 'Chemical Space' is huge (and assumed to contain between 10^63 and 10^200 'small molecules'), distinct subsets of this space can nonetheless be defined according to certain structural parameters. An example of such a subspace is the chemical space spanned by endogenous metabolites, defined as 'naturally occurring' products of an organism's metabolism. In order to understand this part of chemical space in more detail, we analyzed the chemical space populated by human metabolites in two ways. Firstly, in order to understand metabolite space better, we performed Principal Component Analysis (PCA), hierarchical clustering and scaffold analysis of metabolites and non-metabolites in order to analyze which chemical features are characteristic of both classes of compounds. Here we found that heteroatom (both oxygen and nitrogen) content, as well as the presence of particular ring systems, was able to distinguish both groups of compounds. Secondly, we established which molecular descriptors and classifiers are capable of distinguishing metabolites from non-metabolites, by assigning a 'metabolite-likeness' score. It was found that the combination of MDL Public Keys and Random Forest exhibited the best overall classification performance, with an AUC value of 99.13%, a specificity of 99.84% and a selectivity of 88.79%. This performance is slightly better than previous classifiers; and interestingly we found that drugs occupy two distinct areas of metabolite-likeness, the one being more 'synthetic' and the other being more 'metabolite-like'. Also, on a truly prospective dataset of 457 compounds, 95.84% correct classification was achieved. Overall, we are confident that we contributed to the tasks of classifying metabolites, as well as to understanding metabolite chemical space better. This knowledge can now be used in the development of new drugs that need to resemble metabolites, and in our work particularly for assessing the metabolite
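Structural-key fingerprints such as the MDL Public Keys used above represent each molecule as a set of present/absent substructure bits, typically compared with Tanimoto similarity. A toy illustration with invented feature names standing in for real keys:

```python
# Toy illustration of fingerprint comparison: bit sets (invented feature
# names, not real MDL Public Keys) compared by Tanimoto similarity, the
# standard measure for such structural keys.
def tanimoto(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

metabolite_like = {"ring6", "OH", "O_in_ring", "CH2OH"}   # sugar-ish keys
synthetic_like = {"ring6", "Cl", "NO2", "CF3"}            # halogenated keys
candidate = {"ring6", "OH", "CH2OH", "O_in_ring", "COOH"}

sim_met = tanimoto(candidate, metabolite_like)
sim_syn = tanimoto(candidate, synthetic_like)
print(round(sim_met, 2), round(sim_syn, 2))   # → 0.8 0.12
```

A Random Forest trained on thousands of such bit vectors, rather than a single similarity, is what produces the 'metabolite-likeness' score the record describes.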

  2. Particle adhesion and removal

    CERN Document Server

    Mittal, K L

    2015-01-01

    The book provides a comprehensive and easily accessible reference source covering all important aspects of particle adhesion and removal. The core objective is to cover both fundamental and applied aspects of particle adhesion and removal, with emphasis on recent developments. Among the topics covered are: 1. Fundamentals of surface forces in particle adhesion and removal. 2. Mechanisms of particle adhesion and removal. 3. Experimental methods (e.g. AFM, SFA, SFM, IFM, etc.) to understand particle-particle and particle-substrate interactions. 4. Mechanics of adhesion of micro- and  n

  3. Removal of silver nanoparticles by coagulation processes

    International Nuclear Information System (INIS)

    Highlights: • This study investigated the removal of AgNP suspensions by four regular coagulants. • The optimal removal efficiencies for the four coagulants were achieved at pH 7.5. • The removal efficiency of AgNPs was affected by the natural water characteristics. • TEM and XRD showed that AgNPs or silver-containing NPs were adsorbed onto the flocs. -- Abstract: Commercial use of silver nanoparticles (AgNPs) creates a potential route for human exposure via potable water. Coagulation followed by sedimentation, a conventional technique in drinking water treatment facilities, may become an important barrier protecting humans from AgNP exposure. This study investigated the removal of AgNP suspensions by four regular coagulants. In the aluminum sulfate and ferric chloride coagulation systems, the water parameters slightly affected the AgNP removal. In the poly aluminum chloride and polyferric sulfate coagulation systems, however, the optimal removal efficiencies were achieved at pH 7.5, while higher or lower pH reduced the AgNP removal. In addition, increasing natural organic matter (NOM) reduced the AgNP removal, and Ca2+ and suspended solids concentrations also affected it. Results from transmission electron microscopy and X-ray diffraction showed that AgNPs or silver-containing nanoparticles were adsorbed onto the flocs. Finally, natural water samples were used to validate AgNP removal by coagulation. This study suggests that in the case of release of AgNPs into the source water, the traditional water treatment process of coagulation/sedimentation can remove AgNPs and minimize the silver ion concentration under well-optimized conditions.

  4. Classifying the future of universes with dark energy

    International Nuclear Information System (INIS)

    We classify the future of the universe for general cosmological models including matter and dark energy. If the equation of state of dark energy is less than -1, the age of the universe becomes finite. We compute the remaining age of the universe for such universe models. The behaviour of the future growth of matter density perturbations is also studied. We find that the collapse of the spherical overdensity region is greatly changed if the equation of state of dark energy is less than -1.

  5. Classifying Cubic Edge-Transitive Graphs of Order 8p

    Indian Academy of Sciences (India)

    Mehdi Alaeiyan; M K Hosseinipoor

    2009-11-01

    A simple undirected graph is said to be semisymmetric if it is regular and edge-transitive but not vertex-transitive. Let p be a prime. It was shown by Folkman (J. Combin. Theory 3 (1967) 215-232) that a regular edge-transitive graph of order 2p or 2p² is necessarily vertex-transitive. In this paper, an extension of his result in the case of cubic graphs is given. It is proved that every cubic edge-transitive graph of order 8p is symmetric, and then all such graphs are classified.

  6. Colorfulness Enhancement Using Image Classifier Based on Chroma-histogram

    Institute of Scientific and Technical Information of China (English)

    Moon-cheol KIM; Kyoung-won LIM

    2010-01-01

    The paper proposes colorfulness enhancement of pictorial images using an image classifier based on the chroma histogram. This approach first estimates the strength of colorfulness of images and their types. With this information, the algorithm automatically adjusts image colorfulness for a more natural image look. With the help of an additional detection of skin colors and pixel-chroma-adaptive local processing, the algorithm produces a more natural-looking image. The algorithm's performance was tested in an image quality judgment experiment with 20 persons. The experimental result indicates a better image preference.

  7. Support vector machine classifiers for large data sets.

    Energy Technology Data Exchange (ETDEWEB)

    Gertz, E. M.; Griffin, J. D.

    2006-01-31

    This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented packaging OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques--one a parallel direct solver, the other a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.

  8. On-line computing in a classified environment

    International Nuclear Information System (INIS)

    Westinghouse Hanford Company (WHC) recently developed a Department of Energy (DOE) approved real-time, on-line computer system to control nuclear material. The system simultaneously processes both classified and unclassified information. Implementation of this system required the application of many security techniques. The system has a secure but user-friendly interface. Many software applications protect the integrity of the database from malevolent or accidental errors. Programming practices ensure the integrity of the computer system software. The audit trail and the report generation capability record user actions and the status of the nuclear material inventory.

  9. A Fast Scalable Classifier Tightly Integrated with RDBMS

    Institute of Scientific and Technical Information of China (English)

    刘红岩; 陆宏钧; 陈剑

    2002-01-01

    In this paper, we report our success in building efficient scalable classifiers by exploring the capabilities of modern relational database management systems (RDBMS). In addition to high classification accuracy, the unique features of the approach include its high training speed, linear scalability, and simplicity in implementation. More importantly, the major computation required in the approach can be implemented using standard functions provided by a modern relational DBMS. Besides, with the effective rule pruning strategy, the algorithm proposed in this paper can produce a compact set of classification rules. The results of experiments conducted for performance evaluation and analysis are presented.
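The key point, that the classifier's heavy counting can run inside the RDBMS, can be sketched with one GROUP BY query. The table, data, and rule form below are invented, and sqlite3 stands in for a full RDBMS:

```python
# Sketch of pushing a rule-based classifier's counting into the database:
# support counts for candidate rules "attribute value -> class" computed
# with plain SQL. Table schema and rows are invented toy data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE train (outlook TEXT, windy TEXT, play TEXT)")
conn.executemany("INSERT INTO train VALUES (?, ?, ?)", [
    ("sunny", "no", "yes"), ("sunny", "yes", "no"),
    ("rain", "no", "yes"), ("rain", "yes", "no"), ("sunny", "no", "yes"),
])

# One GROUP BY gives the support of every (outlook, class) rule at once,
# entirely inside the database engine.
rows = conn.execute(
    "SELECT outlook, play, COUNT(*) FROM train "
    "GROUP BY outlook, play ORDER BY COUNT(*) DESC"
).fetchall()
print(rows[0])   # → ('sunny', 'yes', 2): the best-supported rule
```

Because GROUP BY aggregation is what database engines optimize best, this style of training scales linearly with table size without moving the data out of the RDBMS, which is the efficiency argument the paper makes.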

  10. Brain Computer Interface. Comparison of Neural Networks Classifiers.

    OpenAIRE

    Martínez Pérez, Jose Luis; Barrientos Cruz, Antonio

    2008-01-01

    Brain Computer Interface is an emerging technology that allows new output paths to communicate the user’s intentions without use of normal output ways, such as muscles or nerves (Wolpaw, J. R.; et al., 2002). In order to achieve this objective, BCI devices make use of classifiers that translate the inputs provided by the user’s brain signals into commands for external devices. The primary uses of this technology will benefit persons with some kind of blocking disease, for example: ALS, brainstem st...

  11. Some factors influencing interobserver variation in classifying simple pneumoconiosis.

    OpenAIRE

    Musch, D C; Higgins, I T; Landis, J R

    1985-01-01

    Three experienced physician readers assessed the chest radiographs of 743 men from a coal mining community in West Virginia for the signs of simple pneumoconiosis, using the ILO U/C 1971 Classification of Radiographs of the Pneumoconioses. The number of films categorised by each reader as showing evidence of simple pneumoconiosis varied from 63 (8.5%) to 114 (15.3%) of the 743 films classified. The effect of film quality and obesity on interobserver agreement was assessed by use of kappa-type...

  12. BIOPHARMACEUTICS CLASSIFICATION SYSTEM: A STRATEGIC TOOL FOR CLASSIFYING DRUG SUBSTANCES

    Directory of Open Access Journals (Sweden)

    Rohilla Seema

    2011-07-01

    Full Text Available The biopharmaceutical classification system (BCS) is a scientific approach for classifying drug substances based on their dose/solubility ratio and intestinal permeability. The BCS has been developed to allow prediction of the in vivo pharmacokinetic performance of drug products from measurements of permeability and solubility. Moreover, drugs can be categorized into four BCS classes on the basis of permeability and solubility, namely: high permeability and high solubility, high permeability and low solubility, low permeability and high solubility, and low permeability and low solubility. The present review summarizes the principles, objectives, benefits, classification and applications of BCS.
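
The four-class scheme is a simple two-criterion lookup once the solubility and permeability judgments have been made. A sketch follows; deciding the booleans from dose/solubility ratio and permeability measurements is the hard, regulatory part and is not modeled here:

```python
# BCS assigns one of four classes from two boolean criteria.
def bcs_class(high_solubility: bool, high_permeability: bool) -> str:
    return {
        (True, True): "Class I (high solubility, high permeability)",
        (False, True): "Class II (low solubility, high permeability)",
        (True, False): "Class III (high solubility, low permeability)",
        (False, False): "Class IV (low solubility, low permeability)",
    }[(high_solubility, high_permeability)]

print(bcs_class(high_solubility=True, high_permeability=False))
```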

  13. Use RAPD Analysis to Classify Tea Trees in Yunnan

    Institute of Scientific and Technical Information of China (English)

    SHAO Wan-fang; PANG Rui-hua; DUAN Hong-xing; WANG Ping-sheng; XU Mei; ZHANG Ya-ping; LI Jia-hua

    2003-01-01

    RAPD assessment of genetic variations in 45 tea trees in Yunnan was carried out. Eight primers selected from 40 random primers were used to amplify the 45 tea samples, and a total of 95 DNA bands were amplified, of which 90 (94.7%) were polymorphic. The average number of DNA bands amplified by each primer was 11.5. Based on the results of UPGMA cluster analysis of the 95 DNA bands amplified by the 8 primers, all the tested materials could be classified into 7 groups, including 5 complex groups and 2 simple groups, which was basically identical with the morphological classification. In addition, there was some variation within the 2 simple groups.

  14. Skin lesion removal-aftercare

    Science.gov (United States)

    ... aftercare; Nevi - removal aftercare; Scissor excision aftercare; Skin tag removal aftercare; Mole removal aftercare; Skin cancer removal ... to the principles of the Health on the Net Foundation (www.hon.ch). The information provided herein ...

  15. Deep learning for electronic cleansing in dual-energy CT colonography

    Science.gov (United States)

    Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki

    2016-03-01

    The purpose of this study was to develop a novel deep-learning-based electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC). In this method, an ensemble of deep convolutional neural networks (DCNNs) is used to classify each voxel of DE-CTC image volumes into one of five multi-material (MUMA) classes: luminal air, soft tissue, tagged fecal material, a partial-volume boundary between air and tagging, or a partial-volume boundary between soft tissue and tagging. Each DCNN acts as a voxel classifier. At each voxel, a region-of-interest (ROI) centered at the voxel is extracted. After mapping the pixels of the ROI to the input layer of a DCNN, a series of convolutional and max-pooling layers is used to extract features with increasing levels of abstraction. The output layer produces the probabilities at which the input voxel belongs to each of the five MUMA classes. To develop an ensemble of DCNNs, we trained multiple DCNNs based on multi-spectral image volumes derived from the DE-CTC images, including material decomposition images and virtual monochromatic images. The outputs of these DCNNs were then combined by means of a meta-classifier for precise classification of the voxels. Finally, the electronically cleansed CTC images were generated by removing regions that were classified as other than soft tissue, followed by colon surface reconstruction. Preliminary results based on 184,320 images sampled from 30 clinical CTC cases showed a higher accuracy in labeling these classes than that of our previous machine-learning methods, indicating that deep-learning-based multi-spectral EC can accurately remove residual fecal materials from CTC images without generating major EC artifacts.
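
The ensemble-plus-cleansing pipeline reduces to a few array operations. In the sketch below, plain averaging stands in for the paper's trained meta-classifier, and random arrays stand in for the per-DCNN probability maps:

```python
import numpy as np

CLASSES = ["air", "soft_tissue", "tagged",
           "air_tag_boundary", "tissue_tag_boundary"]
rng = np.random.default_rng(1)
# Synthetic stand-in for three DCNN outputs: per-voxel probabilities
# over the five MUMA classes, shape (n_networks, n_voxels, n_classes).
probs = rng.dirichlet(np.ones(5), size=(3, 1000))
fused = probs.mean(axis=0)     # averaging in place of the meta-classifier
labels = fused.argmax(axis=1)
# Electronic cleansing: retain only voxels labeled soft tissue.
soft_tissue_mask = labels == CLASSES.index("soft_tissue")
print(soft_tissue_mask.sum(), "of", labels.size, "voxels retained")
```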

  16. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    Directory of Open Access Journals (Sweden)

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-12-01

    Full Text Available This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student, argumentative essays. The paragraphs selected were initial, middle, and final paragraphs and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion and lexical sophistication and then modeled using statistical techniques. The paragraphs were also classified by human raters based on paragraph positioning. The performance of the reported model was well above chance and reported an accuracy of classification that was similar to human judgments of paragraph type (66% accuracy for humans versus 65% accuracy for our model). The model's accuracy increased when longer paragraphs that provided more linguistic coverage and paragraphs judged by human raters to be of higher quality were examined. The findings support the notion that paragraph types contain specific linguistic features that allow them to be distinguished from one another. The findings reported in this study should prove beneficial in classroom writing instruction and in automated writing assessment.

  17. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    Full Text Available A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  18. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Science.gov (United States)

    Bashashati, Hossein; Ward, Rabab K; Birch, Gary E; Bashashati, Ali

    2015-01-01

    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  19. Deep convolutional neural networks for classifying GPR B-scans

    Science.gov (United States)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.

  20. Decision Tree Classifiers for Star/Galaxy Separation

    CERN Document Server

    Vasconcellos, E C; Gal, R R; LaBarbera, F L; Capelato, H V; Velho, H F Campos; Trevisan, M; Ruiz, R S R

    2010-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT and Ball et al. (2006). We find that our FT classifier is comparable or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with m...

  1. Integrating language models into classifiers for BCI communication: a review

    Science.gov (United States)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could use the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  3. Automatic misclassification rejection for LDA classifier using ROC curves.

    Science.gov (United States)

    Menon, Radhika; Di Caterina, Gaetano; Lakany, Heba; Petropoulakis, Lykourgos; Conway, Bernard A; Soraghan, John J

    2015-08-01

    This paper presents a technique to improve the performance of an LDA classifier by determining if the predicted classification output is a misclassification and thereby rejecting it. This is achieved by automatically computing a class specific threshold with the help of ROC curves. If the posterior probability of a prediction is below the threshold, the classification result is discarded. This method of minimizing false positives is beneficial in the control of electromyography (EMG) based upper-limb prosthetic devices. It is hypothesized that a unique EMG pattern is associated with a specific hand gesture. In reality, however, EMG signals are difficult to distinguish, particularly in the case of multiple finger motions, and hence classifiers are trained to recognize a set of individual gestures. However, it is imperative that misclassifications be avoided because they result in unwanted prosthetic arm motions which are detrimental to device controllability. This warrants the need for the proposed technique wherein a misclassified gesture prediction is rejected resulting in no motion of the prosthetic arm. The technique was tested using surface EMG data recorded from thirteen amputees performing seven hand gestures. Results show the number of misclassifications was effectively reduced, particularly in cases with low original classification accuracy. PMID:26736304
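
The rejection rule itself is compact: compare the winning posterior against a class-specific threshold and discard the prediction if it falls short. A NumPy sketch on synthetic two-class data follows; the thresholds here are fixed by hand, whereas the paper computes them from ROC curves:

```python
import numpy as np

# Two synthetic Gaussian classes standing in for EMG feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.5, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

mu = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
Sw = 0.5 * (np.cov(X[y == 0].T) + np.cov(X[y == 1].T))  # pooled within-class cov
Sinv = np.linalg.inv(Sw)

def posterior(x):
    # Linear discriminant scores -> softmax posterior (equal priors assumed).
    d = np.array([x @ Sinv @ m - 0.5 * m @ Sinv @ m for m in mu])
    e = np.exp(d - d.max())
    return e / e.sum()

thresholds = np.array([0.7, 0.7])      # per-class thresholds (illustrative)
p = posterior(np.array([1.25, 1.25]))  # ambiguous point midway between classes
k = int(p.argmax())
decision = k if p[k] >= thresholds[k] else None  # None = prediction rejected
print(p, decision)
```

For the prosthetic-control use case, a rejected prediction simply produces no arm motion, which is the behavior the paper argues is safer than acting on a likely misclassification.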

  4. Using Narrow Band Photometry to Classify Stars and Brown Dwarfs

    CERN Document Server

    Mainzer, A K; Sievers, J L; Young, E T; McLean, Ian S.

    2004-01-01

    We present a new system of narrow band filters in the near infrared that can be used to classify stars and brown dwarfs. This set of four filters, spanning the H band, can be used to identify molecular features unique to brown dwarfs, such as H2O and CH4. The four filters are centered at 1.495 um (H2O), 1.595 um (continuum), 1.66 um (CH4), and 1.75 um (H2O). Using two H2O filters allows us to solve for individual objects' reddenings. This can be accomplished by constructing a color-color-color cube and rotating it until the reddening vector disappears. We created a model of predicted color-color-color values for different spectral types by integrating filter bandpass data with spectra of known stars and brown dwarfs. We validated this model by making photometric measurements of seven known L and T dwarfs, ranging from L1 - T7.5. The photometric measurements agree with the model to within +/-0.1 mag, allowing us to create spectral indices for different spectral types. We can classify A through early M stars to...

  5. Classifying and mapping wetlands and peat resources using digital cartography

    Science.gov (United States)

    Cameron, Cornelia C.; Emery, David A.

    1992-01-01

    Digital cartography allows the portrayal of spatial associations among diverse data types and is ideally suited for land use and resource analysis. We have developed methodology that uses digital cartography for the classification of wetlands and their associated peat resources and applied it to a 1:24 000 scale map area in New Hampshire. Classifying and mapping wetlands involves integrating the spatial distribution of wetlands types with depth variations in associated peat quality and character. A hierarchically structured classification that integrates the spatial distribution of variations in (1) vegetation, (2) soil type, (3) hydrology, (4) geologic aspects, and (5) peat characteristics has been developed and can be used to build digital cartographic files for resource and land use analysis. The first three parameters are the bases used by the National Wetlands Inventory to classify wetlands and deepwater habitats of the United States. The fourth parameter, geological aspects, includes slope, relief, depth of wetland (from surface to underlying rock or substrate), wetland stratigraphy, and the type and structure of solid and unconsolidated rock surrounding and underlying the wetland. The fifth parameter, peat characteristics, includes the subsurface variation in ash, acidity, moisture, heating value (Btu), sulfur content, and other chemical properties as shown in specimens obtained from core holes. These parameters can be shown as a series of map data overlays with tables that can be integrated for resource or land use analysis.

  6. Electronics and electronic systems

    CERN Document Server

    Olsen, George H

    1987-01-01

    Electronics and Electronic Systems explores the significant developments in the field of electronics and electronic devices. This book is organized into three parts encompassing 11 chapters that discuss the fundamental circuit theory and the principles of analog and digital electronics. This book deals first with the passive components of electronic systems, such as resistors, capacitors, and inductors. These topics are followed by a discussion on the analysis of electronic circuits, which involves three ways, namely, the actual circuit, graphical techniques, and rule of thumb. The remaining p

  7. Classifying regional development in Iran (Application of Composite Index Approach

    Directory of Open Access Journals (Sweden)

    A. Sharifzadeh

    2012-01-01

    Full Text Available Extended abstract
    1- Introduction
    The spatial economy of Iran, like that of so many other developing countries, is characterized by an uneven spatial pattern of economic activities. The problem of spatial inequality emerged when efficiency-oriented sectoral policies came into conflict with the spatial dimension of development (Atash, 1988). Due to this conflict, extremely imbalanced development was created in Iran. Moreover, knowledge of the uneven spatial distribution of economic activities in Iran is incomplete. So, there is an urgent need for more efficient and effective design, targeting, and implementation of interventions to manage spatial imbalances in development. Hence, the identification of development patterns at the spatial scale and the factors generating them can help improve planning if development programs are focused on removing the constraints adversely affecting development in potentially good areas. There is a need for research that would describe and explain the problem of spatial development patterns, as well as propose possible strategies which can be used to develop the country and reduce the spatial imbalances. The main objective of this research was to determine spatial economic development levels in order to identify the spatial pattern of development and explain the determinants of such imbalance in Iran, based on the methodology of a composite index of development. Then, Iran's provinces were ranked and classified according to the calculated composite index. To collect the required data, the census of 2006 and yearbooks from various years were used.
    2- Theoretical bases
    Theories of regional inequality as well as empirical evidence regarding actual trends at the national or international level have been discussed and debated in the economic literature for over three decades. Early debates concerning the impact of market mechanisms on regional inequality in the West (Myrdal, 1957) have become popular again in the 1990s. There is a conflict on probable outcomes

  8. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento

    2013-11-01

    Full Text Available In this paper, the performance of a multi-class least squares support vector machine (LS-SVM) is compared with that of a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). To develop the comparison, a data set consisting of 5000 images of handwritten numeric digits was used (500 images for each number from 0-9), each image of 20 x 20 pixels. The inputs to each of the systems were 400-dimensional vectors corresponding to each image (no feature extraction was performed). Both classifiers used the OneVsAll strategy to enable multi-classification and a random cross-validation function for the process of minimizing the cost function. The metrics of comparison were precision and training time under the same computational conditions. Both techniques evaluated achieved a precision above 95%, with LS-SVM slightly more accurate. In computational cost, however, there was a marked difference: LS-SVM training required 16.42% less time than the logistic regression model under the same computational conditions.
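
The OneVsAll strategy both classifiers share is easy to sketch: train one binary scorer per class and let the most confident one win. Below is a minimal NumPy logistic-regression version on synthetic 2-D points (the paper's inputs are 400-dimensional digit images, and its LS-SVM side is not modeled):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One binary logistic regression per class (OneVsAll), trained by
# batch gradient descent; a toy stand-in for the paper's classifiers.
def train_one_vs_all(X, y, n_classes, lr=0.5, epochs=500):
    W = np.zeros((n_classes, X.shape[1]))
    b = np.zeros(n_classes)
    for k in range(n_classes):
        target = (y == k).astype(float)       # class k vs the rest
        for _ in range(epochs):
            p = sigmoid(X @ W[k] + b[k])
            W[k] -= lr * (X.T @ (p - target)) / len(target)
            b[k] -= lr * (p - target).mean()
    return W, b

rng = np.random.default_rng(3)
centers = np.array([[0, 0], [4, 0], [0, 4]])
X = np.vstack([rng.normal(c, 0.4, (40, 2)) for c in centers])
y = np.repeat([0, 1, 2], 40)
W, b = train_one_vs_all(X, y, 3)
pred = (X @ W.T + b).argmax(axis=1)           # most confident scorer wins
print("training accuracy:", (pred == y).mean())
```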

  9. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    CERN Document Server

    Luqman, Muhammad Muzzamil; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well known challenge in the field of document image analysis and is at the heart of most graphic recognition systems. Our method uses a structural approach for symbol representation and a statistical classifier for symbol recognition. In our system we represent symbols by their graph based signatures: a graphic symbol is vectorized and is converted to an attributed relational graph, which is used for computing a feature vector for the symbol. This signature corresponds to the geometry and topology of the symbol. We learn a Bayesian network to encode the joint probability distribution of symbol signatures and use it in a supervised learning scenario for graphic symbol recognition. We have evaluated our method on synthetically deformed and degraded images of pre-segmented 2D architectural and electronic symbols from GREC databases and have obtained encouraging recognition rates.

  10. Performance evaluation of artificial intelligence classifiers for the medical domain.

    Science.gov (United States)

    Smith, A E; Nugent, C D; McClean, S I

    2002-01-01

    The application of artificial intelligence systems is still not widespread in the medical field, however there is an increasing necessity for these to handle the surfeit of information available. One drawback to their implementation is the lack of criteria or guidelines for the evaluation of these systems. This is the primary issue in their acceptability to clinicians, who require them for decision support and therefore need evidence that these systems meet the special safety-critical requirements of the domain. This paper shows evidence that the most prevalent form of intelligent system, neural networks, is generally not being evaluated rigorously regarding classification precision. A taxonomy of the types of evaluation tests that can be carried out, to gauge inherent performance of the outputs of intelligent systems has been assembled, and the results of this presented in a clear and concise form, which should be applicable to all intelligent classifiers for medicine.
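
The kind of rigorous evaluation the paper calls for starts from the confusion matrix; for a binary medical classifier the safety-critical figures are sensitivity and specificity. A minimal sketch with invented labels:

```python
import numpy as np

# Invented ground truth and predictions for a binary medical classifier.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
fn = np.sum((y_true == 1) & (y_pred == 0))   # missed positives
tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))   # false alarms

sensitivity = tp / (tp + fn)   # fraction of true cases detected
specificity = tn / (tn + fp)   # fraction of healthy cases cleared
print(sensitivity, specificity)
```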

  11. Handwritten Bangla Alphabet Recognition using an MLP Based Classifier

    CERN Document Server

    Basu, Subhadip; Sarkar, Ram; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak Kumar

    2012-01-01

    The work presented here involves the design of a Multi Layer Perceptron (MLP) based classifier for recognition of handwritten Bangla alphabet using a 76-element feature set. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten characters of the Bangla alphabet includes 24 shadow features, 16 centroid features and 36 longest-run features. Recognition performances of the MLP designed to work with this feature set are experimentally observed as 86.46% and 75.05% on the samples of the training and the test sets respectively. The work has useful application in the development of a complete OCR system for handwritten Bangla text.
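
Of the three feature families, the longest-run features are the easiest to illustrate: the length of the longest run of foreground pixels along each row or column of the binary character image. A small NumPy sketch (the 3x4 image is invented; the paper computes 36 such features from real character images):

```python
import numpy as np

def longest_run(vec):
    # Length of the longest consecutive run of nonzero entries.
    best = cur = 0
    for v in vec:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

img = np.array([[0, 1, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]])
row_feats = [longest_run(r) for r in img]     # per-row longest runs
col_feats = [longest_run(c) for c in img.T]   # per-column longest runs
print(row_feats, col_feats)
```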

  12. Prediction of Pork Quality by Fuzzy Support Vector Machine Classifier

    Science.gov (United States)

    Zhang, Jianxi; Yu, Huaizhi; Wang, Jiamin

    Existing objective methods to evaluate pork quality in general do not yield satisfactory results and their applications in the meat industry are limited. In this study, a fuzzy support vector machine (FSVM) method was developed to evaluate and predict pork quality rapidly and nondestructively. Firstly, the discrete wavelet transform (DWT) was used to eliminate the noise component in the original spectrum and a new spectrum was reconstructed. Then, considering that the characteristic variables were still correlated and contained some redundant information, principal component analysis (PCA) was carried out. Lastly, FSVM was developed to differentiate and classify pork samples into different quality grades using the features from PCA. Jackknife tests on the working datasets indicated that the prediction accuracies were higher than those of other methods.

  13. A Speedy Cardiovascular Diseases Classifier Using Multiple Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Wah Ching Lee

    2015-01-01

    Full Text Available Each year, some 30 percent of global deaths are caused by cardiovascular diseases. This figure is worsening due to both the increasing elderly population and severe shortages of medical personnel. The development of a cardiovascular diseases classifier (CDC) for auto-diagnosis will help solve the problem. Former CDCs did not achieve quick evaluation of cardiovascular diseases. In this letter, a new CDC to achieve speedy detection is investigated. This investigation incorporates analytic hierarchy process (AHP)-based multiple criteria decision analysis (MCDA) to develop feature vectors using a Support Vector Machine. The MCDA facilitates the efficient assignment of appropriate weightings to potential patients, thus scaling down the number of features. Since the new CDC adopts only the most meaningful features for discrimination between healthy persons and cardiovascular disease patients, a speedy detection of cardiovascular diseases has been successfully implemented.
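
The AHP step assigns criterion weights as the principal eigenvector of a pairwise-comparison matrix. A sketch with an invented 3x3 comparison matrix (the paper's clinical criteria and the downstream SVM are not modeled):

```python
import numpy as np

# Pairwise-comparison matrix: entry (i, j) says how much more important
# criterion i is than criterion j (reciprocal below the diagonal).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                 # normalized AHP priority weights
print(np.round(w, 3))
```

By the Perron-Frobenius theorem the principal eigenvector of such a positive matrix has same-sign components, so the normalization always yields valid non-negative weights.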

  14. The Motion Trace of Particles in Classifying Flow Field

    Institute of Scientific and Technical Information of China (English)

    LI Guohua; NIE Wenping; YU Yongfu

    2005-01-01

    According to the theory of the stochastic trajectory model of particles in gas-solid two-phase flows, a two-phase turbulence model between the blades in the inner cavity of the FW-Φ150 horizontal turbo classifier was established, and the commonly-used PHOENICS code was adopted to carry out the numerical simulation. The flow characteristics under a given condition were obtained, as well as the motion traces of particles with different diameters entering from a given initial location and passing through the flow field between the blades under the corresponding conditions. This research method quite directly demonstrates the motion of the particles. An experiment was performed to verify the accuracy of the numerical simulation results.

  15. Support vector classifier based on principal component analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Support vector classifier (SVC) has superior advantages for small-sample learning problems with high dimensions, and especially better generalization ability. However, there is some redundancy among the high dimensions of the original samples, and the main features of the samples may be picked out first to improve the performance of SVC. A principal component analysis (PCA) is employed to reduce the feature dimensions of the original samples and pre-select the main features efficiently, and an SVC is constructed in the selected feature space to improve the learning speed and identification rate of SVC. Furthermore, a heuristic genetic algorithm-based automatic model selection is proposed to determine the hyperparameters of SVC and evaluate the performance of the learning machines. Experiments performed on the Heart and Adult benchmark data sets demonstrate that the proposed PCA-based SVC not only reduces the test time drastically, but also improves the identification rates effectively.
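
The PCA preprocessing step can be sketched with an SVD of the centered data. In the toy example below, a nearest-centroid rule stands in for the paper's SVC, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(0, 1, (100, 10))
X[:50, 0] += 4                  # class signal lives along one direction
y = np.array([0] * 50 + [1] * 50)

# PCA via SVD: project onto the top 2 principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T               # reduced 2-D feature space

# Nearest-centroid classifier in PCA space (stand-in for the SVC).
centroids = np.array([Z[y == k].mean(axis=0) for k in (0, 1)])
pred = np.linalg.norm(Z[:, None] - centroids, axis=2).argmin(axis=1)
print("training accuracy:", (pred == y).mean())
```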

  16. On the way of classifying new states of active matter

    Science.gov (United States)

    Menzel, Andreas M.

    2016-07-01

    With ongoing research into the collective behavior of self-propelled particles, new states of active matter are revealed. Some of them are entirely based on the non-equilibrium character and do not have an immediate equilibrium counterpart. In their recent work, Romanczuk et al (2016 New J. Phys. 18 063015) concentrate on the characterization of smectic-like states of active matter. A new type, referred to by the authors as smectic P, is described. In this state, the active particles form stacked layers and self-propel along them. Identifying and classifying states and phases of non-equilibrium matter, including the transitions between them, is an up-to-date effort that will certainly extend for a longer period into the future.

  17. Symbolic shape descriptors for classifying craniosynostosis deformations from skull imaging.

    Science.gov (United States)

    Lin, H; Ruiz-Correa, S; Shapiro, L G; Hing, A; Cunningham, M L; Speltz, M; Sze, R

    2005-01-01

    Craniosynostosis is a serious condition of childhood, caused by the early fusion of the sutures of the skull. The resulting abnormal skull development can lead to severe deformities, increased intra-cranial pressure, as well as vision, hearing and breathing problems. In this work we develop a novel approach to accurately classify deformations caused by metopic and isolated sagittal synostosis. Our method combines a novel set of symbolic shape descriptors and off-the-shelf classification tools to model morphological variations that characterize the synostotic skull. We demonstrate the efficacy of our methodology in a series of large-scale classification experiments that contrast the performance of our proposed symbolic descriptors to those of traditional numeric descriptors, such as clinical severity indices, Fourier-based descriptors and cranial image quantifications. PMID:17281714

  18. Higher School Marketing Strategy Formation: Classifying the Factors

    Directory of Open Access Journals (Sweden)

    N. K. Shemetova

    2012-01-01

    Full Text Available The paper deals with the main trends of higher school management strategy formation. The author specifies the educational changes in the modern information society determining the strategy options. For each professional training level the author denotes the set of strategic factors affecting the educational service consumers and, therefore, the effectiveness of the higher school marketing. The given factors are classified from the standpoints of the providers and consumers of educational service (enrollees, students, graduates and postgraduates). The research methods include statistical analysis and general methods of scientific analysis, synthesis, induction, deduction, comparison, and classification. The author is convinced that the university management should develop the necessary prerequisites for raising the graduates’ competitiveness in the labor market, and stimulate the active marketing policies of the relevant subdivisions and departments. In the author’s opinion, the above classification of marketing strategy factors can be used as a system of values for educational service providers.

  19. Classifying orbits in the restricted three-body problem

    CERN Document Server

    Zotos, Euaggelos E

    2015-01-01

    The case of the planar circular restricted three-body problem is used as a test field in order to determine the character of the orbits of a small body moving under the gravitational influence of two heavy primary bodies. We conduct a thorough numerical analysis of the phase space mixing by classifying initial conditions of orbits and distinguishing between three types of motion: (i) bounded, (ii) escape and (iii) collisional. The presented outcomes reveal the high complexity of this dynamical system. Furthermore, our numerical analysis shows a remarkable presence of fractal basin boundaries along all the escape regimes. Interpreting the collisional motion as leaking in the phase space, we relate our results to both chaotic scattering and the theory of leaking Hamiltonian systems. We also determine the escape and collisional basins and compute the corresponding escape/collisional times. We hope our contribution will be useful for a further understanding of the escape and collisional mechanisms of orbits.

  20. Intermediaries in Bredon (Co)homology and Classifying Spaces

    CERN Document Server

    Dembegioti, Fotini; Talelli, Olympia

    2011-01-01

    For certain contractible G-CW-complexes and F a family of subgroups of G, we construct a spectral sequence converging to the F-Bredon cohomology of G with E1-terms given by the F-Bredon cohomology of the stabilizer subgroups. As applications, we obtain several corollaries concerning the cohomological and geometric dimensions of the classifying space for the family F. We also introduce a hierarchically defined class of groups which contains all countable elementary amenable groups and countable linear groups of characteristic zero, and show that if a group G is in this class, then G has finite F-Bredon (co)homological dimension if and only if G has jump F-Bredon (co)homology.

  1. PERFORMANCE ANALYSIS OF SOFT COMPUTING TECHNIQUES FOR CLASSIFYING CARDIAC ARRHYTHMIA

    Directory of Open Access Journals (Sweden)

    R GANESH KUMAR

    2014-01-01

    Full Text Available Cardiovascular diseases kill more people than other diseases. Arrhythmia is a common term for a cardiac rhythm deviating from normal sinus rhythm. Many heart diseases are detected through electrocardiogram (ECG) analysis. Manual analysis of ECG is time consuming and error prone; thus, an automated system for detecting arrhythmia in ECG signals gains importance. Features are extracted from time-series ECG data with the Discrete Cosine Transform (DCT), computing the distance between R-R waves; the extracted R-R interval of each beat serves as the feature. The frequency-domain features are classified using Classification and Regression Tree (CART), Radial Basis Function (RBF), Support Vector Machine (SVM) and Multilayer Perceptron Neural Network (MLP-NN). Experiments were conducted on the MIT-BIH arrhythmia database.
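
    The frequency-domain feature extraction step can be sketched as below. This is illustrative only: the waveforms are synthetic stand-ins for beat segments (not MIT-BIH data), the choice of 16 DCT coefficients is an assumption, and only one of the four classifiers (SVM) is shown.

    ```python
    import numpy as np
    from scipy.fft import dct
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for ECG beat segments: two classes of noisy
    # waveforms differing in dominant frequency.
    rng = np.random.default_rng(0)
    length = 128
    t = np.linspace(0, 1, length)
    class0 = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal((100, length))
    class1 = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal((100, length))
    X_raw = np.vstack([class0, class1])
    y = np.array([0] * 100 + [1] * 100)

    # Keep the first 16 DCT coefficients as compact frequency-domain features.
    X = dct(X_raw, norm="ortho")[:, :16]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"test accuracy on DCT features: {acc:.3f}")
    ```

    Because the DCT concentrates most of a signal's energy in a few low-order coefficients, truncating the transform gives a compact feature vector.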

  2. Performance Evaluation of Bagged RBF Classifier for Data Mining Applications

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2013-11-01

    Full Text Available Data mining is the use of algorithms to extract information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before examining the data. The feasibility and benefits of the proposed approaches are demonstrated by means of data mining applications such as intrusion detection, direct marketing, and signature verification. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. Bagging and boosting are two relatively new but popular methods for producing ensembles. In this work, bagging is evaluated on real and benchmark data sets for intrusion detection, direct marketing, and signature verification, in conjunction with a radial basis function classifier as the base learner. The proposed bagged radial basis function classifier is superior to the individual approach for data mining applications in terms of classification accuracy.
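
    A minimal sketch of bagging with an RBF base learner follows. Note the assumptions: scikit-learn has no RBF-network classifier, so an RBF-kernel SVM stands in for the paper's radial basis function classifier, and a synthetic data set replaces the intrusion-detection, marketing and signature data.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=20, random_state=1)

    # RBF-kernel SVM as a stand-in for an RBF-network base learner.
    base = SVC(kernel="rbf", gamma="scale")
    # Bagging: train 10 copies of the base learner on bootstrap resamples
    # and combine their predictions by voting.
    bagged = BaggingClassifier(SVC(kernel="rbf", gamma="scale"),
                               n_estimators=10, random_state=1)

    single = cross_val_score(base, X, y, cv=5).mean()
    ensemble = cross_val_score(bagged, X, y, cv=5).mean()
    print(f"single RBF: {single:.3f}  bagged RBF: {ensemble:.3f}")
    ```

    Bagging mainly reduces variance, so the gain over the single learner depends on how unstable the base classifier is on the given data.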

  3. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确

    2004-01-01

    Due to the complicated background of objects and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extracting the road network from high-resolution SAR images. First, fuzzy C-means is used to classify the filtered SAR image in an unsupervised manner, and the road pixels are isolated from the image to simplify the extraction of the road network. Second, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces road-network extraction to a global search for optimal continuous curves passing through certain seed points. Finally, regarding the curves as individuals and encoding each chromosome as integer offsets relative to the coordinates, genetic operations are used to search for globally optimal roads. The experimental results show that the algorithm can effectively extract the road network from high-resolution SAR images.
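
    The first step, unsupervised fuzzy C-means clustering, can be sketched in NumPy. This is a generic textbook FCM on synthetic 1-D "pixel intensities", not the paper's SAR pipeline; the two-cluster setup (road vs. background) and all constants are illustrative assumptions.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy C-means: returns (centers, membership matrix U)."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, c))
        U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            # Distance of every point to every cluster center.
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            # Standard FCM membership update: U_ik ∝ d_ik^(-2/(m-1)).
            U = 1.0 / (d ** (2 / (m - 1)))
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Two well-separated 1-D "intensity" clusters (road vs. background).
    rng = np.random.default_rng(1)
    X = np.concatenate([rng.normal(0.2, 0.05, 100),
                        rng.normal(0.8, 0.05, 100)])[:, None]
    centers, U = fuzzy_c_means(X, c=2)
    labels = U.argmax(axis=1)                      # hard assignment per pixel
    print(np.sort(centers.ravel()))
    ```

    The soft membership matrix `U` is what the paper's road model later exploits: pixels keep a graded "degree of road-ness" instead of a hard label.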

  4. Building multiclass classifiers for remote homology detection and fold recognition

    Directory of Open Access Journals (Sweden)

    Karypis George

    2006-10-01

    Full Text Available Abstract Background Protein remote homology detection and fold recognition are central problems in computational biology. Supervised learning algorithms based on support vector machines are currently one of the most effective methods for solving these problems. These methods are primarily used to solve binary classification problems and they have not been extensively used to solve the more general multiclass remote homology prediction and fold recognition problems. Results We present a comprehensive evaluation of a number of methods for building SVM-based multiclass classification schemes in the context of the SCOP protein classification. These methods include schemes that directly build an SVM-based multiclass model, schemes that employ a second-level learning approach to combine the predictions generated by a set of binary SVM-based classifiers, and schemes that build and combine binary classifiers for various levels of the SCOP hierarchy beyond those defining the target classes. Conclusion Analyzing the performance achieved by the different approaches on four different datasets we show that most of the proposed multiclass SVM-based classification approaches are quite effective in solving the remote homology prediction and fold recognition problems and that the schemes that use predictions from binary models constructed for ancestral categories within the SCOP hierarchy tend to not only lead to lower error rates but also reduce the number of errors in which a superfamily is assigned to an entirely different fold and a fold is predicted as being from a different SCOP class. Our results also show that the limited size of the training data makes it hard to learn complex second-level models, and that models of moderate complexity lead to consistently better results.
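
    One of the simplest schemes the abstract evaluates, combining binary SVMs into a multiclass classifier, can be sketched with scikit-learn's one-vs-rest wrapper. Iris stands in for the SCOP data here; the paper's stronger schemes (second-level learning, hierarchy-aware binary models) are not reproduced.

    ```python
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

    # One binary SVM per class; predictions are combined by taking the
    # class whose binary model gives the largest decision value.
    ovr = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X_tr, y_tr)
    acc = ovr.score(X_te, y_te)
    print(f"{len(ovr.estimators_)} binary SVMs combined, accuracy {acc:.3f}")
    ```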

  5. Assessment of the optimum degree of Sr3Fe2MoO9 electron-doping through oxygen removal: An X-ray powder diffraction and 57Fe Moessbauer spectroscopy study

    International Nuclear Information System (INIS)

    We describe the preparation and structural characterization by X-ray powder diffraction (XRPD) and Moessbauer spectroscopy of three electron-doped perovskites Sr3Fe2MoO9-δ with Fe/Mo = 2 obtained from Sr3Fe2MoO9. The compounds were synthesized by topotactic reduction with H2/N2 (5/95) at 600, 700 and 800 oC. Above 800 oC the Fe/Mo ratio changes from Fe/Mo = 2-1 oC are only in the high-spin Fe3+ electronic state.

  6. Boosting-Based On-Road Obstacle Sensing Using Discriminative Weak Classifiers

    Science.gov (United States)

    Adhikari, Shyam Prasad; Yoo, Hyeon-Joong; Kim, Hyongsuk

    2011-01-01

    This paper proposes an extension of the weak classifiers derived from Haar-like features for use in the Viola-Jones object detection system. These weak classifiers differ from the traditional single-threshold ones in that no specific threshold is needed, giving a more general solution to the non-trivial task of finding thresholds for the Haar-like features. The proposed extension, based on quadratic discriminant analysis, markedly improves the ability of the weak classifiers to discriminate objects from non-objects. The proposed weak classifiers were evaluated by boosting a single-stage classifier to detect the rear of cars. The experiments demonstrate that the object detector based on the proposed weak classifiers yields higher classification performance with fewer weak classifiers than the detector built with traditional single-threshold weak classifiers. PMID:22163852
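
    The underlying idea, that richer weak learners can reach the same accuracy with fewer boosting rounds, can be illustrated with AdaBoost on synthetic data. This does not use Haar-like features or the paper's QDA-based weak classifiers; decision stumps (depth 1) stand in for single-threshold weak classifiers and depth-2 trees for more discriminative ones.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=2)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

    # Boost single-threshold stumps (depth 1) vs. slightly richer weak
    # learners (depth 2) with the same number of rounds.
    scores = {}
    for depth in (1, 2):
        clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=depth),
                                 n_estimators=50, random_state=2)
        scores[depth] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(scores)
    ```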

  7. Evaluation of toxicity and removal of color in textile effluent treated with electron beam; Avaliacao da toxicidade e remocao da cor de um efluente textil tratado com feixe de eletrons

    Energy Technology Data Exchange (ETDEWEB)

    Morais, Aline Viana de

    2015-07-01

    The textile industry is among the main industrial activities in Brazil, being relevant in number of jobs, quantity and diversity of products, and mainly in the volume of water used in industrial processes and of effluent generated. These effluents are complex mixtures characterized by the presence of dyes, surfactants, metal-sequestering agents, salts and other chemicals potentially toxic to aquatic biota. Considering the lack of adequate treatments for these wastes, new technologies are essential, notably advanced oxidation processes such as ionizing radiation from an electron beam. This study comprises the preparation of a standard textile effluent in the chemistry laboratory and its treatment by an electron beam from an electron accelerator, in order to reduce the toxicity and the intense color resulting from the C.I. Blue 222 dye. The treatment reduced toxicity to exposed organisms with 34.55% efficiency for the microcrustacean Daphnia similis and 47.83% for the rotifer Brachionus plicatilis at a dose of 2.5 kGy. The bacterium Vibrio fischeri showed better results after treatment at a dose of 5 kGy, with 57.29% efficiency. Color reduction was greater than 90% at a dose of 2.5 kGy. The experiment also included preliminary tests of the sensitivity of D. similis and V. fischeri to some of the products used in bleaching and dyeing, and two simulations of water reuse in new textile processing after treating the effluent with the electron beam. (author)

  8. Whole toxicity removal for industrial and domestic effluents treated with electron beam radiation, evaluated with Vibrio fischeri, Daphnia similis and Poecilia reticulata; Reducao da toxicidade aguda de efluentes industriais e domesticos tratados por irradiacao com feixe de eletrons, avaliada com as especies Vibrio fischeri, Daphnia similis and Poecilia reticulata

    Energy Technology Data Exchange (ETDEWEB)

    Borrely, Sueli Ivone

    2001-07-01

    Several studies have been performed at IPEN to apply ionizing radiation to the treatment of real, complex effluents from different sources. This paper reports such an application to influents and effluents of the Suzano Wastewater Treatment Plant (Suzano WTP, SABESP), Sao Paulo. The purpose of the work was to evaluate the radiation technology from an ecotoxicological standpoint. The evaluation was carried out on a toxicity basis and included three sampling sites: complex industrial effluents; domestic sewage mixed with the industrial discharge (GM); and the final secondary effluent. The test organisms for toxicity evaluation were the marine bacterium Vibrio fischeri, the microcrustacean Daphnia similis and the guppy Poecilia reticulata; fish tests were applied only to the final secondary effluent. The results demonstrate the original acute toxicity levels as well as the efficiency of the electron beam in reducing them. Substantial acute toxicity removal was achieved: from 75% up to 95% with 50 kGy (UNA), 20 kGy (GM) and 5.0 kGy for the final effluent. The toxicity removal resulted from the radiolytic decomposition of several organic solvents, with acute toxicity reduction of about 95%. When toxicity was evaluated with fish, the radiation efficiency reached 40% to 60%. Hypothesis tests showed a statistically significant removal under the studied conditions. No residual hydrogen peroxide was found after 5.0 kGy was applied to the final effluent. (author)

  9. Thyroid gland removal - discharge

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000293.htm Thyroid gland removal - discharge ... surgery. This will make your scar show less. Thyroid Hormone Replacement You may need to take thyroid ...

  10. Hardware removal - extremity

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/article/007644.htm Hardware removal - extremity ... Surgeons use hardware such as pins, plates, or screws to help ...

  11. Gallbladder removal - laparoscopic

    Science.gov (United States)

    ... PA: Elsevier Saunders; 2012:chap 55. Read More Acute cholecystitis Chronic cholecystitis Gallbladder removal - open Gallstones Patient Instructions Bland diet Surgical wound care - open When you have nausea and vomiting ...

  12. Reactor for removing ammonia

    Science.gov (United States)

    Luo, Weifang; Stewart, Kenneth D.

    2009-11-17

    Disclosed is a device for removing trace amounts of ammonia from a stream of gas, particularly hydrogen gas, prepared by a reformation apparatus. The apparatus is used to prevent PEM "poisoning" in a fuel cell receiving the incoming hydrogen stream.

  13. Laparoscopic Adrenal Gland Removal

    Science.gov (United States)

    ... adrenal tumors that appear malignant. What are the Advantages of Laparoscopic Adrenal Gland Removal? In the past, ... of procedure and the patients overall condition. Common advantages are: Less postoperative pain Shorter hospital stay Quicker ...

  14. Mildew remover poisoning

    Science.gov (United States)

    ... level of consciousness and lack of responsiveness) Stupor SKIN Burns Irritation Necrosis (holes) in the skin or underlying ... Fluids through a vein (IV) Surgical removal of burned skin (skin debridement) Washing of the skin (irrigation). Perhaps ...

  15. Dye remover poisoning

    Science.gov (United States)

    ... SYSTEM Collapse Low blood pressure that develops rapidly SKIN Burns Holes (necrosis) in the skin or tissues underneath ... vein) Medicines to treat pain Surgical removal of burned skin (skin debridement) Washing of the skin (irrigation), perhaps ...

  16. Random Sampling with Removal

    OpenAIRE

    Gärtner, Bernd; Lengler, Johannes; Szedlak, May

    2015-01-01

    Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant that we call random sampling with removal: suppose that after sampling the subset, we remove a fixed number of constraints from the sample, according to an arbitrary rule. Is it still true that the optimal solution of the reduced s...

  17. Metal Removal in Wastewater

    OpenAIRE

    Sanchez Roldan, Laura

    2014-01-01

    The aim of this work was to study the copper removal capacity of different algae species and their mixtures from municipal wastewater. The project was implemented in the greenhouse laboratories of Tampere University of Applied Sciences, and the wastewater used came from the Tampere municipal wastewater treatment plant. Five algae species and three mixtures of them were tested for their copper removal potential in one batch test run. The most efficient algae mixture...

  18. Laser hair removal pearls.

    Science.gov (United States)

    Tierney, Emily P; Goldberg, David J

    2008-03-01

    A number of lasers and light devices are now available for the treatment of unwanted hair. The goal of laser hair removal is to damage stem cells in the bulge of the follicle through the targeting of melanin, the endogenous chromophore for laser and light devices utilized to remove hair. The competing chromophores in the skin and hair, oxyhemoglobin and water, have a decreased absorption between 690 nm and 1000 nm, thus making this an ideal range for laser and light sources. Pearls of laser hair removal are presented in this review, focusing on four areas of recent development: 1 treatment of blond, white and gray hair; 2 paradoxical hypertrichosis; 3 laser hair removal in children; and 4 comparison of lasers and IPL. Laser and light-based technologies to remove hair represents one of the most exciting areas where discoveries by dermatologists have led to novel treatment approaches. It is likely that in the next decade, continued advancements in this field will bring us closer to the development of a more permanent and painless form of hair removal.

  20. Classifying transcription factor targets and discovering relevant biological features

    Directory of Open Access Journals (Sweden)

    DeLisi Charles

    2008-05-01

    Full Text Available Abstract Background An important goal in post-genomic research is discovering the network of interactions between transcription factors (TFs) and the genes they regulate. We have previously reported the development of a supervised-learning approach to TF target identification, and used it to predict targets of 104 transcription factors in yeast. We now include a new sequence conservation measure, expand our predictions to include 59 new TFs, introduce a web-server, and implement an improved ranking method to reveal the biological features contributing to regulation. The classifiers combine 8 genomic datasets covering a broad range of measurements including sequence conservation, sequence overrepresentation, gene expression, and DNA structural properties. Principal Findings (1) Application of the method yields an amplification of information about yeast regulators. The ratio of total targets to previously known targets is greater than 2 for 11 TFs, with several having larger gains: Ash1 (4), Ino2 (2.6), Yaf1 (2.4), and Yap6 (2.4). (2) Many predicted targets for TFs match well with the known biology of their regulators. As a case study we discuss the regulator Swi6, presenting evidence that it may be important in the DNA damage response, and that the previously uncharacterized gene YMR279C plays a role in DNA damage response and perhaps in cell-cycle progression. (3) A procedure based on recursive feature elimination is able to uncover, from the large initial data sets, those features that best distinguish targets for any TF, providing clues relevant to its biology. An analysis of Swi6 suggests a possible role in lipid metabolism, and more specifically in metabolism of ceramide, a bioactive lipid currently being investigated for anti-cancer properties. (4) An analysis of global network properties highlights the transcriptional network hubs; the factors which control the most genes and the genes which are bound by the largest set of regulators. Cell-cycle and

  1. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

    Science.gov (United States)

    Anaya, Leticia H.

    2011-01-01

    In the Information Age, a proliferation of unstructured electronic text documents exists. Processing these documents by humans is a daunting task, as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text-mining computer algorithms are being developed.
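
    One of the two approaches the title names, latent semantic analysis (LSA) used as a classifier, can be sketched as TF-IDF followed by a truncated SVD, with a simple classifier on the latent topics. The tiny corpus, labels and component count below are illustrative assumptions, not the dissertation's data.

    ```python
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["the cat sat on the mat", "dogs and cats are pets",
            "stocks fell on monday", "the market rallied after earnings",
            "my cat chased the dog", "investors sold shares today"]
    labels = [0, 0, 1, 1, 0, 1]  # 0 = pets, 1 = finance

    # LSA: TF-IDF term weighting, then truncated SVD to project documents
    # into a low-dimensional latent semantic space.
    lsa_clf = make_pipeline(TfidfVectorizer(),
                            TruncatedSVD(n_components=2, random_state=0),
                            LogisticRegression())
    lsa_clf.fit(docs, labels)
    preds = lsa_clf.predict(["the dog sat with the cat", "shares fell today"])
    print(preds)
    ```

    LDA (latent Dirichlet allocation) replaces the SVD step with a probabilistic topic model; the downstream classification step stays the same.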

  2. Executed Movement Using EEG Signals through a Naive Bayes Classifier

    Directory of Open Access Journals (Sweden)

    Juliano Machado

    2014-11-01

    Full Text Available Recent years have witnessed rapid development of brain-computer interface (BCI) technology. An independent BCI is a communication system for controlling a device by human intention, e.g. a computer, a wheelchair or a neuroprosthesis, depending not on the brain’s normal output pathways of peripheral nerves and muscles, but on detectable signals that represent responsive or intentional brain activity. This paper presents a comparative study of the linear discriminant analysis (LDA) and naive Bayes (NB) classifiers in describing both right- and left-hand movement through electroencephalographic (EEG) signal acquisition. For the analysis, we considered the following input features: the energy of segments of a band-pass-filtered signal within the sensorimotor-rhythm frequency band, and the components of the spectral energy obtained through the Welch method. We also used the common spatial pattern (CSP) filter to increase the discriminability between movement classes. Using the database generated by this experiment, we obtained hit rates up to 70%. The results are compatible with previous studies.
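
    The LDA vs. naive Bayes comparison can be sketched on synthetic two-feature data. The features below are a stand-in for band-power features of left/right-hand trials (not the study's EEG recordings), and the class means and noise level are assumptions.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Synthetic stand-in: each trial described by two band-power features,
    # with left- and right-hand classes centered at different points.
    rng = np.random.default_rng(0)
    n = 120
    left = rng.normal([1.0, 0.2], 0.4, size=(n, 2))
    right = rng.normal([0.2, 1.0], 0.4, size=(n, 2))
    X = np.vstack([left, right])
    y = np.array([0] * n + [1] * n)

    # Cross-validated hit rate for both classifiers on the same features.
    scores = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                                ("NB", GaussianNB())]}
    print(scores)
    ```

    On roughly Gaussian features with similar class covariances, the two models tend to perform comparably, which is consistent with the study reporting similar hit rates.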

  3. Addressing the Challenge of Defining Valid Proteomic Biomarkers and Classifiers

    LENUS (Irish Health Repository)

    Dakna, Mohammed

    2010-12-10

    Abstract Background The purpose of this manuscript is to provide, based on an extensive analysis of a proteomic data set, suggestions for proper statistical analysis for the discovery of sets of clinically relevant biomarkers. As tractable example we define the measurable proteomic differences between apparently healthy adult males and females. We choose urine as body-fluid of interest and CE-MS, a thoroughly validated platform technology, allowing for routine analysis of a large number of samples. The second urine of the morning was collected from apparently healthy male and female volunteers (aged 21-40) in the course of the routine medical check-up before recruitment at the Hannover Medical School. Results We found that the Wilcoxon-test is best suited for the definition of potential biomarkers. Adjustment for multiple testing is necessary. Sample size estimation can be performed based on a small number of observations via resampling from pilot data. Machine learning algorithms appear ideally suited to generate classifiers. Assessment of any results in an independent test-set is essential. Conclusions Valid proteomic biomarkers for diagnosis and prognosis only can be defined by applying proper statistical data mining procedures. In particular, a justification of the sample size should be part of the study design.

  4. Pulmonary nodule detection using a cascaded SVM classifier

    Science.gov (United States)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias

    2016-03-01

    Automatic detection of lung nodules from chest CT has been researched intensively over the last decades resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases become now available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select from an extremely large pool of potential candidates the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria could be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved sensitivity of 0.859 at 2.5 FP/volume where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low dose data sets, only slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
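
    The two-stage cascade idea, a very permissive first classifier followed by a stricter second one on the survivors, can be sketched as follows. This is not the paper's nodule detector: the data are synthetic and imbalanced on purpose, and both stages are plain SVMs with assumed thresholds.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Heavily imbalanced candidate pool: few "nodules" among many non-nodules.
    X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                               random_state=3)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3, stratify=y)

    # Stage 1: very loose probability threshold, so true nodules are almost
    # never rejected even though many false positives pass.
    stage1 = SVC(probability=True, random_state=3).fit(X_tr, y_tr)
    keep = stage1.predict_proba(X_te)[:, 1] > 0.05

    # Stage 2: a stricter classifier applied only to surviving candidates.
    stage2 = SVC(C=10, probability=True, random_state=3).fit(X_tr, y_tr)
    pred = np.zeros_like(y_te)
    pred[keep] = (stage2.predict_proba(X_te[keep])[:, 1] > 0.5).astype(int)
    print(f"{int(keep.sum())} of {len(y_te)} candidates passed stage 1")
    ```

    The design point is exactly the one the abstract makes: keeping stage 1 loose minimizes the chance that a true nodule is discarded before the expensive second classification.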

  5. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images of urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unsatisfactory classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatially related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned using the hierarchical learning structure, and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracies, especially for urban areas.

  6. Classifying environmentally significant urban land uses with satellite imagery.

    Science.gov (United States)

    Park, Mi-Hyun; Stenstrom, Michael K

    2008-01-01

    We investigated Bayesian networks to classify urban land use from satellite imagery. Landsat Enhanced Thematic Mapper Plus (ETM(+)) images were used for the classification in two study areas: (1) Marina del Rey and its vicinity in the Santa Monica Bay Watershed, CA and (2) drainage basins adjacent to the Sweetwater Reservoir in San Diego, CA. Bayesian networks provided 80-95% classification accuracy for urban land use using four different classification systems. The classifications were robust with small training data sets with normal and reduced radiometric resolution. The networks needed only 5% of the total data (i.e., 1500 pixels) for sample size and only 5- or 6-bit information for accurate classification. The network explicitly showed the relationship among variables from its structure and was also capable of utilizing information from non-spectral data. The classification can be used to provide timely and inexpensive land use information over large areas for environmental purposes such as estimating stormwater pollutant loads. PMID:17291679

  7. Linearly and Quadratically Separable Classifiers Using Adaptive Approach

    Institute of Scientific and Technical Information of China (English)

    Mohamed Abdel-Kawy Mohamed Ali Soliman; Rasha M. Abo-Bakr

    2011-01-01

    This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in R^n. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen and a hyperplane is constructed such that it separates the chosen n points at a margin e and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solving quadratically separable classification problems. The basic idea is based on mapping the physical space to another larger one where the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
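The abstract does not give enough detail to reproduce the adaptive n-point selection, but the setting can be illustrated with the classic perceptron, which also converges to a separating hyperplane w·x + b = 0 whenever the classes are linearly separable. This is a generic sketch, not the paper's algorithm.

```python
def perceptron(points, labels, epochs=100):
    """Find a separating hyperplane w.x + b = 0 for linearly separable data.
    Labels are +1/-1; updates stop once every point is on its correct side."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:
            break
    return w, b
```

The quadratically separable extension in the paper corresponds to first mapping each x into a higher-dimensional feature space (e.g. appending the pairwise products x_i·x_j) and then running a linear procedure of this kind there.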

  8. A Novel Performance Metric for Building an Optimized Classifier

    Directory of Open Access Journals (Sweden)

    Mohammad Hossin

    2011-01-01

    Full Text Available Problem statement: Typically, the accuracy metric is applied for optimizing heuristic or stochastic classification models. However, the use of the accuracy metric might lead the searching process to sub-optimal solutions because its values are less discriminating and it is not robust to changes of class distribution. Approach: To remedy these detrimental effects, we propose a novel performance metric which combines the beneficial properties of the accuracy metric with the extended recall and precision metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naïve stochastic classification algorithm, the Monte Carlo Sampling (MCS) algorithm, trained with the OARP metric is able to obtain better predictive results than one trained with the conventional accuracy metric. Additionally, the t-test analysis also shows a clear advantage of the MCS model trained with the OARP metric over the accuracy metric alone for all binary data sets. Conclusion: The experiments have proved that the OARP metric leads stochastic classifiers such as the MCS towards a better training model, which in turn will improve the predictive results of any heuristic or stochastic classification model.
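The abstract does not spell out how OARP combines its ingredients, but the three constituent metrics it builds on are standard and can be computed from a confusion matrix as follows (a minimal sketch; the function name is hypothetical).

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall, and precision from paired binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return accuracy, recall, precision
```

Accuracy alone collapses many different (recall, precision) trade-offs to the same value, which is the "less discriminating" problem the authors set out to fix.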

  9. A system-awareness decision classifier to automated MSN forensics

    Science.gov (United States)

    Chu, Yin-Teshou Tsao; Fan, Kuo-Pao; Cheng, Ya-Wen; Tseng, Po-Kai; Chen, Huan; Cheng, Bo-Chao

    2007-09-01

    Data collection is the most important stage in network forensics, but under resource-constrained situations a good evidence-collection mechanism is required to provide effective event collection in a high-traffic network environment. In the literature, only a few network forensic tools offer MSN-messenger behavior reconstruction. Moreover, they have no classification strategy at the collection stage when the system becomes saturated. The emphasis of this paper is to address these shortcomings and propose a solution that selects a better classification in order to ensure the integrity of the evidence at the collection stage in high-traffic network environments. A system-awareness decision classifier (SADC) mechanism is proposed in this paper. The MSN-shot sensor is able to adjust the amount of data to be collected according to the current system status and to preserve evidence integrity as much as possible according to the file format and the current system status. Analytical results show that the proposed SADC, implementing selective collection (SC), incurs less cost than full collection (FC) under heavy-traffic scenarios. With the deployment of the proposed SADC mechanism, we believe that MSN-shot is able to reconstruct the MSN-messenger behaviors perfectly in the context of the upcoming next-generation network.

  10. A dimensionless parameter for classifying hemodynamics in intracranial

    Science.gov (United States)

    Asgharzadeh, Hafez; Borazjani, Iman

    2015-11-01

    Rupture of an intracranial aneurysm (IA) is a disease with high rates of mortality. Given the risk associated with aneurysm surgery, quantifying the likelihood of aneurysm rupture is essential. There are many risk factors that could be implicated in the rupture of an aneurysm. However, the factors most strongly correlated with IA rupture are hemodynamic factors such as wall shear stress (WSS) and oscillatory shear index (OSI), which are affected by the IA flows. Here, we carry out three-dimensional high-resolution simulations on representative IA models with simple geometries to test a dimensionless number (first proposed by Le et al., ASME J Biomech Eng, 2010), denoted the An number, to classify the flow mode. The An number is defined as the ratio of the time it takes the parent-artery flow to transport across the IA neck to the time required for vortex ring formation. Based on this definition, the flow mode is vortex if An>1 and cavity if An<1. In addition, we show that this classification works on three-dimensional geometries reconstructed from three-dimensional rotational angiography of human subjects. Furthermore, we verify the correlation of IA flow mode and WSS/OSI on the human-subject IAs. This work was supported partly by the NIH grant R03EB014860, and the computational resources were partly provided by CCR at UB. We thank Prof. Hui Meng and Dr. Jianping Xiang for providing us the database of aneurysms and helpful discussions.
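The definition in the abstract translates directly into a two-line classifier. The numeric inputs below are hypothetical placeholders, not values from the study, and the An=1 boundary is assigned to the cavity branch only for concreteness (the abstract defines the modes strictly by An>1 and An<1).

```python
def an_number(neck_transport_time, vortex_formation_time):
    """An = time for parent-artery flow to cross the aneurysm neck
    divided by the vortex-ring formation time."""
    return neck_transport_time / vortex_formation_time

def flow_mode(an):
    """Vortex mode when An > 1, cavity mode when An < 1 (per the abstract)."""
    return "vortex" if an > 1 else "cavity"
```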

  11. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    Science.gov (United States)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78-0.81, which indicates the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcome of the events.
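The reported ROC areas of 0.78-0.81 come from a standard construction. The sketch below computes the area under the ROC curve via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. It is a generic illustration, not the cross-validation code used in the study.

```python
def roc_auc(scores, labels):
    """AUC as the fraction of positive/negative pairs the scores rank
    correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random guessing, so values near 0.8, as reported, indicate good discrimination between eruptive and non-eruptive unrest.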

  13. Classifying and explaining democracy in the Muslim world

    Directory of Open Access Journals (Sweden)

    Rohaizan Baharuddin

    2012-12-01

    Full Text Available The purpose of this study is to classify and explain democracies in the 47 Muslim countries between the years 1998 and 2008 by using liberties and elections as independent variables. Specifically focusing on the context of the Muslim world, this study examines the performance of civil liberties and elections, the variation of democracy practised the most, the elections, civil liberties and democratic transitions and patterns that followed. Based on the quantitative data primarily collected from Freedom House, this study demonstrates the following aggregate findings: first, the “not free not fair” elections, the “limited” civil liberties and the “Illiberal Partial Democracy” were the dominant features of elections, civil liberties and democracy practised in the Muslim world; second, a total of 413 Muslim regimes out of 470 (47 regimes × 10 years) remained the same as their democratic origin points, without any transitions to a better or worse level of democracy, throughout these 10 years; and third, a slow, yet steady positive transition of both elections and civil liberties occurred in the Muslim world, with changes in the nature of elections becoming much more progressive compared to the civil liberties’ transitions.

  14. A framework to classify error in animal-borne technologies

    Directory of Open Access Journals (Sweden)

    Zackory eBurns

    2015-05-01

    Full Text Available The deployment of novel, innovative, and increasingly miniaturized devices on fauna, especially otherwise difficult to observe taxa, to collect data has steadily increased. Yet, every animal-borne technology has its shortcomings, such as limitations in its precision or accuracy. These shortcomings, here labelled as ‘error’, are not yet studied systematically and a framework to identify and classify error does not exist. Here, we propose a classification scheme to synthesize error across technologies, discussing basic physical properties used by a technology to collect data, conversion of raw data into useful variables, and subjectivity in the parameters chosen. In addition, we outline a four-step framework to quantify error in animal-borne devices: to know, to identify, to evaluate, and to store. Both the classification scheme and framework are theoretical in nature. However, since mitigating error is essential to answer many biological questions, we believe they will be operationalized and facilitate future work to determine and quantify error in animal-borne technologies. Moreover, increasing the transparency of error will ensure the technique used to collect data moderates the biological questions and conclusions.

  15. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Vitomir Štruc

    2010-01-01

    Full Text Available This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature of combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  16. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Science.gov (United States)

    Štruc, Vitomir; Pavešić, Nikola

    2010-12-01

    This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature of combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  18. Bilayer segmentation of webcam videos using tree-based classifiers.

    Science.gov (United States)

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems. PMID:21088317

  19. Using color histograms and SPA-LDA to classify bacteria.

    Science.gov (United States)

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data, and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h by following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best result in classification was obtained by using RGB and SPA-LDA, which reached 94 and 100 % of classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical proofs. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and low-cost alternative for bacterial identification.
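The RGB color-histogram input used in this record is straightforward to reproduce in outline. This sketch bins each channel's pixel values into relative frequencies and concatenates the three histograms into one feature vector; bin count and normalization are illustrative choices, not the paper's exact settings.

```python
def channel_histogram(channel_values, bins=8, max_value=255):
    """Histogram of one color channel's pixel values, as relative frequencies."""
    counts = [0] * bins
    for v in channel_values:
        idx = min(v * bins // (max_value + 1), bins - 1)
        counts[idx] += 1
    total = len(channel_values)
    return [c / total for c in counts]

def rgb_feature_vector(pixels, bins=8):
    """Concatenate per-channel histograms of an iterable of (r, g, b) pixels."""
    r, g, b = zip(*pixels)
    return (channel_histogram(r, bins) + channel_histogram(g, bins)
            + channel_histogram(b, bins))
```

Vectors of this form are what SPA-LDA then reduces by selecting a discriminative subset of the histogram bins.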

  20. Two-categorical bundles and their classifying spaces

    DEFF Research Database (Denmark)

    Baas, Nils A.; Bökstedt, M.; Kro, T.A.

    2012-01-01

    For a 2-category 2C we associate a notion of a principal 2C-bundle. In case of the 2-category of 2-vector spaces in the sense of M.M. Kapranov and V.A. Voevodsky this gives the the 2-vector bundles of N.A. Baas, B.I. Dundas and J. Rognes. Our main result says that the geometric nerve of a good 2......-category is a classifying space for the associated principal 2-bundles. In the process of proving this we develop a lot of powerful machinery which may be useful in further studies of 2-categorical topology. As a corollary we get a new proof of the classification of principal bundles. A calculation based...... on the main theorem shows that the principal 2-bundles associated to the 2-category of 2-vector spaces in the sense of J.C. Baez and A.S. Crans split, up to concordance, as two copies of ordinary vector bundles. When 2C is a cobordism type 2-category we get a new notion of cobordism-bundles which turns out...

  1. Improving tRNAscan-SE Annotation Results via Ensemble Classifiers.

    Science.gov (United States)

    Zou, Quan; Guo, Jiasheng; Ju, Ying; Wu, Meihong; Zeng, Xiangxiang; Hong, Zhiling

    2015-11-01

    tRNAscan-SE is a tRNA detection program that is widely used for tRNA annotation; however, the false positive rate of tRNAscan-SE is unacceptable for large sequences. Here, we used a machine learning method to try to improve the tRNAscan-SE results. A new predictor, tRNA-Predict, was designed. We obtained real and pseudo-tRNA sequences as training data sets using tRNAscan-SE and constructed three different tRNA feature sets. We then set up an ensemble classifier, LibMutil, to predict tRNAs from the training data. The positive data set of 623 tRNA sequences was obtained from tRNAdb 2009 and the negative data set was the false positive tRNAs predicted by tRNAscan-SE. Our in silico experiments revealed a prediction accuracy rate of 95.1 % for tRNA-Predict using 10-fold cross-validation. tRNA-Predict was developed to distinguish functional tRNAs from pseudo-tRNAs rather than to predict tRNAs from a genome-wide scan. However, tRNA-Predict can work with the output of tRNAscan-SE, which is a genome-wide scanning method, to improve the tRNAscan-SE annotation results. The tRNA-Predict web server is accessible at http://datamining.xmu.edu.cn/∼gjs/tRNA-Predict. PMID:27491037
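The abstract does not describe how LibMutil combines its base learners, so as a generic illustration of the ensemble idea, the sketch below uses unweighted majority voting, one of the most common combination rules; the names and the toy classifiers are hypothetical.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine base classifiers by unweighted majority vote
    (ties resolved in favor of the earliest-counted label)."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]
```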

  2. Fuzzy-Genetic Classifier algorithm for bank's customers

    Directory of Open Access Journals (Sweden)

    Rashed Mokhtar Elawady

    2011-09-01

    Full Text Available Modern financial banks operate in a complex and dynamic environment which may bring high uncertainty and risk to them. The ability to intelligently collect, manage, and analyze information about customers is therefore a key source of competitive advantage for an E-business. But a bank's database is too large, complex and incomprehensible to determine directly whether a customer is risky or likely to default. This paper presents a new algorithm for extracting accurate and comprehensible rules from a database via a fuzzy-genetic classifier that combines two methodologies, fuzzy systems and genetic algorithms, in one algorithm. The proposed evolved system exhibits two important characteristics: first, each rule is obtained through an efficient genetic rule-extraction method which adapts the parameters of the fuzzy sets in the premise space and determines the required features of the rule, further improving the interpretability of the obtained model. Second, the obtained rule base is evolved through a genetic algorithm. The cooperating system increases the classification performance and reaches the maximum classification ratio in earlier generations.

  3. MISR Level 2 FIRSTLOOK TOA/Cloud Classifier parameters V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Cloud Classifiers, and Support Vector Machine...

  4. 75 FR 51609 - Classified National Security Information Program for State, Local, Tribal, and Private Sector...

    Science.gov (United States)

    2010-08-23

    ... National Security Information Program for State, Local, Tribal, and Private Sector Entities By the... established a Classified National Security Information Program (Program) designed to safeguard and govern access to classified national security information shared by the Federal Government with State,...

  5. Construction of Classifier Based on MPCA and QSA and Its Application on Classification of Pancreatic Diseases

    OpenAIRE

    Huiyan Jiang; Di Zhao; Tianjiao Feng; Shiyang Liao; Yenwei Chen

    2013-01-01

    A novel method is proposed to establish the classifier which can classify the pancreatic images into normal or abnormal. Firstly, the brightness feature is used to construct high-order tensors, then using multilinear principal component analysis (MPCA) extracts the eigentensors, and finally, the classifier is constructed based on support vector machine (SVM) and the classifier parameters are optimized with quantum simulated annealing algorithm (QSA). In order to verify the effectiveness of th...

  6. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; Nosov, A.; Popov, A.

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  7. Automating the construction of scene classifiers for content-based video retrieval

    OpenAIRE

    Israël, Menno; Broek, van den, M.A.F.H.; Putten, van, J.P.M.; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classific...

  8. One piece reactor removal

    International Nuclear Information System (INIS)

    Japan Research Reactor No.3 (JRR-3) was the first reactor consisting of 'Japanese-made' components alone except for fuel and heavy water. After reaching its initial critical state in September 1962, JRR-3 had been in operation for 21 years until March 1983. It was decided that the reactor be removed en-bloc in view of the work schedule, cost and management of the reactor following the removal. In the special method developed jointly by the Japanese Atomic Energy Research Institute and Shimizu Construction Co., Ltd., the reactor main unit was cut off from the building by continuous core boring, with its major components bound in the block with biological shield material (heavy concrete), and then conveyed and stored in a large waste store building constructed near the reactor building. Major work processes described in this report include the cutting off, lifting, horizontal conveyance and lowering of the reactor main unit. The removal of the JRR-3 reactor main unit was successfully carried out safely and quickly by the en-bloc removal method with radiation exposure dose of the workers being kept at a minimum. Thus the high performance of the en-bloc removal method was demonstrated and, in addition, valuable knowhow and other data were obtained from the work. (Nogami, K.)

  9. A Probabilistic Approach to Classifying Supernovae Using Photometric Information

    CERN Document Server

    Kuznetsova, Natalia V.; Connolly, Brian M.

    2006-01-01

    This paper presents a novel method for determining the probability that a supernova candidate belongs to a known supernova type (such as Ia, Ibc, IIL, etc.), using its photometric information alone. It is validated with Monte Carlo, and both space- and ground-based data. We examine the application of the method to well-sampled as well as poorly sampled supernova light curves. Central to the method is the assumption that a supernova candidate belongs to a group of objects that can be modeled; we therefore discuss possible ways of removing anomalous or less well understood events from the sample. This method is particularly advantageous for analyses where the purity of the supernova sample is of the essence, or for those where it is important to know the number of supernova candidates of a certain type (e.g., in supernova rate studies).

  10. DATA CLASSIFICATION WITH NEURAL CLASSIFIER USING RADIAL BASIS FUNCTION WITH DATA REDUCTION USING HIERARCHICAL CLUSTERING

    Directory of Open Access Journals (Sweden)

    M. Safish Mary

    2012-04-01

    Full Text Available Classification of large amounts of data is a time-consuming process but crucial for analysis and decision making. Radial Basis Function networks are widely used for classification and regression analysis. In this paper, we have studied the performance of RBF neural networks to classify the sales of cars based on demand, using a kernel density estimation algorithm which produces classification accuracy comparable to that provided by support vector machines. We have also proposed a new instance-based data selection method where redundant instances are removed with the help of a threshold, thus improving the time complexity with improved classification accuracy. The instance-based selection of the data set helps reduce the number of clusters formed, thereby reducing the number of centers considered for building the RBF network. Further, the efficiency of the training is improved by applying a hierarchical clustering technique to reduce the number of clusters formed at every step. The paper explains the algorithm used for classification and for conditioning the data. It also explains the complexities involved in classification of sales data for analysis and decision-making.
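The core computation of an RBF network is compact enough to sketch. Each hidden unit measures Gaussian similarity to a cluster center, and the output is a weighted sum of those activations; the centers, weights, and width below are hypothetical, standing in for the values the hierarchical clustering and training would produce.

```python
import math

def rbf(x, center, gamma=1.0):
    """Gaussian radial basis activation: exp(-gamma * ||x - center||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-gamma * sq)

def rbf_network(x, centers, weights, gamma=1.0):
    """Weighted sum of RBF activations; the sign gives a binary class."""
    return sum(w * rbf(x, c, gamma) for w, c in zip(weights, centers))
```

Reducing the instance set before clustering, as the paper proposes, directly shrinks the list of `centers` and hence the per-prediction cost of this sum.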

  11. Building Keypoint Mappings on Multispectral Images by a Cascade of Classifiers with a Resurrection Mechanism

    Directory of Open Access Journals (Sweden)

    Yong Li

    2015-05-01

    Full Text Available Inspired by the boosting technique for detecting objects, this paper proposes a cascade structure with a resurrection mechanism to establish keypoint mappings on multispectral images. The cascade structure is composed of four steps, utilizing best bin first (BBF), color and intensity distribution of segment (CIDS), global information and the RANSAC process to remove outlier keypoint matchings. Initial keypoint mappings are built with the descriptors associated with keypoints; then, at each step, only a small number of keypoint mappings of high confidence are classified as incorrect. The unclassified keypoint mappings are passed on to subsequent steps for determining whether they are correct. Due to the drawback of a classification rule, some correct keypoint mappings may be misclassified as incorrect at a step. Observing this, we design a resurrection mechanism, so that they will be reconsidered and evaluated by the rules utilized in subsequent steps. Experimental results show that the proposed cascade structure combined with the resurrection mechanism can effectively build more reliable keypoint mappings on multispectral images than existing methods.
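The cascade-with-resurrection control flow can be sketched independently of the BBF/CIDS/RANSAC rules themselves. In this illustrative version (all names and the three-way verdicts are assumptions, not the paper's interface), each step may reject items, and previously rejected items get re-examined by later steps and revived if a later rule accepts them.

```python
def cascade_with_resurrection(items, steps):
    """Run items through classifier steps; each step returns 'correct',
    'incorrect', or 'unsure' for an item. 'incorrect' items are set aside
    but re-examined by later steps (the resurrection mechanism); only items
    still rejected after the last step are discarded."""
    active = list(items)
    rejected = []
    for step in steps:
        still_active = []
        for item in active:
            if step(item) == "incorrect":
                rejected.append(item)
            else:
                still_active.append(item)
        # Resurrection: a later rule may accept a previously rejected item.
        revived = [item for item in rejected if step(item) == "correct"]
        rejected = [item for item in rejected if item not in revived]
        active = still_active + revived
    return active
```

The design point is that no single step's rejection is final, which is exactly how the paper recovers correct mappings misclassified by one rule.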

  12. Rheological evaluation of pretreated cladding removal waste

    Energy Technology Data Exchange (ETDEWEB)

    McCarthy, D.; Chan, M.K.C.; Lokken, R.O.

    1986-01-01

    Cladding removal waste (CRW) contains concentrations of transuranic (TRU) elements in the 80 to 350 nCi/g range. This waste will require pretreatment before it can be disposed of as glass or grout at Hanford. The CRW will be pretreated with a rare earth strike and solids removal by centrifugation to segregate the TRU fraction from the non-TRU fraction of the waste. The centrifuge centrate will be neutralized with sodium hydroxide. This neutralized cladding removal waste (NCRW) is expected to be suitable for grouting. The TRU solids removed by centrifugation will be vitrified. The goal of the Rheological Evaluation of Pretreated Cladding Removal Waste Program was to evaluate those rheological and transport properties critical to assuring successful handling of the NCRW and TRU solids streams and to demonstrate transfers in a semi-prototypic pumping environment. This goal was achieved by a combination of laboratory and pilot-scale evaluations. The results obtained during these evaluations were correlated with classical rheological models and scaled-up to predict the performance that is likely to occur in the full-scale system. The Program used simulated NCRW and TRU solid slurries. Rockwell Hanford Operations (Rockwell) provided 150 gallons of simulated CRW and 5 gallons of simulated TRU solid slurry. The simulated CRW was neutralized by Pacific Northwest Laboratory (PNL). The physical and rheological properties of the NCRW and TRU solid slurries were evaluated in the laboratory. The properties displayed by NCRW allowed it to be classified as a pseudoplastic or yield-pseudoplastic non-Newtonian fluid. The TRU solids slurry contained very few solids. This slurry exhibited the properties associated with a pseudoplastic non-Newtonian fluid.
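
The "pseudoplastic" classification above corresponds to a power-law rheological model, shear stress = K·(shear rate)^n with flow index n < 1. A minimal sketch of fitting K and n to rheogram data is shown below; the data points and parameter values are invented for illustration, not measurements from the NCRW program.

```python
import numpy as np

def fit_power_law(shear_rate, shear_stress):
    """Fit the power-law (pseudoplastic) model tau = K * gamma_dot**n by
    linear regression in log-log space; returns (K, n)."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return np.exp(intercept), slope   # K (consistency index), n (flow index)

# Hypothetical rheogram generated from K = 4.0, n = 0.5 (shear thinning)
rate = np.array([1.0, 4.0, 16.0, 64.0])      # shear rate, 1/s
stress = 4.0 * rate ** 0.5                   # shear stress, Pa
K, n = fit_power_law(rate, stress)
```

A recovered n below 1 confirms shear-thinning behaviour; adding a yield stress term (Herschel-Bulkley) would cover the yield-pseudoplastic case mentioned in the record.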

  13. Rheological evaluation of pretreated cladding removal waste

    International Nuclear Information System (INIS)

    Cladding removal waste (CRW) contains concentrations of transuranic (TRU) elements in the 80 to 350 nCi/g range. This waste will require pretreatment before it can be disposed of as glass or grout at Hanford. The CRW will be pretreated with a rare earth strike and solids removal by centrifugation to segregate the TRU fraction from the non-TRU fraction of the waste. The centrifuge centrate will be neutralized with sodium hydroxide. This neutralized cladding removal waste (NCRW) is expected to be suitable for grouting. The TRU solids removed by centrifugation will be vitrified. The goal of the Rheological Evaluation of Pretreated Cladding Removal Waste Program was to evaluate those rheological and transport properties critical to assuring successful handling of the NCRW and TRU solids streams and to demonstrate transfers in a semi-prototypic pumping environment. This goal was achieved by a combination of laboratory and pilot-scale evaluations. The results obtained during these evaluations were correlated with classical rheological models and scaled-up to predict the performance that is likely to occur in the full-scale system. The Program used simulated NCRW and TRU solid slurries. Rockwell Hanford Operations (Rockwell) provided 150 gallons of simulated CRW and 5 gallons of simulated TRU solid slurry. The simulated CRW was neutralized by Pacific Northwest Laboratory (PNL). The physical and rheological properties of the NCRW and TRU solid slurries were evaluated in the laboratory. The properties displayed by NCRW allowed it to be classified as a pseudoplastic or yield-pseudoplastic non-Newtonian fluid. The TRU solids slurry contained very few solids. This slurry exhibited the properties associated with a pseudoplastic non-Newtonian fluid

  14. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

    Full Text Available Classifying cancers from gene data is a challenging task in biomedical data engineering. Various individual classifiers have been used to improve gene-selection-based classification of cancer diseases, but their results are difficult to validate, which motivates the use of an ensemble classifier for cancer gene classification combining a neural network classifier with a random forest. The random forest is itself an ensemble technique, in which the class decision is formed from the leaf nodes of many trees. In this paper we combine a neural network with a random forest ensemble classifier for cancer gene selection in the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble methods, which follow the input-output paradigm of a neural network and select ensemble members from a set of neural network classifiers; here, the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of our proposed method we used UCI cancer disease data sets. Our experimental results show better performance in comparison with random forest classification alone.
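
The combination step can be illustrated as probability-level fusion of the two ensemble members. The class-probability matrices below are invented, and the paper's actual coupling of the forest and the network may differ; this is only a generic soft-voting sketch.

```python
import numpy as np

def soft_vote(prob_a, prob_b, w_a=0.5, w_b=0.5):
    """Combine two classifiers' per-sample class-probability outputs by
    weighted averaging; return the winning class index per sample."""
    combined = w_a * np.asarray(prob_a) + w_b * np.asarray(prob_b)
    return combined.argmax(axis=1)

# Hypothetical per-sample class probabilities from a random forest and an MLP
rf_probs  = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
mlp_probs = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])
labels = soft_vote(rf_probs, mlp_probs)
```

Averaging probabilities rather than hard labels lets a confident member outvote an uncertain one, which is one common way an ensemble gains both accuracy and diversity.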

  15. Arsenic removal from water

    Science.gov (United States)

    Moore, Robert C.; Anderson, D. Richard

    2007-07-24

    Methods for removing arsenic from water by addition of inexpensive and commonly available magnesium oxide, magnesium hydroxide, calcium oxide, or calcium hydroxide to the water. The hydroxide has a strong chemical affinity for arsenic and rapidly adsorbs arsenic, even in the presence of carbonate in the water. Simple and commercially available mechanical methods for removal of magnesium hydroxide particles with adsorbed arsenic from drinking water can be used, including filtration, dissolved air flotation, vortex separation, or centrifugal separation. A method for continuous removal of arsenic from water is provided. Also provided is a method for concentrating arsenic in a water sample to facilitate quantification of arsenic, by means of magnesium or calcium hydroxide adsorption.

  16. A GIS semiautomatic tool for classifying and mapping wetland soils

    Science.gov (United States)

    Moreno-Ramón, Héctor; Marqués-Mateu, Angel; Ibáñez-Asensio, Sara

    2016-04-01

    Wetlands are among the most productive and biodiverse ecosystems in the world. Water is the main resource and controls the relationships between the agents and factors that determine the quality of the wetland; however, vegetation, wildlife and soils are also essential to understanding these environments. Soils may be the least studied resource because of their sampling problems, and as a result wetland soils have sometimes been classified only broadly. Traditional methodology states that homogeneous soil units should be based on the five soil-forming factors. A problem arises when the variation of one soil-forming factor is too small to differentiate a change in soil units, or when another factor that is not normally taken into account (e.g. a fluctuating water table) is at work. This is the case in the Albufera of Valencia, a coastal wetland on the central-eastern coast of the Iberian Peninsula (Spain). Its saline water table fluctuates throughout the year and generates differences in soils. The objectives of this study were therefore to establish a reliable methodology that avoids these problems and to develop a GIS tool for defining homogeneous soil units in wetlands. This step is essential for the soil scientist, who has to decide the number of soil profiles in a study. The research was conducted with data from 133 soil pits from a previous study of the wetland, in which soil parameters of 401 samples (organic carbon, salinity, carbonates, n-value, etc.) were analysed. In a first stage, GIS layers were generated according to depth using the Bayesian Maximum Entropy method. Subsequently, a program based on decision-tree algorithms was designed in a GIS environment. The goal of this tool was to create a single layer for each soil variable, according to the different diagnostic criteria of Soil Taxonomy (properties, horizons and diagnostic epipedons). At the end, the program
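
The decision-tree assignment of samples to homogeneous units can be sketched as simple threshold rules on the analysed parameters. The unit names and cut-off values below are illustrative stand-ins, not the actual Soil Taxonomy diagnostic criteria.

```python
# Minimal decision-tree sketch: assign each analysed sample to a soil unit
# from threshold rules.  All thresholds and unit names are hypothetical.
def classify_soil(sample):
    if sample["salinity_dS_m"] > 4.0:
        return "saline unit"
    if sample["organic_carbon_pct"] > 12.0:
        return "organic (histic) unit"
    if sample["n_value"] > 0.7:
        return "under-consolidated (fluid) unit"
    return "mineral unit"

samples = [
    {"salinity_dS_m": 6.1, "organic_carbon_pct": 2.0,  "n_value": 0.4},
    {"salinity_dS_m": 1.2, "organic_carbon_pct": 15.0, "n_value": 0.4},
    {"salinity_dS_m": 0.8, "organic_carbon_pct": 3.0,  "n_value": 0.9},
]
units = [classify_soil(s) for s in samples]
```

In a GIS setting the same rules would be evaluated per raster cell, producing one thematic layer per diagnostic criterion.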

  17. CLASSIFYING BENIGN AND MALIGNANT MASSES USING STATISTICAL MEASURES

    Directory of Open Access Journals (Sweden)

    B. Surendiran

    2011-11-01

    Full Text Available Breast cancer is the most common disease found in women and causes the second-highest rate of death after lung cancer. The digital mammogram is an X-ray of the breast captured for analysis, interpretation and diagnosis. According to the Breast Imaging Reporting and Data System (BIRADS), benign and malignant masses can be differentiated by shape, size and density, which is how radiologists visualize mammograms. According to BIRADS mass shape characteristics, benign masses tend to be round, oval or lobular in shape, whereas malignant masses are lobular or irregular. Measuring regular and irregular shapes mathematically is a difficult task, since there is no single measure that differentiates the various shapes. In this paper, the malignant and benign masses present in mammograms are classified using statistical measures based on a Hue, Saturation and Value (HSV) weight function. The weight function is robust against noise and captures the degree of gray content of each pixel. The statistical measures use the gray weight value instead of the gray pixel value to discriminate masses effectively. The 233 mammograms from the Digital Database for Screening Mammography (DDSM) benchmark dataset have been used. The PASW data mining modeler was used to construct a neural network for identifying the importance of the statistical measures. Based on the important statistical measures obtained, a C5.0 tree was constructed with a 60-40 data split. The experimental results are encouraging, and they agree with the standard specified by the American College of Radiology BIRADS system.
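
Computing statistical measures on weighted rather than raw gray values can be sketched as follows. The weight function here is a simple normalisation standing in for the paper's HSV-based one, and the two pixel lists are invented.

```python
import numpy as np

def weighted_measures(pixels, weight):
    """Statistical measures on weighted gray values: each gray level is
    passed through the weight function before taking moments."""
    w = weight(np.asarray(pixels, dtype=float))
    mean = w.mean()
    std = w.std()
    skew = ((w - mean) ** 3).mean() / std ** 3 if std > 0 else 0.0
    return mean, std, skew

# Illustrative weight function -- a stand-in for the paper's HSV-based one.
weight = lambda g: g / 255.0

benign_like    = [100, 110, 105, 95, 100]   # compact, uniform gray levels
malignant_like = [40, 200, 90, 250, 30]     # spread-out gray levels
m1 = weighted_measures(benign_like, weight)
m2 = weighted_measures(malignant_like, weight)
```

In this toy setting the dispersion measure already separates the uniform "benign-like" region from the heterogeneous "malignant-like" one; a C5.0 tree would then pick thresholds on such measures.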

  18. VIRTUAL MINING MODEL FOR CLASSIFYING TEXT USING UNSUPERVISED LEARNING

    Directory of Open Access Journals (Sweden)

    S. Koteeswaran

    2014-01-01

    Full Text Available Data mining is advancing in many areas; among its most notable applications are big data, multimedia mining and text mining. Researchers demonstrate their contributions with substantial improvements expressed through mathematical representation. Approaches to these problems can be classified into mathematical and implementation models. A mathematical model consists of straightforward rules and formulas tied to the problem definition of a particular domain. An implementation model, by contrast, derives knowledge from real-time decision-making behaviour, such as artificial intelligence and swarm intelligence, and has a more complex rule set than a mathematical model: it mines a knowledge model from a collection of data sets and attributes and applies this knowledge to the problem at hand. The objective of our work is to efficiently mine knowledge from unstructured text documents. To do so we apply text mining, a sub-domain of data mining. We propose the Virtual Mining Model (VMM) for effective text clustering. The VMM involves the learning of conceptual terms, which are grouped in a Significant Term List (STL); the model is an appropriate combination of a layer-1 architecture with ABI (Analysis of Bilateral Intelligence). Frequent updating of the conceptual terms in the STL is important for effective clustering. As the results show, an artificial neural network based unsupervised learning algorithm is used to learn textual patterns in the Virtual Mining Model, and this paper proposes such a learning algorithm for these terminologies.

  19. Locating and classifying defects using an hybrid data base

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Aviles, A; Diaz Pineda, A [Tecnologico de Estudios Superiores de Coacalco. Av. 16 de Septiembre 54, Col. Cabecera Municipal. C.P. 55700 (Mexico); Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G [Instituto Politecnico Nacional. ESIME-SEPI. Unidad Profesional ' Adolfo Lopez Mateos' Edificio 5, 30 Piso, Colonia Lindavista. Gustavo A. Madero. 07738 Mexico D.F. (Mexico); Durodola, J F [School of Technology, Oxford Brookes University, Headington Campus, Gipsy Lane, Oxford OX3 0BP (United Kingdom); Beltran Fernandez, J A, E-mail: alelunaav@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: jdurodola@brookes.ac.uk

    2011-07-19

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced into PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of the bars were plain and the other half had a notch (3 mm x 4 mm) close to the defect area (19 mm x 16 mm). The analysis was done with an Artificial Neural Network (ANN) and its optimization with an Adaptive Neuro-Fuzzy Inference System (ANFIS). A hybrid database was developed with numerical and experimental results. Synthetic data were generated with the finite element method using the SOLID95 element of the ANSYS code, and a parametric analysis was carried out. Only one defect per bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated; half of them were plain and the other half had a notch. All the input data were classified into two groups, each of 230 cases corresponding to one of the two sizes of voids mentioned above. In parallel, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases with one void were obtained experimentally; the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic database. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 remaining cases. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the accuracy of defect localization and classification degraded as notches were introduced in such bars. In the present paper, improved results were obtained when a hybrid database was used.

  20. A novel clinical tool to classify facioscapulohumeral muscular dystrophy phenotypes.

    Science.gov (United States)

    Ricci, Giulia; Ruggiero, Lucia; Vercelli, Liliana; Sera, Francesco; Nikolic, Ana; Govi, Monica; Mele, Fabiano; Daolio, Jessica; Angelini, Corrado; Antonini, Giovanni; Berardinelli, Angela; Bucci, Elisabetta; Cao, Michelangelo; D'Amico, Maria Chiara; D'Angelo, Grazia; Di Muzio, Antonio; Filosto, Massimiliano; Maggi, Lorenzo; Moggio, Maurizio; Mongini, Tiziana; Morandi, Lucia; Pegoraro, Elena; Rodolico, Carmelo; Santoro, Lucio; Siciliano, Gabriele; Tomelleri, Giuliano; Villa, Luisa; Tupler, Rossella

    2016-06-01

    Based on the 7-year experience of the Italian Clinical Network for FSHD, we revised the FSHD clinical form to describe, in a harmonized manner, the phenotypic spectrum observed in FSHD. The new Comprehensive Clinical Evaluation Form (CCEF) defines various clinical categories by the combination of different features. The inter-rater reproducibility of the CCEF was assessed between two examiners using kappa statistics by evaluating 56 subjects carrying the molecular marker used for FSHD diagnosis. The CCEF classifies: (1) subjects presenting facial and scapular girdle muscle weakness typical of FSHD (category A, subcategories A1-A3), (2) subjects with muscle weakness limited to scapular girdle or facial muscles (category B, subcategories B1, B2), (3) asymptomatic/healthy subjects (category C, subcategories C1, C2), (4) subjects with myopathic phenotype presenting clinical features not consistent with the FSHD canonical phenotype (category D, subcategories D1, D2). The inter-rater reliability study showed an excellent concordance of the final four CCEF categories, with κ equal to 0.90; 95 % CI (0.71; 0.97). Absolute agreement was observed for categories C and D, excellent agreement for category A [κ = 0.88; 95 % CI (0.75; 1.00)], and good agreement for category B [κ = 0.79; 95 % CI (0.57; 1.00)]. The CCEF supports the harmonized phenotypic classification of patients and families. The categories outlined by the CCEF may assist diagnosis, genetic counseling and natural history studies. Furthermore, the CCEF categories could support the selection of patients in randomized clinical trials. This precise categorization might also promote the search for genetic factor(s) contributing to the phenotypic spectrum of disease. PMID:27126453
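
The inter-rater agreement statistic used above can be reproduced in outline with a plain implementation of Cohen's kappa. The two examiners' category lists below are hypothetical, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # expected chance agreement from each rater's marginal category rates
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical CCEF category assignments by two examiners
examiner_1 = ["A", "A", "B", "C", "D", "A", "B", "C"]
examiner_2 = ["A", "A", "B", "C", "D", "A", "C", "C"]
kappa = cohens_kappa(examiner_1, examiner_2)
```

Here the raters disagree on one of eight subjects, giving κ = 19/23 ≈ 0.83, which on the usual scale would count as excellent agreement.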

  1. Multimodal fusion of polynomial classifiers for automatic person recognition

    Science.gov (United States)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of current evolving technological systems, and consumers demand unobtrusive and non-invasive approaches. In our previous work, we demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems: the required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current-generation speaker verification systems. The first is the difficulty of acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work for a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as to improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual; in addition, the lip dynamics can aid speech recognition and provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late
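
A "polynomial classifier" of the kind named above is commonly an MSE-trained linear model over polynomially expanded features. The sketch below assumes that reading; the geometric lip features and class labels are invented for illustration.

```python
import numpy as np

def poly_expand(x):
    """Second-order polynomial expansion of a feature vector:
    [1, x_i ..., x_i * x_j ...]."""
    x = np.asarray(x, dtype=float)
    terms = [1.0] + list(x)
    for i in range(len(x)):
        for j in range(i, len(x)):
            terms.append(x[i] * x[j])
    return np.array(terms)

def train(X, y, n_classes):
    """MSE-trained polynomial classifier: linear weights on the expanded
    features, fitted against one-hot targets by least squares."""
    Phi = np.array([poly_expand(x) for x in X])
    T = np.eye(n_classes)[y]                      # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return W

def predict(W, X):
    Phi = np.array([poly_expand(x) for x in X])
    return (Phi @ W).argmax(axis=1)

# Toy geometric lip features (e.g. width, height) for two speakers
X = [[2.0, 1.0], [2.1, 0.9], [4.0, 2.0], [3.9, 2.2]]
y = [0, 0, 1, 1]
W = train(X, y, 2)
new_pred = predict(W, [[2.05, 0.95], [4.1, 2.1]])   # unseen samples
```

The least-squares fit against one-hot targets is what makes this family of classifiers fast to train and easy to fuse with an audio-domain score.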

  2. Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers

    International Nuclear Information System (INIS)

    Alzheimer's disease (AD) is the most common form of dementia affecting seniors age 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools to create a learning database in the first step and to predict the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease, which is the most critical brain disease for the senior population. Advanced medical imaging such as the volumetric MRI can detect changes in the size of brain regions due to the loss of the brain tissues. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists in detecting and staging the disease. In the present investigation, we present a pseudo-automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs segmentation in order to detect the region of brain's ventricle, generates a feature vector that characterizes this region, creates an SQL database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we have used the MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

  3. Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers

    Science.gov (United States)

    Matoug, S.; Abdel-Dayem, A.; Passi, K.; Gross, W.; Alqarni, M.

    2012-02-01

    Alzheimer's disease (AD) is the most common form of dementia affecting seniors age 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools to create a learning database in the first step and to predict the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease, which is the most critical brain disease for the senior population. Advanced medical imaging such as the volumetric MRI can detect changes in the size of brain regions due to the loss of the brain tissues. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists in detecting and staging the disease. In the present investigation, we present a pseudo-automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs segmentation in order to detect the region of brain's ventricle, generates a feature vector that characterizes this region, creates an SQL database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we have used the MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
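
The final classification step above operates on fixed-length feature vectors describing the ventricle region. As a deliberately simple stand-in for the SVM named in the abstract, the sketch below classifies hypothetical ventricle features with a nearest-centroid rule; the feature names and the "CN"/"AD" labels are assumptions for illustration.

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid classifier: one mean feature vector per class.
    A simple stand-in for the SVM used in the paper."""
    X, y = np.asarray(X, float), np.asarray(y)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    X = np.asarray(X, float)
    labels = list(centroids)
    dists = np.array([[np.linalg.norm(x - centroids[c]) for c in labels]
                      for x in X])
    return [labels[i] for i in dists.argmin(axis=1)]

# Hypothetical ventricle features: [normalised area, perimeter ratio];
# ventricles enlarge as brain tissue is lost, so AD-like vectors are larger.
X = [[0.10, 0.30], [0.12, 0.28], [0.30, 0.60], [0.32, 0.62]]
y = ["CN", "CN", "AD", "AD"]        # CN = cognitively normal
centroids = fit_centroids(X, y)
pred = predict(centroids, [[0.13, 0.31], [0.29, 0.58]])
```

Any classifier taking the same feature vectors, including the SVM of the paper, slots into this place in the pipeline.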

  4. UV irradiation and H{sub 2} passivation processes to classify and distinguish the origin of luminescence from thin film of nc-Si deposited by PECVD technique

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Atif Mossad, E-mail: atifali@kku.edu.sa

    2015-02-25

    Highlights: • A shoulder peak appeared at around 493 cm{sup −1} for the sample after UV illumination and H{sub 2} passivation. • The nanocrystalline silicon (nc-Si) thin film exhibits a dominant (1 1 0) texture. • The spin density of ESR centers, N{sub s}, and the g values increased after UV illumination. • There is a close relation between the increases in the intensity of the 1.86 eV PL band and in N{sub s} after UV irradiation. • We succeeded in distinguishing and classifying QC and defects as the origin of each PL band. - Abstract: The synthesis and characterization of nanocrystalline silicon (nc-Si) deposited by the plasma-enhanced chemical vapor deposition method are reported. Optical and microstructural studies were carried out by UV–vis absorption spectroscopy, photoluminescence (PL), X-ray diffraction, high-resolution transmission electron microscopy, selected-area electron diffraction and Raman spectroscopy measurements. Two PL bands were observed at peak energies of 1.86 eV and 2.23 eV. There has been intense debate as to whether visible PL at room temperature originates from quantum confinement (QC) in nc-Si or from defect states. We have used electron spin resonance and hydrogen (H{sub 2}) passivation processes to distinguish between defect states and excitons confined to the nc-Si as the source of the PL. We find that the origin of the PL in the sample can be controlled with the aid of ultraviolet (UV) irradiation, which introduces defects that become the origin of the PL, while subsequent hydrogen passivation removes the defects, resulting in PL from QC states, i.e., switching the PL from defect-related in the as-crystallized state to QC after passivation, and back to defect-related after subsequent irradiation. The observation of light emission from nc-Si at energies of 1.8–2.5 eV should be very important for novel optoelectronic device applications of Si-based materials. In addition, this work opens up the possibility of growing nc-Si thin films at low deposition temperature with

  5. Plate removal following orthognathic surgery.

    Science.gov (United States)

    Little, Mhairi; Langford, Richard Julian; Bhanji, Adam; Farr, David

    2015-11-01

    The objectives of this study were to determine the removal rates of plates used during orthognathic surgery at James Cook University Hospital and to describe the reasons for plate removal. 202 consecutive orthognathic cases were identified between July 2004 and July 2012, and demographics and procedure details were collected for these patients. Patients from this group who returned to theatre for plate removal between July 2004 and November 2012 were identified, and their notes were analysed for data including the reason for plate removal, age, smoking status, sex and time to plate removal. 3.2% of plates were removed, with proportionally more plates removed from the mandible than the maxilla. 10.4% of patients required removal of one or more plates. Most plates were removed within the first post-operative year, and the commonest reasons for plate removal were plate exposure and infection. The plate removal rates in our study are comparable to those seen in the literature.

  6. The Effect of Implementing Gene Expression Classifier on Outcomes of Thyroid Nodules with Indeterminate Cytology.

    Science.gov (United States)

    Abeykoon, Jithma Prasad; Mueller, Luke; Dong, Frank; Chintakuntlawar, Ashish V; Paludo, Jonas; Mortada, Rami

    2016-08-01

    Thyroid nodules are classified into six cytological categories under the Bethesda classification system. Two of these categories, atypia of undetermined significance (AUS) and suspicious for a follicular neoplasm (SFN), are further labeled as "indeterminate" diagnoses. Starting in June 2012, the Kansas University-Wichita Endocrine clinic implemented the Afirma® Gene Expression Classifier (AGEC) to evaluate the need for surgical resection of thyroid nodules in patients with an indeterminate diagnosis. Electronic medical records of patients who underwent thyroid nodule fine-needle aspiration from 2004-2014 were reviewed. The aim of this study was to determine whether implementing the AGEC was associated with a decreased surgical recommendation rate, decreased cost, and an increased incidence of thyroid malignancy diagnosed by surgery in patients with an indeterminate diagnosis. A total of 299 consecutive patients' charts were screened. Sixty-one (20 %) patients had an indeterminate diagnosis; of these, 27 (44 %) underwent evaluation before and 34 (56 %) after AGEC implementation. The surgical recommendation rate for patients with indeterminate findings decreased from 81.5 to 50 % (p = 0.01) after AGEC implementation. Surgical pathology was read as malignant in 20 and 85.7 % (p < 0.01) of patients before and after AGEC implementation, respectively. A primary cost-benefit estimate showed that implementing the AGEC saved $1048/patient in the medical evaluation and initial management of patients with an indeterminate diagnosis. AGEC implementation decreased the number of surgical recommendations, lowered the financial burden, and increased the incidence of thyroid malignancy diagnosed by surgical pathology in patients with an indeterminate diagnosis of thyroid nodules. PMID:27102883

  7. Removing the remaining ridges in fingerprint segmentation

    Institute of Scientific and Technical Information of China (English)

    ZHU En; ZHANG Jian-ming; YIN Jian-ping; ZHANG Guo-min; HU Chun-feng

    2006-01-01

    Fingerprint segmentation is an important step in fingerprint recognition. It aims to identify non-ridge regions and unrecoverable low-quality ridge regions and exclude them as background, so as to reduce the time spent on image processing and to avoid detecting false features. Both high- and low-quality ridge regions often contain remaining ridges, afterimages of a previously scanned finger, which should be excluded from the foreground. However, existing segmentation methods generally do not take this case into consideration, and remaining ridge regions are often falsely classified as foreground, producing spurious features and erroneously including unrecoverable regions in the foreground. This paper proposes a two-step fingerprint segmentation method aimed at removing remaining ridge regions from the foreground. In the first step, non-ridge regions and unrecoverable low-quality ridge regions are removed as background; the foreground produced by this step is then further analysed for possible removal of remaining ridge regions. The proposed method proved effective in avoiding the detection of false ridges and in improving minutiae detection.

  8. A Solid Trap and Thermal Desorption System with Application to a Medical Electronic Nose

    Directory of Open Access Journals (Sweden)

    Xuntao Xu

    2008-11-01

    Full Text Available In this paper, a solid trap/thermal desorption-based odorant gas condensation system has been designed and implemented for measuring low-concentration odorant gases, and the technique was successfully applied to a medical electronic nose system. The developed system consists of a flow control unit, a temperature control unit and a sorbent tube. Theoretical analysis and experimental results indicate that gas condensation, together with the medical electronic nose system, can significantly reduce the detection limit of the nose system and increase its ability to distinguish low-concentration gas samples. In addition, the integrated system can remove the influence of background components and fluctuations in the operational environment: even with strong disturbances such as water vapour and ethanol gas, the developed system can classify the test samples accurately.

  9. Contamination removal using various solvents and methodologies

    Science.gov (United States)

    Jeppsen, J. C.

    1989-01-01

    Critical and non-critical bonding surfaces must be kept free of contamination that may cause potential unbonds. For example, an aft-dome section of a redesigned solid rocket motor that had been contaminated with hydraulic oil did not appear to be sufficiently cleaned when inspected by the optically stimulated electron emission process (Con Scan) after it had been cleaned using a hand double wipe cleaning method. As a result, current and new cleaning methodologies as well as solvent capability in removing various contaminant materials were reviewed and testing was performed. Bonding studies were also done to verify that the cleaning methods used in removing contaminants provide an acceptable bonding surface. The removal of contaminants from a metal surface and the strength of subsequent bonds were tested using the Martin Marietta and double-wipe cleaning methods. Results are reported.

  10. Removing Welding Fumes

    Science.gov (United States)

    Moore, Lloyd J.; Hall, Vandel L.

    1987-01-01

    Portable exhaust duct for machining and welding shops removes oil mist, dust, smoke, and fumes. Duct used with shop exhaust system, inlets of which placed at various convenient locations in shop floor. Flanged connector on underside of wheeled base links flexible tube to exhaust system under floor. Made especially for welding in room with low ceiling.

  11. Combination of designed immune based classifiers for ERP assessment in a P300-based GKT

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Moradi

    2012-08-01

    Full Text Available Constructing a precise classifier is an important issue in pattern recognition. Combining the decisions of several competing classifiers to achieve improved classification accuracy has attracted interest in many research areas. In this study, the Artificial Immune System (AIS), an effective artificial intelligence technique, was used to design several efficient classifiers. The combination of multiple immune-based classifiers was tested on ERP assessment in a P300-based GKT (Guilty Knowledge Test). Experimental results showed that the proposed classifier, named Compact Artificial Immune System (CAIS), was a successful classification method and competitive with other classifiers such as K-nearest neighbour (KNN), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM). It was also observed in the experiments that using decision fusion techniques for multiple classifier combination led to better recognition results. The best recognition rate achieved by CAIS was 80.90%, an improvement over the other classification methods applied in our study.

  12. Optical Diagnostics for Classifying Stages of Dental Erythema

    Science.gov (United States)

    Davis, Matthew J.; Splinter, Robert; Lockhart, Peter; Brennan, Michael; Fox, Philip C.

    2003-02-01

    Periodontal disease is a term used to describe an inflammatory disease affecting the tissues surrounding and supporting the teeth. Periodontal diseases are some of the most common chronic disorders affecting humans in all parts of the world. Treatment usually involves the removal of plaque and calculus by scaling and polishing the tooth. In some cases a surgical reduction of hyperplastic tissue may also be required. In addition, periodontitis is a risk factor for systemic disorders such as cardiovascular disease and diabetes. Current detection methods are qualitative, inaccurate, and often fail to detect periodontal disease in its early, reversible stages. An early detection method exploiting the relationship between periodontal disease and erythema is therefore needed. To this end, we are developing an optical erythema meter to diagnose periodontal disease in its reversible, gingival stage. Healthy and diseased gum tissue were discriminated using the reflection of two illuminating wavelengths provided by light-emitting diodes, operating at wavelengths chosen to target the absorption and reflection spectra of each tissue type (healthy or diseased, and the kind of disease). Three different colored gels could be successfully distinguished with a statistical significance of P < 0.05.

  13. Preparation of a removable polyurethane encapsulant

    Energy Technology Data Exchange (ETDEWEB)

    Parker, B.G.

    1976-08-01

    The preparation of polyurethane encapsulants, based on polyether diol/diisocyanate prepolymers and 1,4-butanediol, which are soluble in several organic solvents, was investigated. Since these materials can be easily removed, repair of electronic circuitry found defective in potted units can be readily accomplished. Polyether diols of varying molecular weights were reacted with toluene diisocyanate (TDI) and methylene diphenylisocyanate (MDI) to produce stable prepolymers. Several properties of both the isocyanate prepolymers and 1,4-butanediol cured polyurethane encapsulants are presented.

  14. Win percentage: a novel measure for assessing the suitability of machine classifiers for biological problems

    Science.gov (United States)

    2012-01-01

    Background Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set. Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. 
Conclusions Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and application but also on the

  15. Win percentage: a novel measure for assessing the suitability of machine classifiers for biological problems

    Directory of Open Access Journals (Sweden)

    Parry R Mitchell

    2012-03-01

    Full Text Available Abstract Background Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set. Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. 
Conclusions Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and
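
    As a rough illustration of the win-percentage idea, the sketch below estimates it by Monte Carlo on synthetic data, comparing two simple stand-in classifiers (nearest centroid and 1-nearest neighbour) over random feature pairs; the data, classifiers and sample sizes are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 60 samples, 30 features, signal in the first 5.
n, d = 60, 30
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1, :5] += 1.0

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

def one_nn_acc(Xtr, ytr, Xte, yte):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return (ytr[d2.argmin(1)] == yte).mean()

# Win percentage: fraction of random feature sets on which each classifier
# strictly outperforms its peer (ties counted for neither).
tr = rng.permutation(n)[:40]
te = np.setdiff1d(np.arange(n), tr)
trials, wins = 200, [0, 0]
for _ in range(trials):
    feats = rng.choice(d, size=2, replace=False)   # a random feature pair
    a = nearest_centroid_acc(X[np.ix_(tr, feats)], y[tr], X[np.ix_(te, feats)], y[te])
    b = one_nn_acc(X[np.ix_(tr, feats)], y[tr], X[np.ix_(te, feats)], y[te])
    wins[0] += int(a > b)
    wins[1] += int(b > a)
win_pct = [w / trials for w in wins]
print(win_pct)
```

Because ties are counted for neither classifier, the two win percentages need not sum to one, which is exactly the non-zero-probability reading described in the abstract.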

  16. Multivariate models to classify Tuscan virgin olive oils by zone.

    Directory of Open Access Journals (Sweden)

    Alessandri, Stefano

    1999-10-01

    Full Text Available In order to study and classify Tuscan virgin olive oils, 179 samples were collected. They were obtained from drupes harvested during the first half of November, from three different zones of the Region. The sampling was repeated for 5 years. Fatty acids, phytol, aliphatic and triterpenic alcohols, triterpenic dialcohols, sterols, squalene and tocopherols were analyzed. A subset of variables was considered. They were selected in a preceding work as the most effective and reliable from the univariate point of view. The analytical data were transformed (except for cycloartenol) to compensate for annual variations; the mean for the East zone was subtracted from each value within each year. Univariate three-class models were calculated and further variables discarded. Then multivariate three-zone models were evaluated, including phytol (which was always selected) and all the combinations of palmitic, palmitoleic and oleic acid, tetracosanol, cycloartenol and squalene. Models including from two to seven variables were studied. The best model shows by-zone classification errors of less than 40%, by-zone within-year classification errors of less than 45% and a global classification error equal to 30%. This model includes phytol, palmitic acid, tetracosanol and cycloartenol.

    In order to study and classify Tuscan virgin olive oils, 179 samples were used, obtained from fruits harvested during the first half of November from three different zones of the Region. The sampling was repeated for 5 years. Fatty acids, phytol, aliphatic and triterpenic alcohols, triterpenic dialcohols, sterols, squalene and tocopherols were analyzed. A subset of variables was considered, selected in a previous work as the most effective and reliable from the univariate point of view. The analytical data were transformed (except for cycloartenol) to compensate for annual variations, subtracting the mean of the East zone from each value within each year.

  17. Hard electronics; Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    In the fields of power conversion devices and broadcasting/communication amplifiers, high power, high frequency and low losses are desirable. Further, electronic elements for aerospace, aeronautical and geothermal surveys require heat resistance up to 500°C. Devices that meet such hard specifications are called hard electronic devices. However, Si, which is at the core of present electronics, cannot fully meet these specifications because of restrictions arising from its physical properties. Accordingly, new device materials and structures needed to construct hard electronics were examined, along with the technologies to develop them to the IC level. These include: technology for devices and ICs based on new semiconductors such as SiC and diamond, which can handle higher temperature, higher power and higher frequency than Si while also reducing losses; vacuum microelectronics technology using ultra-micro, high-luminance electron emitters that exploit the negative electron affinity of diamond and similar materials; and technology for devices based on oxides, which exhibit a variety of electric properties. 321 refs., 194 figs., 8 tabs.

  18. Investigations in gallium removal

    Energy Technology Data Exchange (ETDEWEB)

    Philip, C.V.; Pitt, W.W. [Texas A and M Univ., College Station, TX (United States); Beard, C.A. [Amarillo National Resource Center for Plutonium, TX (United States)

    1997-11-01

    Gallium present in weapons plutonium must be removed before it can be used for the production of mixed-oxide (MOX) nuclear reactor fuel. The main goal of the preliminary studies conducted at Texas A and M University was to assist in the development of a thermal process to remove gallium from a gallium oxide/plutonium oxide matrix. This effort is being conducted in close consultation with the Los Alamos National Laboratory (LANL) personnel involved in the development of this process for the US Department of Energy (DOE). Simple experiments were performed on gallium oxide, and cerium-oxide/gallium-oxide mixtures, heated to temperatures ranging from 700--900 C in a reducing environment, and a method for collecting the gallium vapors under these conditions was demonstrated.

  19. Facilities removal working group

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This working group's first objective is to identify major economic, technical, and regulatory constraints on operator practices and decisions relevant to offshore facilities removal. Then, the group will try to make recommendations as to regulatory and policy adjustments, additional research, or process improvements and/or technological advances that may be needed to improve the efficiency and effectiveness of the removal process. The working group will focus primarily on issues dealing with Gulf of Mexico platform abandonments. In order to make the working group sessions as productive as possible, the Facilities Removal Working Group will focus on three topics that address a majority of the concerns and/or constraints relevant to facilities removal: (1) Explosive Severing and its Impact on Marine Life, (2) Pile and Conductor Severing, and (3) Deep Water Abandonments. This paper will outline the current state of practice in the offshore industry, identifying current regulations and specific issues encountered when addressing each of the three main topics above. The intent of the paper is to highlight potential issues for panel discussion, not to provide a detailed review of all data relevant to the topic. Before each panel discussion, key speakers will review data and information to facilitate development and discussion of the main issues of each topic. Please refer to the attached agenda for the workshop format, key speakers, presentation topics, and panel participants. The goal of the panel discussions is to identify key issues for each of the three topics above. The working group will also make recommendations on how to proceed on these key issues.

  20. KKG Group Paraffin Removal

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, Ralph

    2001-12-01

    The Rocky Mountain Oilfield Testing Center (RMOTC) has recently completed a test of a paraffin removal system developed by the KKG Group utilizing the technology of two Russian scientists, Gennady Katzyn and Boris Koggi. The system consists of chemical "sticks" that generate heat in situ to melt the paraffin deposits in oilfield tubing. The melted paraffin is then brought to the surface by the naturally flowing energy of the well.

  1. Laser removal of tattoos.

    Science.gov (United States)

    Tammaro, A; Fatuzzo, G; Narcisi, A; Abruzzese, C; Caperchi, C; Gamba, A; Parisella, F R; Persechino, S

    2012-01-01

    In Western countries the phenomenon of "tattooing" is expanding and tattoos are considered a new fashion among young people. In this paper we briefly trace the history of tattooing, the techniques used, the analysis of pigments used, and their possible adverse reactions. We also carried out a review of the international literature on the use of Q-switched laser in tattoo removal and its complications, and we describe our experience in the use of this technique. PMID:22697088

  2. Power plant removal costs

    International Nuclear Information System (INIS)

    The financial, regulatory and political significance of the estimated high removal costs of nuclear power plants has generated considerable interest in recent years, and the political significance has resulted in the Nuclear Regulatory Commission (NRC) eliminating the use of conventional depreciation accounting for the decontamination portion of the removal (decommissioning). While nuclear plant licensees are not precluded from utilizing conventional depreciation accounting for the demolition of non-radioactive structures and site restoration, state and federal utility regulators have not been favorably inclined to requests for this distinction. The realization that steam-generating units will be more expensive to remove, relative to their original cost, predates the realization that nuclear units will be expensive. However, the nuclear issues have overshadowed this realization, but are unlikely to continue to do so. Numerous utilities have prepared cost estimates for steam generating units, and this presentation discusses the implications of a number of such estimates that are a matter of public record. The estimates cover nearly 400 gas, oil, coal and lignite generating units. The earliest estimate was made in 1978, and for analysis purposes the author has segregated them between gas and oil units, and coal and lignite units

  3. Electronic Cigarettes

    Science.gov (United States)

    Electronic cigarettes (e-cigarettes) are battery operated products designed ... More about: the latest news and events about electronic cigarettes on this FDA page; electronic cigarette basics ...

  4. Fall Detector Using Discrete Wavelet Decomposition And SVM Classifier

    Directory of Open Access Journals (Sweden)

    Wójtowicz Bartłomiej

    2015-06-01

    Full Text Available This paper presents the design process and the results of a novel fall detector designed and constructed at the Faculty of Electronics, Military University of Technology. High sensitivity and low false alarm rates were achieved by using four independent sensors measuring different physical quantities together with sophisticated methods of signal processing and data mining. The manuscript discusses the study background, hardware development, and the alternative algorithms used for sensor data processing and fusion to identify the most efficient solution, as well as the final results from testing the Android application on a smartphone. The test was performed in four 6-h sessions (two sessions with female participants at the age of 28 years, one session with a male participant aged 28 years and one involving a man at the age of 49 years) and showed correct detection of all 40 simulated falls with only three false alarms. Our results confirmed the sensitivity of the proposed algorithm to be 100% with a nominal false alarm rate (one false alarm per 8 h).
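
    The wavelet feature step used by detectors of this kind can be illustrated with a single-level Haar DWT (a common choice; the paper does not necessarily use Haar): the detail-coefficient energy spikes for the abrupt transients typical of falls while staying near zero on a quiet baseline:

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar discrete wavelet transform:
    # pairwise sums (approximation) and differences (detail), scaled by 1/sqrt(2).
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

# Synthetic accelerometer trace: flat baseline with a sharp "impact".
sig = np.ones(64)
sig[33:36] = 6.0
_, detail = haar_dwt(sig)
impact_energy = float((detail ** 2).sum())   # spikes for abrupt transients

flat_energy = float((haar_dwt(np.ones(64))[1] ** 2).sum())
print(impact_energy, flat_energy)
```

In a full detector, energies like these (per decomposition level) would form part of the feature vector handed to the SVM classifier.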

  5. Ensemble regularized linear discriminant analysis classifier for P300-based brain-computer interface.

    Science.gov (United States)

    Onishi, Akinari; Natsume, Kiyohisa

    2013-01-01

    This paper demonstrates the improved classification performance of an ensemble classifier using regularized linear discriminant analysis (LDA) for a P300-based brain-computer interface (BCI). An ensemble classifier with LDA is sensitive to a lack of training data because the covariance matrices are estimated imprecisely. One solution to the lack of training data is to employ a regularized LDA, so we adopted it for the ensemble classifier of the P300-based BCI. Principal component analysis (PCA) was used for dimension reduction. As a result, the ensemble regularized LDA classifier showed significantly better classification performance than an ensemble of unregularized LDA classifiers. The proposed ensemble regularized LDA classifier is therefore robust against the lack of training data.
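
    The core of the approach can be sketched in a few lines of NumPy: a shrinkage-regularized LDA trained on bootstrap samples whose decision scores are averaged. The shrinkage rule, ensemble size and toy data below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def reg_lda_weights(X, y, lam=0.5):
    # Shrinkage-regularized LDA: pool the within-class covariance and
    # shrink it toward a scaled identity before inverting.
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    S = Xc.T @ Xc / len(X)
    S = (1 - lam) * S + lam * (np.trace(S) / X.shape[1]) * np.eye(X.shape[1])
    w = np.linalg.solve(S, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

# Toy ERP-like data: few trials relative to the dimensionality.
n, d = 40, 20
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1] += 0.7

# Ensemble: one regularized LDA per bootstrap sample; average the scores.
scores = np.zeros(n)
for _ in range(15):
    idx = rng.choice(n, size=n, replace=True)
    w, b = reg_lda_weights(X[idx], y[idx])
    scores += X @ w + b
pred = (scores > 0).astype(int)
print((pred == y).mean())
```

Without the shrinkage term, the pooled covariance of 40 trials in 20 dimensions is near-singular, which is exactly the small-sample instability the abstract describes.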

  6. Analysis of Parametric & Non Parametric Classifiers for Classification Technique using WEKA

    Directory of Open Access Journals (Sweden)

    Yugal kumar

    2012-07-01

    Full Text Available In the field of machine learning and data mining, much work has been done to construct new classification techniques/classifiers, and much research is ongoing to construct further new classifiers with the help of nature-inspired techniques such as Genetic Algorithms, Ant Colony Optimization, Bee Colony Optimization, Neural Networks, Particle Swarm Optimization, etc. Many researchers have provided comparative studies/analyses of classification techniques. This paper deals with another form of analysis of classification techniques, i.e. analysis of parametric and non-parametric classifiers. It identifies the parametric and non-parametric classifiers used in the classification process and provides a tree representation of these classifiers. For the analysis, four classifiers are used, of which two are parametric and the rest non-parametric in nature.

  7. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-04-15

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.
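
    The incremental-learning loop described above can be sketched as follows, with a nearest-centroid stand-in for the paper's supervised Fuzzy C-Means base classifier; the thresholds, ensemble sizes and simulated operating conditions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

class Centroid:
    # Minimal stand-in base learner (the paper uses a supervised Fuzzy
    # C-Means classifier; nearest-centroid keeps the sketch short).
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.stack([X[y == c].mean(0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.mu[None]) ** 2).sum(-1)
        return self.classes[d.argmin(1)]

def majority_vote(ensemble, X):
    votes = np.stack([m.predict(X) for m in ensemble])
    return np.array([np.bincount(col).argmax() for col in votes.T])

def incremental_learn(ensemble, X, y, threshold=0.9, n_bags=5):
    # Add bagged classifiers only when the current ensemble handles the
    # new operational condition poorly.
    if ensemble and (majority_vote(ensemble, X) == y).mean() >= threshold:
        return ensemble
    for _ in range(n_bags):
        idx = rng.choice(len(X), size=len(X), replace=True)
        ensemble.append(Centroid().fit(X[idx], y[idx]))
    return ensemble

def condition(shift):
    # Same two transient classes, observed at a shifted operating point.
    X = rng.normal(size=(60, 4)) + shift
    y = np.repeat([0, 1], 30)
    X[y == 1, 0] += 3.0
    return X, y

ens = []
for shift in (0.0, 5.0):
    Xc, yc = condition(shift)
    ens = incremental_learn(ens, Xc, yc)
    acc = (majority_vote(ens, Xc) == yc).mean()
    print(len(ens), acc)
```

The first condition seeds the ensemble; the second (shifted) condition is misclassified by the existing members, which triggers the addition of a new bagged batch, after which the majority vote handles both conditions.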

  8. The Entire Quantile Path of a Risk-Agnostic SVM Classifier

    CERN Document Server

    Yu, Jin; Zhang, Jian

    2012-01-01

    A quantile binary classifier uses the rule: classify x as +1 if P(Y = 1|X = x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has been shown that Support Vector Machines (SVMs) in the limit are quantile classifiers with t = 1/2. In this paper, we show that by using asymmetric costs of misclassification, SVMs can be appropriately extended to recover, in the limit, the quantile binary classifier for any t. We then present a principled algorithm to solve the extended SVM classifier for all values of t simultaneously. This has two implications: first, one can recover the entire conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]; second, we can build a risk-agnostic SVM classifier where the cost of misclassification need not be known a priori. Preliminary numerical experiments show the effectiveness of the proposed algorithm.
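
    The link between asymmetric misclassification costs and quantile classification can be illustrated with a weighted logistic regression stand-in (not the paper's SVM): weighting the positive class by (1 - t) and the negative class by t moves the decision boundary to where P(Y = 1|X = x) = t. All data and settings below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def weighted_logreg(X, y, w, iters=2000, lr=0.5):
    # Plain gradient descent on a weighted logistic loss; y in {0, 1}.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta -= lr * (Xb.T @ (w * (p - y))) / len(X)
    return beta

# 1-D data where the true conditional is P(Y=1|x) = sigmoid(3x),
# so the t-quantile boundary sits at x = logit(t)/3.
X = rng.normal(size=(400, 1))
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-3.0 * X[:, 0]))).astype(int)

bounds = []
for t in (0.25, 0.5, 0.75):
    # Asymmetric costs: weight (1 - t) on the positive class and t on the
    # negative class shifts the fitted log-odds by log((1 - t) / t).
    w = np.where(y == 1, 1.0 - t, t)
    beta = weighted_logreg(X, y, w)
    bounds.append(-beta[1] / beta[0])      # x where the weighted model flips sign
print([round(b, 2) for b in bounds])
```

On this synthetic data the recovered boundaries move from negative x at t = 0.25 through roughly zero at t = 0.5 to positive x at t = 0.75, matching the t-quantiles of the true conditional distribution.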

  9. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Directory of Open Access Journals (Sweden)

    Yinan Zhang

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) screening systems raise a financial problem. For further reducing DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high dimensional space. Both active learning and ensemble techniques elevate the performance of KELM when using a small training dataset. The committee proposes manual work to the doctor only when necessary, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of labeled retinal images and comparative classifiers are trained with 80% of labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.

  10. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Science.gov (United States)

    An, Mingqiang

    2016-01-01

    Diabetic retinopathy (DR) screening systems raise a financial problem. For further reducing DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high dimensional space. Both active learning and ensemble techniques elevate the performance of KELM when using a small training dataset. The committee proposes manual work to the doctor only when necessary, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of labeled retinal images and comparative classifiers are trained with 80% of labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.
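
    Setting the active-learning machinery aside, the KELM core is a closed-form kernel ridge solve. The sketch below is a generic KELM on toy two-cluster data standing in for extracted retinal-image features; the kernel width and regularization values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=10.0, gamma=0.1):
    # Kernel ELM training reduces to a closed-form regularized solve:
    # beta = (K + I/C)^(-1) y
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y.astype(float))

def kelm_predict(beta, Xtr, Xte, gamma=0.1):
    return (rbf(Xte, Xtr, gamma) @ beta > 0.5).astype(int)

# Toy stand-in for extracted retinal-image features: two noisy clusters.
Xtr = rng.normal(size=(80, 5)); ytr = np.repeat([0, 1], 40); Xtr[ytr == 1] += 1.5
Xte = rng.normal(size=(40, 5)); yte = np.repeat([0, 1], 20); Xte[yte == 1] += 1.5

beta = kelm_fit(Xtr, ytr)
acc = (kelm_predict(beta, Xtr, Xte) == yte).mean()
print(acc)
```

The absence of iterative training is what makes KELM "rapid": fitting is a single linear solve, which also makes it cheap to retrain each time the active-learning committee obtains a new label.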

  11. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    Energy Technology Data Exchange (ETDEWEB)

    Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully-realized statistical classifiers rely on a comprehensive set of tools for design, building, and implementation. PSD advances rely on improvements to the implemented algorithm, and can draw on conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully-designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
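
    The recommended ROC-at-GRR summary can be computed directly from scored event populations. The sketch below uses hypothetical Gaussian PSD score distributions, not real detector data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical PSD score distributions: overlapping Gaussians for the
# gamma (background) and neutron (signal) populations.
gam = rng.normal(0.0, 1.0, 100000)
neu = rng.normal(2.5, 1.0, 10000)

def tpr_at_grr(sig, bkg, grr):
    # Pick the threshold that rejects a fraction `grr` of the background,
    # then report the fraction of signal events kept above it.
    thr = np.quantile(bkg, grr)
    return (sig > thr).mean()

for grr in (0.99, 0.999):
    print(grr, round(float(tpr_at_grr(neu, gam, grr)), 3))
```

Reporting the operating point this way (signal acceptance at a fixed gamma rejection rate) conveys more than a single accuracy figure, since it pins the classifier to the background-rejection regime a real application requires.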

  12. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    Science.gov (United States)

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884
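
    A minimal sketch of the combination rule under the conditional-independence assumption: each test contributes a likelihood ratio read off its ROC operating point (sensitivity, specificity), and the ratios multiply into posterior odds. The operating points and prevalence below are illustrative values, not from the paper:

```python
# Hypothetical operating points of two conditionally independent tests,
# given as (sensitivity, specificity); values are illustrative only.
tests = [(0.80, 0.90), (0.70, 0.95)]
prior = 0.10                      # assumed disease prevalence

def posterior(results):
    # Multiply per-test likelihood ratios (valid under conditional
    # independence), then convert prior odds to a posterior probability.
    odds = prior / (1.0 - prior)
    for (sens, spec), r in zip(tests, results):
        lr = sens / (1.0 - spec) if r else (1.0 - sens) / spec
        odds *= lr
    return odds / (1.0 + odds)

for combo in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(combo, round(posterior(combo), 3))
```

Two concordant positives drive the posterior far above the prior while two negatives drive it far below, which is the maximum-likelihood combination rule the abstract describes, evaluated at one ROC operating point per test.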

  13. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    Science.gov (United States)

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  14. Multi-Stage Feature Selection Based Intelligent Classifier for Classification of Incipient Stage Fire in Building

    Directory of Open Access Journals (Sweden)

    Allan Melvin Andrew

    2016-01-01

    Full Text Available In this study, an early fire detection algorithm is proposed based on a low-cost array sensing system, utilising off-the-shelf gas sensors, dust particle sensors and ambient sensors such as temperature and humidity sensors. The odour or "smellprint" emanated from various fire sources and building construction materials at the early stage is measured. For this purpose, odour profile data from five common fire sources and three common building construction materials were used to develop the classification model. Normalised feature extraction of the smellprint data was performed before being subjected to the prediction classifier. These features represent the odour signals in the time domain. The obtained features undergo the proposed multi-stage feature selection technique and are then further reduced by Principal Component Analysis (PCA), a dimension reduction technique. The hybrid PCA-PNN based approach has been applied to different datasets from the in-house developed system and the portable electronic nose unit. Experimental classification results show that the dimension reduction performed by PCA has improved the classification accuracy and provided high reliability, regardless of ambient temperature and humidity variation, baseline sensor drift, different gas concentration levels and exposure to different heating temperature ranges.
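
    The PCA dimension-reduction step can be sketched via the SVD; the synthetic low-rank "odour profile" data below is only a stand-in for the real sensor features:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for odour-profile features: 40 samples of 12
# correlated sensor channels generated from 3 latent factors.
Z = rng.normal(size=(40, 3))
X = Z @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(40, 12))

def pca_fit(X, k):
    # PCA via the SVD of the centred data matrix.
    mu = X.mean(0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    explained = (s ** 2)[:k].sum() / (s ** 2).sum()
    return mu, Vt[:k], explained

mu, comps, explained = pca_fit(X, k=3)
Xr = (X - mu) @ comps.T        # reduced features passed to the classifier
print(Xr.shape, round(float(explained), 3))
```

Because correlated sensor channels concentrate their variance in a few components, the reduced features retain most of the information while shrinking the input to the downstream classifier (a PNN in the paper).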

  15. The Electron

    Energy Technology Data Exchange (ETDEWEB)

    Thomson, George

    1972-01-01

    Electrons are elementary particles of atoms that revolve around and outside the nucleus and have a negative charge. This booklet discusses how electrons relate to electricity, some applications of electrons, electrons as waves, electrons in atoms and solids, and the electron microscope, among other things.

  16. Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Hard material technologies were surveyed to establish hard electronics technology, which offers superior characteristics under harsh operating or environmental conditions compared with conventional Si devices. The following technologies were surveyed separately: (1) device and integration technologies for wide-gap hard semiconductors such as SiC, diamond and nitrides; (2) hard semiconductor device technology for vacuum microelectronics; and (3) hard new-material device technology for oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to address issues difficult to realize with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed, densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  17. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin, or if it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In past research using this approach, the control template parameters and thresholds were restricted to values from a given finite set of integers, a restriction that is unnecessary for template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.
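The core idea, designing a template as a maximal margin classifier over a Boolean truth table, can be illustrated with a hard-margin linear SVM. This is a sketch, not the paper's algorithm: scikit-learn's `SVC` with a large `C` stands in for the maximal margin classifier, and the 2-input OR function (inputs encoded as ±1, as is common in CNN template design) serves as the linearly separable example:

```python
import numpy as np
from sklearn.svm import SVC

# Truth table of the (linearly separable) 2-input OR function, ±1 encoding
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = np.array([-1, 1, 1, 1])

# Large C approximates the hard-margin (maximal margin) linear classifier
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("template weights:", w, "threshold:", b)
print("implements OR:", np.array_equal(np.sign(X @ w + b), y))
```

The learned weight vector and bias play the role of the control template parameters and threshold; note that no integer restriction is imposed on them, which is exactly the freedom the paper exploits.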

  18. A consensus prognostic gene expression classifier for ER positive breast cancer

    OpenAIRE

    Teschendorff, Andrew E.; Naderi, Ali; Barbosa-Morais, Nuno L.; Pinder, Sarah E; Ellis, Ian O.; Aparicio, Sam; Brenton, James D.; Caldas, Carlos

    2006-01-01

    Background A consensus prognostic gene expression classifier is still elusive in heterogeneous diseases such as breast cancer. Results Here we perform a combined analysis of three major breast cancer microarray data sets to hone in on a universally valid prognostic molecular classifier in estrogen receptor (ER) positive tumors. Using a recently developed robust measure of prognostic separation, we further validate the prognostic classifier in three external independent cohorts, confirming the...

  19. Using Multivariate Machine Learning Methods and Structural MRI to Classify Childhood Onset Schizophrenia and Healthy Controls

    OpenAIRE

    Deanna Greenstein; James D. Malley

    2012-01-01

    Introduction: Multivariate machine learning methods can be used to classify groups of schizophrenia patients and controls using structural magnetic resonance imaging (MRI). However, machine learning methods to date have not been extended beyond classification and contemporaneously applied in a meaningful way to clinical measures. We hypothesized that brain measures would classify groups, and that increased likelihood of being classified as a patient using regional brain measures would be posi...

  20. Adaptation in P300 brain-computer interfaces: A two-classifier co-training approach

    OpenAIRE

    Panicker, Rajesh C.; Sun, Ying; Puthusserypady, Sadasivan

    2010-01-01

    A co-training-based approach is introduced for constructing high-performance classifiers for P300-based brain-computer interfaces (BCIs), which were trained from very little data. It uses two classifiers: Fisher's linear discriminant analysis and Bayesian linear discriminant analysis, progressively teaching each other to build a final classifier that is robust and able to learn effectively from unlabeled data. Detailed analysis of the performance is carried out through extensive cross-validatio...
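The co-training loop described above can be sketched as follows. This is an illustration under stated assumptions, not the authors' implementation: the data are synthetic stand-ins for P300 feature vectors, and scikit-learn's `GaussianNB` is used as an illustrative substitute for Bayesian LDA. Each classifier pseudo-labels its most confident unlabeled sample and hands it to the *other* classifier's training set:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)

# Synthetic 2-class data; only 10 samples start out labeled
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
y_true = np.array([0] * 100 + [1] * 100)
y_work = y_true.copy()                       # pseudo-labels get written here

labeled = np.concatenate([rng.choice(100, 5, replace=False),
                          100 + rng.choice(100, 5, replace=False)])
pool = [i for i in range(200) if i not in set(labeled)]
train = [list(labeled), list(labeled)]       # one training set per classifier

# GaussianNB is an illustrative stand-in for Bayesian LDA
clfs = [LinearDiscriminantAnalysis(), GaussianNB()]

for _ in range(5):                           # a few co-training rounds
    for i, clf in enumerate(clfs):
        clf.fit(X[train[i]], y_work[train[i]])
    for i, clf in enumerate(clfs):
        if not pool:
            break
        proba = clf.predict_proba(X[pool])
        best = int(np.argmax(proba.max(axis=1)))   # most confident sample
        sample = pool.pop(best)
        y_work[sample] = int(np.argmax(proba[best]))
        train[1 - i].append(sample)                # teach the *other* classifier

acc = float(np.mean(clfs[0].predict(X) == y_true))
print("LDA accuracy after co-training:", acc)
```

The key design choice, also central to the paper, is that each classifier labels data for its partner rather than for itself, which limits the reinforcement of a single classifier's own mistakes.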