WorldWideScience

Sample records for classified removable electronic

  1. Sixty Percent Conceptual Design Report: Enterprise Accountability System for Classified Removable Electronic Media

    Energy Technology Data Exchange (ETDEWEB)

    B. Gardiner; L. Graton; J. Longo; T. Marks, Jr.; B. Martinez; R. Strittmatter; C. Woods; J. Joshua

    2003-05-03

    Classified removable electronic media (CREM) are tracked in several different ways at the Laboratory. To ensure greater security for CREM, we are creating a single, Laboratory-wide system to track CREM. We are researching technology that can be used to electronically tag and detect CREM, designing a database to track the movement of CREM, and planning to test the system at several locations around the Laboratory. We focus on affixing "smart tags" to items we want to track and installing gates at pedestrian portals to detect the entry or exit of tagged items. By means of an enterprise database, the system will track the entry and exit of tagged items into and from CREM storage vaults, vault-type rooms, access corridors, or boundaries of secure areas, as well as the identity of the person carrying an item. We are considering several options for tracking items that can give greater security, but at greater expense.
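
    A minimal sketch of the kind of portal-event record such an enterprise tracking database might hold is shown below; all names (PortalEvent, the tag and badge IDs, the in-memory list standing in for the database) are hypothetical illustrations, not part of the Laboratory's design.

```python
# Hypothetical sketch of a portal event for a tagged CREM item and a check of
# which items are currently outside storage; field names, IDs and the in-memory
# list standing in for the enterprise database are all illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Direction(Enum):
    ENTRY = "entry"
    EXIT = "exit"

@dataclass
class PortalEvent:
    tag_id: str            # ID read from the smart tag affixed to the item
    portal_id: str         # pedestrian portal or vault door where the read occurred
    badge_id: str          # identity of the person carrying the item
    direction: Direction
    timestamp: datetime

def items_currently_out(events: list) -> set:
    """Tags whose most recent event is an EXIT, i.e. items outside storage."""
    last = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        last[ev.tag_id] = ev.direction
    return {tag for tag, d in last.items() if d is Direction.EXIT}

log = [
    PortalEvent("TAG-0001", "VAULT-A-DOOR", "Z123456",
                Direction.EXIT, datetime.now(timezone.utc)),
]
print(items_currently_out(log))   # {'TAG-0001'}
```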

  2. Fuzzy classifier for fault diagnosis in analog electronic circuits.

    Science.gov (United States)

    Kumar, Ashwani; Singh, A P

    2013-11-01

    Many studies have presented different approaches for fault diagnosis in analog electronic circuits with fault models having ±50% variation in the component values. There is still a need for approaches that provide fault diagnosis with component-value variations below ±50%. A new single and multiple fault diagnosis technique for soft faults in analog electronic circuits using a fuzzy classifier is proposed in this paper. The technique uses the simulation-before-test (SBT) approach, analyzing the frequency response of the analog circuit under faulty and fault-free conditions. Three signature parameters of the frequency response (the peak gain, and the frequency and phase associated with the peak gain) are extracted such that they give unique values for the faulty and fault-free configurations of the circuit. Single and double fault models with component variations from ±10% to ±50% are considered. Along with classifying the faults, the fuzzy classifier gives the estimated component value under faulty and fault-free conditions. The proposed method is validated using simulated data and real-time data for a benchmark analog circuit, and a comparative analysis of both validations is presented. PMID:23849881
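
    As a rough illustration of the signature-extraction step described above, the sketch below pulls the three named parameters (peak gain, frequency at the peak, and phase at the peak) from the simulated frequency response of a series RLC stage; the circuit, component values, and the +50% resistor drift are invented stand-ins, and the fuzzy classification step itself is not shown.

```python
# Sketch of the signature-extraction step: peak gain, the frequency at which it
# occurs, and the phase at that frequency are pulled from a simulated frequency
# response. The series-RLC stage and component values are illustrative
# stand-ins, not the benchmark circuit of the paper.
import numpy as np

def rlc_lowpass(r: float, l: float, c: float, freqs_hz: np.ndarray) -> np.ndarray:
    """Complex response H(jw) = 1 / (1 - w^2*L*C + j*w*R*C) (output across C)."""
    w = 2.0 * np.pi * freqs_hz
    return 1.0 / (1.0 - (w ** 2) * l * c + 1j * w * r * c)

def signature(freqs_hz: np.ndarray, h: np.ndarray) -> tuple:
    """(peak gain [dB], frequency at peak [Hz], phase at peak [deg])."""
    gain_db = 20.0 * np.log10(np.abs(h))
    k = int(np.argmax(gain_db))
    return (round(float(gain_db[k]), 2),
            round(float(freqs_hz[k]), 1),
            round(float(np.degrees(np.angle(h[k]))), 1))

freqs = np.logspace(1, 6, 2000)                                  # 10 Hz .. 1 MHz
print("fault-free:", signature(freqs, rlc_lowpass(50.0, 10e-3, 100e-9, freqs)))
print("R +50%    :", signature(freqs, rlc_lowpass(75.0, 10e-3, 100e-9, freqs)))
```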

  3. Removal of micropollutants with coarse-ground activated carbon for enhanced separation with hydrocyclone classifiers.

    Science.gov (United States)

    Otto, N; Platz, S; Fink, T; Wutscherk, M; Menzel, U

    2016-01-01

    One key technology for eliminating organic micropollutants (OMP) from wastewater effluent is adsorption on powdered activated carbon (PAC). To avoid a discharge of highly loaded PAC particles into natural water bodies, a separation stage has to be implemented; commonly, large settling tanks and flocculation filters with the application of coagulants and flocculation aids are used. In this study, a multi-hydrocyclone classifier with a downstream cloth filter was investigated in a pilot plant as a space-saving alternative with no need for dosing of chemical additives. To improve the separation, a coarser-ground PAC type was compared to a standard PAC type with regard to both OMP elimination and separation performance. With a PAC dosing rate of 20 mg/L, an average of 64.7 wt% of the standard PAC and 79.5 wt% of the coarse-ground PAC could be separated in the hydrocyclone classifier. A total average separation efficiency of 93-97 wt% could be reached with the combination of hydrocyclone classifier and cloth filter. Nonetheless, the OMP elimination of the coarse-ground PAC was not sufficient to compete with the standard PAC. Further research and development is necessary to find applicable coarse-grained PAC types with adequate OMP elimination capabilities. PMID:27232411
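
    A back-of-envelope check of how the reported stage efficiencies could combine is sketched below, assuming the hydrocyclone and cloth filter act as independent stages in series (an assumption the abstract does not state); the cloth-filter efficiencies are assumed values, not measurements from the study.

```python
# Series-stage combination check (assuming independent stages): the cloth
# filter only sees PAC that escaped the hydrocyclone, so the overall capture is
# 1 - (1 - e_cyclone) * (1 - e_filter). The filter efficiencies below are
# assumed, chosen only to show that the reported 93-97 wt% range is plausible.
def overall_capture(e_cyclone: float, e_filter: float) -> float:
    return 1.0 - (1.0 - e_cyclone) * (1.0 - e_filter)

for e_filter in (0.65, 0.80, 0.90):
    std = overall_capture(0.647, e_filter)      # standard PAC: 64.7 wt% in cyclone
    coarse = overall_capture(0.795, e_filter)   # coarse-ground PAC: 79.5 wt%
    print(f"assumed filter capture {e_filter:.0%}: "
          f"standard {std:.1%}, coarse-ground {coarse:.1%}")
```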

  4. Electron beam removal of gaseous organic pollutants

    International Nuclear Information System (INIS)

    This paper briefly reviews the VOC treatment by electron beam technology carried out at the Institute of Nuclear Chemistry and Technology. VOC destruction was studied from experimental scale to pilot scale. The organic compounds studied included chlorinated aliphatic compounds, and aromatic and polynuclear aromatic hydrocarbons (PAHs). The possibility of VOC destruction by EB technology is also analyzed theoretically in this paper. (author)

  5. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Science.gov (United States)

    Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen

    2014-05-01

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction, combining feature selection with feature extraction, was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, further reducing the dimension of the feature space to 12. Classification of the Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than those of LDA and BP-ANN. Finally, classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.

  6. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-05-15

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction, combining feature selection with feature extraction, was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, further reducing the dimension of the feature space to 12. Classification of the Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than those of LDA and BP-ANN. Finally, classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.

  7. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    International Nuclear Information System (INIS)

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction, combining feature selection with feature extraction, was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, further reducing the dimension of the feature space to 12. Classification of the Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than those of LDA and BP-ANN. Finally, classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.
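
    The sketch below illustrates the two-stage reduction pipeline described in the three records above, on synthetic data: mutual-information ranking stands in for the paper's eight information-theoretic selectors, scikit-learn's KernelPCA stands in for kernel entropy component analysis (which scikit-learn does not provide), and only LDA and an MLP (as a BP-ANN stand-in) are compared, since the proposed multi-linear classifier is not reproduced here.

```python
# Two-stage feature reduction (select 41 features, then project to 12
# components) followed by classification, on synthetic e-nose-like data.
# The selectors, the KECA stand-in (KernelPCA) and the classifiers are
# illustrative substitutes, not the paper's exact methods.
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=120, n_informative=30,
                           n_classes=4, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("BP-ANN (MLP)", MLPClassifier(hidden_layer_sizes=(32,),
                                                 max_iter=2000, random_state=0))]:
    pipe = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=41),       # step 1: keep 41 features
        KernelPCA(n_components=12, kernel="rbf"),     # step 2: reduce to 12 components
        clf,
    )
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```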

  8. VOC removal by microwave, electron beam and catalyst technique

    International Nuclear Information System (INIS)

    A hybrid technique developed for VOC removal, combining microwave (MW) treatment, electron beam (EB) irradiation and a catalyst, is presented. Two hybrid laboratory installations, developed for the study of air pollution control by combined EB irradiation, MW irradiation and catalysis, are described. Air loaded with toluene was treated at different MW power levels, water contents, flow rates and irradiation modes, with MW and EB applied separately and in combination. The simultaneous EB and MW irradiation method was also applied to SO2 and NOx removal. Real synergy effects between EB-induced non-thermal plasma (NTP), MW-induced NTP and catalysis can be observed.

  9. High-Energy Electron Beam Application to Air Pollutants Removal

    International Nuclear Information System (INIS)

    The advantage of the electron beam (EB) process for pollutant removal lies in its high efficiency in transferring large amounts of energy directly into the matter under treatment. Its main disadvantage, the high investment cost of the accelerator, may be effectively overcome in the future as a result of new accelerator developments. The potential use of medium- to high-energy, high-power EB accelerators for air pollutant removal is demonstrated in [1]. The lower electrical efficiencies of accelerators with higher energies are partially compensated by the lower electron energy losses in the beam windows. In addition, accelerators with higher electron energies can provide higher beam powers with lower beam currents [1]. The total EB energy losses (backscattering, windows and the intervening air space) are substantially lower at higher EB incident energy: the useful EB energy is under 50% at 0.5 MeV and about 95% above 3 MeV. In view of these arguments we decided to study the application of high-energy EB for air pollutant removal. Two electron beam accelerators are available for our studies: the electron linear accelerators ALIN-10 and ALID-7, built in the Electron Accelerator Laboratory, INFLPR, Bucharest, Romania. Both accelerators are of the traveling-wave type, operating at a wavelength of 10 cm. They utilize tunable S-band magnetrons, EEV M 5125 type, delivering 2 MW of power in 4 μs pulses. The accelerating structure is a disk-loaded tube operating in the π/2 mode. The optimum values of the EB peak current IEB and EB energy EEB to produce the maximum output power PEB for a fixed pulse duration τEB and repetition frequency fEB are as follows: for ALIN-10, EEB = 6.23 MeV, IEB = 75 mA and PEB = 164 W (fEB = 100 Hz, τEB = 3.5 μs); for ALID-7, EEB = 5.5 MeV, IEB = 130 mA and PEB = 670 W (fEB = 250 Hz, τEB = 3.75 μs). This paper presents a specially designed installation, named SDI-1, and several representative results obtained by applying high-energy EB to SO2, NOx and VOCs.

  10. Terra MODIS Band 27 Electronic Crosstalk Effect and Its Removal

    Science.gov (United States)

    Sun, Junqiang; Xiong, Xiaoxiong; Madhavan, Sriharsha; Wenny, Brian

    2012-01-01

    The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the NASA Earth Observing System (EOS). The first MODIS instrument was launched in December, 1999 on-board the Terra spacecraft. MODIS has 36 bands, covering a wavelength range from 0.4 micron to 14.4 micron. MODIS band 27 (6.72 micron) is a water vapor band, which is designed to be insensitive to Earth surface features. In recent Earth View (EV) images of Terra band 27, surface feature contamination is clearly seen and striping has become very pronounced. In this paper, it is shown that band 27 is impacted by electronic crosstalk from bands 28-30. An algorithm using a linear approximation is developed to correct the crosstalk effect. The crosstalk coefficients are derived from Terra MODIS lunar observations. They show that the crosstalk is strongly detector dependent and the crosstalk pattern has changed dramatically since launch. The crosstalk contributions are positive to the instrument response of band 27 early in the mission but became negative and much larger in magnitude at later stages of the mission for most detectors of the band. The algorithm is applied to both Black Body (BB) calibration and MODIS L1B products. With the crosstalk effect removed, the calibration coefficients of Terra MODIS band 27 derived from the BB show that the detector differences become smaller. With the algorithm applied to MODIS L1B products, the Earth surface features are significantly removed and the striping is substantially reduced in the images of the band. The approach developed in this report for removal of the electronic crosstalk effect can be applied to other MODIS bands if similar crosstalk behaviors occur.
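
    A minimal sketch of a linear crosstalk correction of the kind described above follows; the array shapes and coefficient values are invented for illustration, whereas the real coefficients are derived from Terra MODIS lunar observations.

```python
# Minimal sketch of a linear crosstalk correction: the band-27 response of each
# detector is corrected by subtracting the signals of the sending bands (28-30)
# scaled by detector-dependent coefficients. Shapes and coefficient values are
# invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_detectors, n_frames = 10, 1354          # illustrative detector/frame counts

dn27 = rng.normal(1000.0, 50.0, size=(n_detectors, n_frames))      # contaminated band 27
dn_senders = {b: rng.normal(800.0, 40.0, size=(n_detectors, n_frames))
              for b in (28, 29, 30)}

# coeffs[b][d]: crosstalk coefficient of sending band b into detector d of band 27
coeffs = {b: rng.normal(0.0, 2e-3, size=n_detectors) for b in (28, 29, 30)}

def correct_band27(dn27: np.ndarray, dn_senders: dict, coeffs: dict) -> np.ndarray:
    """corrected[d, f] = dn27[d, f] - sum_b coeffs[b][d] * dn_b[d, f]."""
    corrected = dn27.copy()
    for b, dn_b in dn_senders.items():
        corrected -= coeffs[b][:, None] * dn_b
    return corrected

print(correct_band27(dn27, dn_senders, coeffs).shape)   # (10, 1354)
```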

  11. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteris...

  12. PERFORMANCE OF AN AIR CLASSIFIER TO REMOVE LIGHT ORGANIC CONTAMINATION FROM ALUMINUM RECOVERED FROM MUNICIPAL WASTE BY EDDY CURRENT SEPARATION. TEST NO. 5.03, RECOVERY 1, NEW ORLEANS

    Science.gov (United States)

    The report describes a test in which aluminum cans recovered from municipal waste, together with known amounts of contaminant, were processed by a 'zig-zag' vertical air classifier to remove aerodynamically light contaminant. Twelve test runs were conducted; the proportions of co...

  13. Classifying Microorganisms.

    Science.gov (United States)

    Baker, William P.; Leyva, Kathryn J.; Lang, Michael; Goodmanis, Ben

    2002-01-01

    Focuses on an activity in which students sample air at school and generate ideas about how to classify the microorganisms they observe. The results are used to compare air quality among schools via the Internet. Supports the development of scientific inquiry and technology skills. (DDR)

  14. Multiple-electron removal and molecular fragmentation of CO by fast F4+ impact

    International Nuclear Information System (INIS)

    Multiple-electron removal from and molecular fragmentation of carbon monoxide molecules caused by collisions with 1-MeV/amu F4+ ions were studied using the coincidence time-of-flight technique. In these collisions, multiple-electron removal of the target molecule is a dominant process. Cross sections for the different levels of ionization of the CO molecule during the collision were determined. The relative cross sections of ionization decrease with increasing number of electrons removed in a similar way as seen in atomic targets. This behavior is in agreement with a two-step mechanism, where first the molecule is ionized by a Franck-Condon ionization and then the molecular ion dissociates. Most of the highly charged intermediate states of the molecule dissociate rapidly. Only CO+ and CO2+ molecular ions have been seen to survive long enough to be detected as molecular ions. The relative cross sections for the different breakup channels were evaluated for collisions in which the molecule broke into two charged fragments as well as for collisions where only a single charged molecular ion or fragment was produced. The average charge state of each fragment resulting from the CO(Q+) → C(i+) + O(j+) breakup increases with the number of electrons removed from the molecule, approximately following the relationship ⟨i⟩ = ⟨j⟩ = Q/2 as long as K-shell electrons are not removed. This does not mean that the charge-state distribution is exactly symmetric: in general, removing electrons from the carbon fragment is slightly more likely than removing electrons from the oxygen, due to the difference in binding energy. The cross sections for molecular breakup into a charged fragment and a neutral fragment drop rapidly with an increasing number of electrons removed.

  15. Device for the removal of sulfur dioxide from exhaust gas by pulsed energization of free electrons

    International Nuclear Information System (INIS)

    The performance of a new device using pulsed streamer corona for the removal of sulfur dioxide from humid air has been evaluated. The pulsed streamer corona produces free electrons which enhance gas-phase chemical reactions and convert SO2 to sulfuric acid mist. The SO2 removal efficiency was compared with that of the electron-beam flue-gas treatment process, and the comparison demonstrates the advantage of the novel device.

  16. Evaluation of sustainable electron donors for nitrate removal in different water media.

    Science.gov (United States)

    Fowdar, Harsha S; Hatt, Belinda E; Breen, Peter; Cook, Perran L M; Deletic, Ana

    2015-11-15

    An external electron donor is usually included in wastewater and groundwater treatment systems to enhance nitrate removal through denitrification. The choice of electron donor is critical for both satisfactory denitrification rates and sustainable long-term performance. Electron donors that are waste products are preferred to pure organic chemicals. Different electron donors have been used to treat different water types and little is known as to whether there are any electron donors that are suitable for multiple applications. Seven different carbon rich waste products, including liquid and solid electron donors, were studied in comparison to pure acetate. Batch-scale tests were used to measure their ability to reduce nitrate concentrations in a pure nutrient solution, light greywater, secondary-treated wastewater and tertiary-treated wastewater. The tested electron donors removed oxidised nitrogen (NOx) at varying rates, ranging from 48 mg N/L/d (acetate) to 0.3 mg N/L/d (hardwood). The concentrations of transient nitrite accumulation also varied across the electron donors. The different water types had an influence on NOx removal rates, the extent of which was dependent on the type of electron donor. Overall, the highest rates were recorded in light greywater, followed by the pure nutrient solution and the two partially treated wastewaters. Cotton wool and rice hulls were found to be promising electron donors with good NOx removal rates, lower leachable nutrients and had the least variation in performance across water types. PMID:26379204

  17. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    . Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a...... corporations construing themselves as able and suitable to manage their emissions, and, additionally, given that the construction of carbon emissions has performative consequences, the underlying practices need to be declassified, i.e. opened for public scrutiny. Hence the paper concludes by arguing for a...

  18. Studies of toxic metals removal in industrial wastewater after electron-beam treatment

    International Nuclear Information System (INIS)

    The Advanced Oxidation Process using electron beams has been studied by the scientific community due to its capacity to mineralize toxic organic compounds through the formation of highly reactive radicals. The electron-beam treatment process has been adopted by several countries for organic compound removal and for the biological degradation of effluents and sewage. In this work, studies of metal removal in simulated aqueous solutions and in actual industrial effluents were carried out using electron-beam treatment. The effluent samples were collected at ETE/SABESP (Governmental Wastewater Treatment Plant) in Suzano, SP. Sampling was performed at three distinct sites: Industrial Receiver Unit, Medium Bar, and Final Effluent. The effluent samples were irradiated with different doses (20, 50, 100, 200 and 500 kGy). The removal behavior of the elements Ca, Cl, S, P, K, Al, Fe, As, Ni, Cr, Zn, Si, Co, Mn, Se, Cd, Hg and Pb was verified. The elements were determined by wavelength-dispersive X-ray fluorescence (WD-XRFS) using the Fundamental Parameters method and thin-film samples. The elements Fe, Zn, Cr and Co showed removals > 99% at a 200 kGy dose in the industrial effluent. At the same dose, P, Al and Si showed removals of 81.8%, 97.6% and 98.7%, respectively. Ca and S were removed by more than 80% at 20 kGy, while Na, Cl and K did not show any degree of removal. The removal of As, Se, Cd, Hg and Pb was studied in simulated aqueous solutions and industrial effluents with scavenger addition (EDTA and HCOONa). The elements As and Hg showed removals of 92% and 99%, respectively, with HCOONa at a 500 kGy dose. Se showed a 96.5% removal at the same dose without scavenger addition. The removal of Cd and Pb was not significant, since all of the assays were carried out in an oxidant medium. (author)

  19. Effect of cathode electron acceptors on simultaneous anaerobic sulfide and nitrate removal in microbial fuel cell.

    Science.gov (United States)

    Cai, Jing; Zheng, Ping; Mahmood, Qaisar

    2016-01-01

    The current investigation reports the effect of cathode electron acceptors on simultaneous sulfide and nitrate removal in two-chamber microbial fuel cells (MFCs). Potassium permanganate and potassium ferricyanide, two common cathode electron acceptors, were evaluated for substrate removal and electricity generation. The abiotic MFCs produced electricity through spontaneous electrochemical oxidation of sulfide. In comparison with the abiotic MFCs, the biotic MFC showed a better ability for simultaneous nitrate and sulfide removal along with electricity generation. At an external resistance of 1,000 Ω, both MFCs showed good capacities for substrate removal, with nitrogen and sulfate as the main end products. The steady voltage with potassium permanganate was nearly twice that with potassium ferricyanide. Cyclic voltammetry curves confirmed that potassium permanganate had higher catalytic activity than potassium ferricyanide. Potassium permanganate may therefore be a suitable cathode electron acceptor for enhanced electricity generation during simultaneous treatment of sulfide and nitrate in MFCs. PMID:26901739

  20. Characterization of phosphorus removal bacteria in (AO)2 SBR system by using different electron acceptors

    Institute of Scientific and Technical Information of China (English)

    JIANG Yi-feng; WANG Lin; YU Ying; WANG Bao-zhen; LIU Shuo; SHEN Zheng

    2007-01-01

    Characteristics of phosphorus removal bacteria were investigated using three different types of electron acceptors, and the positive role of nitrite in the phosphorus removal process was examined. An (AO)2 SBR (anaerobic-aerobic-anoxic-aerobic sequencing batch reactor) was employed to enrich denitrifying phosphorus removal bacteria for simultaneous phosphorus and nitrogen removal via anoxic phosphorus uptake. Ammonium oxidation was controlled at the first phase of the nitrification process. Nitrite-inhibition batch tests illustrated that nitrite was not an inhibitor of the phosphorus uptake process, but served as an alternative electron acceptor to nitrate and oxygen if its concentration was below the inhibition level of 40 mg NO2-N/L. This implied that, in addition to the two well-accepted groups of phosphorus removal bacteria (P1, which can only utilize oxygen as electron acceptor, and P2, which can use both oxygen and nitrate), a new group of phosphorus removal bacteria, P3, which can use oxygen, nitrate and nitrite as electron acceptors to take up phosphorus, was identified in the test system. To better characterize the (AO)2 SBR sludge, the relative populations of the different bacteria in this system and in another A/O SBR sludge (the seed sludge) were estimated by phosphorus uptake batch tests with oxygen, nitrate or nitrite as electron acceptor. The results demonstrated that the phosphorus removal capability of the (AO)2 SBR sludge degraded slightly after the A/O sludge was cultivated in the (AO)2 mode over a long period of time. However, the relative populations of the three types of bacteria showed that denitrifying phosphorus removal bacteria (P2 and P3) were significantly enriched, implying that aeration energy and COD consumption could in theory be reduced.

  1. Advanced heat removal system with porous media for electronic devices

    Energy Technology Data Exchange (ETDEWEB)

    Mahalle, A.M. [Sant Gadge Baba Amravati Univ., Amravati (India). Dept. of Mechanical Engineering; Jajoo, B.N. [Sant Gadge Baba Amravati Univ., Amravati (India). College of Engineering and Technology

    2007-07-01

    High porosity metal foams are primarily utilized in aerospace applications, although their use has widened to include cooling in electronic packaging. They provide high heat dissipation, and other important applications have been found that take advantage of the thermal properties of metal foam, including compact heat exchangers for airborne equipment, regenerative and dissipative air-cooled condenser towers, and compact heat sinks for power electronics. Metal foam heat exchangers are efficient, compact and lightweight because of their low relative density, open porosity and the high thermal conductivity of the cell edges, as well as the large accessible surface area per unit volume and the ability to mix the cooling fluid. This paper presents the results of an investigation whose purpose was to demonstrate that metal foam is a high heat dissipater, using different heat inputs. The paper discusses the experimental methodology and describes the metal foam sample used in the experiment. The heat transfer coefficient increased as the velocity increased, and the Reynolds and Nusselt numbers increased with increasing velocity. It was concluded that heat transfer from the foam was primarily governed by the total heat transfer area of the foam rather than by its thermal conductivity. 16 refs., 14 figs.
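
    For reference, the dimensionless groups mentioned above are simply Re = ρvL/μ and Nu = hL/k; the short sketch below evaluates them for assumed air properties, an assumed characteristic length and a hypothetical h(v) trend, none of which are taken from the experiment.

```python
# Definitional check of the Reynolds and Nusselt numbers; the air properties,
# characteristic length L and the (v, h) pairs are hypothetical illustrations,
# not data from the experiment.
rho, mu, k_air = 1.18, 1.85e-5, 0.026    # air near 25 C: kg/m^3, Pa*s, W/(m*K)
L = 0.02                                  # assumed characteristic length, m

def reynolds(v_m_s: float) -> float:
    """Re = rho * v * L / mu."""
    return rho * v_m_s * L / mu

def nusselt(h_w_m2k: float) -> float:
    """Nu = h * L / k."""
    return h_w_m2k * L / k_air

for v, h in [(1.0, 150.0), (3.0, 320.0), (5.0, 450.0)]:   # hypothetical h(v) trend
    print(f"v = {v:.0f} m/s -> Re = {reynolds(v):.0f}, Nu = {nusselt(h):.0f}")
```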

  2. VOC removal by simultaneous electron beam and biofilter application

    International Nuclear Information System (INIS)

    In recent years, stringent legislation and public environmental awareness have forced many companies to reduce process and ventilation gas emissions consisting of volatile organic compounds (VOCs), which are noxious to the environment. There are several different methods for VOC control. In this research project we focus on combining two novel and promising methods: electron beam treatment and biofiltration. Each of them is sufficient alone in many cases, but we expect additional advantages from their combination. Preliminary test series (with phenol and styrene as test VOCs) have been carried out to confirm that the by-products of electron beam irradiation are suitable for biofiltration and are not toxic to the organisms of a biofilter. The test series involved non-irradiated reference samples and irradiated samples containing wet and dry air; dry air and VOC; and wet air and phenol. After e-beam irradiation with a mean dose of 13.5 kGy, gas samples were collected in TENAX-TA sampling tubes and later analyzed with a gas chromatograph coupled to a mass-selective detector. The results show that the decomposition products in the gas are mostly aldehydes, with a few esters, ketones and aromatic compounds, which are known to be biodegradable.

  3. USING LOOP THERMOSYPHON TO WASTE HEAT REMOVAL FROM POWER ELECTRONIC COMPONENT

    OpenAIRE

    Nemec, Patrik; Malcho, Milan; Smitka, Martin

    2015-01-01

    A loop thermosyphon is a simple and reliable device providing several times higher heat transfer than conventional coolers used in electronics cooling. The paper deals with the cooling of a power electronic component by means of this device. The main objective of the paper is the design and construction of a device to remove heat from the electronic component. The paper describes the operating principle of the loop thermosyphon, functional testing and measurement of the cooling efficiency in dependence on...

  4. Removal of NOx by pulsed, intense relativistic electron beam in distant gas chamber

    International Nuclear Information System (INIS)

    Removal of NOx has been studied using a pulsed, intense relativistic electron beam (IREB). The dependence of the NOx concentration and the NOx removal efficiency on the number of IREB shots has been investigated in a distant gas chamber spatially isolated from the electron beam source. The distant gas chamber is filled with a dry-air-balanced NO gas mixture at a pressure of 270 kPa and is irradiated by the IREB (2 MeV, 30 A, 35 ns) after it passes through 1.6 m of atmosphere. With an initial NO concentration of 88 ppm, ∼70% of the NOx is successfully removed by firing 10 shots of the IREB. The NOx removal efficiency has been found to be 50-155 g/kWh.

  5. A study on the removal of color in dyeing wastewater using electron beam irradiation

    International Nuclear Information System (INIS)

    In this research, electron beam irradiation experiments were carried out on wastewater from different types of dyeing industry, and on reactive, acid and disperse dyes, which are widely used commercially in industrial dyeing processes. At an electron beam dose of 2.34 kGy, the color removal efficiency was higher than that of conventional chemical treatment for the reactive and acid dyes. Wastewater from the printing dye industry showed the highest color values among wastewaters from the different dyeing industries (polyester, cotton T/C, printing, yarn dyeing, and nylon). Electron beam irradiation tests were performed on the wastewater from these industries: color removal rates by electron beam irradiation were higher than those by conventional chemical treatment for wastewater from the cotton T/C and yarn dyeing industries, whose disperse dye contents are low. EA (electron beam irradiation + activated sludge) and CA (chemical treatment + activated sludge) processes were tested for removing color and organic substances from the wastewaters of the different dyeing industries. The EA process gave better color removal for wastewater from the cotton T/C and yarn dyeing industries, whereas the CA process gave better color removal for wastewater from the polyester, printing, and nylon dyeing industries. The CA process was superior to the EA process in CODMn removal for the wastewaters from all of the dyeing industries. However, both CA and EA processes achieved less than 80 mg/L of BOD5, which is the legal effluent guideline. (author)

  6. Influence of persulfate ions on the removal of phenol in aqueous solution using electron beam irradiation

    International Nuclear Information System (INIS)

    The removal of phenol (C0 = 100 μM) during electron beam irradiation was studied in pure water and in the presence of HCO3(-) and Br(-) ions. It was found that the introduction of S2O8(2-) ions (1 mM), by generating SO4(-)· radicals, increases the radiation yield of phenol removal: 90% removal of phenol was obtained at radiation doses of 600 and 1200 Gy with and without S2O8(2-) ions, respectively. This system also led to lower oxygen consumption, with lower concentrations of catechol and hydroquinone found in the solution. HCO3(-) and Br(-) have an inhibiting effect both in the presence and in the absence of S2O8(2-). In most cases, the introduction of S2O8(2-) ions into the water radiolysis system can advantageously increase the yield of organic compound removal by oxidation.

  7. Experimental study of electron beam induced removal of H2S from geothermal fluids

    Energy Technology Data Exchange (ETDEWEB)

    Helfritch, D.J.; Singhvi, R.; Evans, R.D.; Reynolds, W.E.

    1983-09-01

    The treatment of geothermal steam by electron beam irradiation is a potential alternative method of H2S removal which can be applied upstream or downstream and has no chemical requirements. The experimental work reported here examines the effectiveness of electron beam treatment of geothermal fluids. These fluids are produced by combining the constituents in a heated cell, which contains an electron beam transparent window. Irradiation of the contents and subsequent chemical analysis allows an evaluation of effectiveness. These results are used for a commercial feasibility assessment.

  8. Pilot plant for electron beam SO2 and NOx removal from combustion flue gases

    International Nuclear Information System (INIS)

    A Polish pilot plant for electron beam flue gas treatment was built at the Kaweczyn electro-power station. The flue gas flow capacity is 20000 Nm3/h. The applied technology allows simultaneous removal of SO2 and NOx. The process is dry, and the by-product can be used as fertilizer. The construction of the pilot plant is described in the report. Preliminary results of the investigations proved high efficiency of acidic pollutant removal from flue gases. (author). 23 refs, 6 tabs, 24 ills

  9. Removal of iopromide and degradation characteristics in electron beam irradiation process

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Minhwan; Yoon, Yeojoon; Cho, Eunha; Jung, Youmi [Department of Environmental Engineering (YIEST), Yonsei University, 234 Maeji, Heungup, Wonju 220-710 (Korea, Republic of); Lee, Byung-Cheol [Quantum Optics Laboratory, Korea Atomic Energy Research Institute, 1045, Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Paeng, Ki-Jung [Department of Chemistry, Yonsei University, 234 Maeji, Heungup, Wonju 220-710 (Korea, Republic of); Kang, Joon-Wun, E-mail: jwk@yonsei.ac.kr [Department of Environmental Engineering (YIEST), Yonsei University, 234 Maeji, Heungup, Wonju 220-710 (Korea, Republic of)

    2012-08-15

    Highlights: ► Second-order kinetics fitted the overall removal tendency of iopromide. ► In the electron beam/H2O2 process, an enhanced removal rate of iopromide was observed. ► The iopromide removal rate increased in the presence of OH· scavengers. ► Mineralization occurred mainly under the electron beam/H2O2 condition. ► The eaq- mainly attacks the iodo-group, whereas OH· reacts non-selectively. - Abstract: The aim of this study is to evaluate the removal efficiency of iopromide using electron beam (E-beam) irradiation technology, and its degradation characteristics with the hydroxyl radical (OH·) and the hydrated electron (eaq-). Studies were conducted with different initial concentrations of iopromide in pure water and in the presence of hydrogen peroxide, bicarbonate ion, or sulfite ion. An E-beam absorbed dose of 19.6 kGy was required to achieve 90% degradation of 100 μM iopromide, and the E-beam/H2O2 system increased the removal efficiency through additional OH· generation. In the presence of an OH· scavenger (10 mM sulfite ion), the required dose for 90% removal of 100 μM iopromide was only 0.9 kGy. This greatly enhanced removal in the presence of OH· scavengers was rather unexpected and unlike the results obtained from most advanced oxidation process (AOP) experiments. The enhancement can be explained by a kinetic study using the bimolecular rate constants of each reacting species. To explore the reaction scheme of iopromide with OH· or eaq-, and the percentage of mineralization for the two reaction paths, the total organic carbon (TOC), released iodide, and intermediates were analyzed.

  10. Power beaming, orbital debris removal, and other space applications of a ground based free electron laser

    OpenAIRE

    Wilder, Benjamin A.

    2010-01-01

    When compared to other laser types, the Free Electron Laser (FEL) provides optimal beam quality for successful atmospheric propagation. Assuming the development and deployment of a mega-watt (MW) class, ground or sea based FEL, this thesis investigates several proposed space applications including power beaming to satellites, the removal of orbital debris, laser illumination of objects within the solar system for scientific study, and interstellar laser illumination for communications. Po...

  11. Energy consumption of SO2 removal from humid air under electron beam and electric field influence

    International Nuclear Information System (INIS)

    The kinetics of SO2 oxidation in humid air under the influence of an electron beam and an electric field were investigated by computer simulation in steady-state and pulsed modes. The SO2 oxidation process is driven by radical and ion reactions. The calculation model included 46 different species and 160 chemical reactions. A gas mixture containing 1000 ppm SO2 was investigated at a temperature of T = 67 deg. C and a pressure of p = 1 atm. The water content was within the range 2-12%. The electron beam parameters were as follows: average beam current density 0.0032-3.2 mA/cm2, pulse duration 400 μs, repetition rate 50 Hz. The reduced electric field was E/n = 10^-15 V·cm2. The electric field pulse duration was varied within the range 5×10^-7 to 10^-5 s. The influence of the parameters of synchronized electron beam and electric field pulses on the energy deposition was considered. The energy cost of 90% SO2 removal was estimated in steady-state and pulsed modes. It was found that the total electron beam and electric field energy losses in pulsed mode are 6 times lower than under steady-state conditions. The optimum electric field pulse duration, in terms of the minimum energy cost of SO2 removal, was found for different electron beam pulse current levels.
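
    The sketch below gives a toy flavor of this kind of kinetic simulation: a deliberately reduced two-reaction scheme (a constant radical source plus SO2 + OH) integrated in time. The rate constants, the OH source term and the initial number density are placeholders, not values from the paper's 46-species, 160-reaction model.

```python
# Toy kinetic integration in the spirit of the simulation described above: a
# hypothetical reduced scheme with a constant OH source (standing in for
# e-beam/field energy deposition), a first-order OH sink, and SO2 + OH -> HOSO2.
# All constants below are placeholders, not values from the paper.
from scipy.integrate import solve_ivp

K_SO2_OH = 1.0e-12   # cm^3 molecule^-1 s^-1 (placeholder)
S_OH = 1.0e16        # OH production rate, molecule cm^-3 s^-1 (placeholder)
K_LOSS_OH = 1.0e2    # first-order OH loss to other sinks, s^-1 (placeholder)
SO2_0 = 2.2e16       # roughly 1000 ppm at 1 atm and 340 K, molecule cm^-3

def rhs(t, y):
    so2, oh = y
    r = K_SO2_OH * so2 * oh          # SO2 + OH -> HOSO2 (-> H2SO4 downstream)
    return [-r, S_OH - r - K_LOSS_OH * oh]

sol = solve_ivp(rhs, (0.0, 1.0), [SO2_0, 0.0], method="LSODA", rtol=1e-6, atol=1.0)
print(f"SO2 removed after 1 s: {1.0 - sol.y[0, -1] / SO2_0:.1%}")
```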

  12. Comparison of single-electron removal processes in collisions of electrons, positrons, protons, and antiprotons with hydrogen and helium

    International Nuclear Information System (INIS)

    We present and compare total cross sections for single-electron removal in collisions of electrons, positrons, protons, and antiprotons with atomic hydrogen and helium. These cross sections have been calculated using the classical trajectory Monte Carlo technique in the velocity range of 0.5-7.0 a.u. (6.25-1224 keV/u). The cross sections are compared at equal collision velocities and exhibit differences arising from variations in the mass and the sign of the charge of the projectile. At low and intermediate velocities these differences are large in both the ionization and charge-transfer channels. At high velocities the single-ionization cross sections for these singly charged particles become equal. However, the differences in the single-charge-transfer cross sections for positron and proton impact persist to very large velocities. We extend our previous work [Phys. Rev. A 38, 1866 (1988)] to explain these mass and charge-sign effects in single-electron-removal collisions.

  13. A Classifier Ensemble of Binary Classifier Ensembles

    Directory of Open Access Journals (Sweden)

    Sajad Parvin

    2011-09-01

    Full Text Available This paper proposes a combinational algorithm to improve classification performance in multiclass domains. Because a more accurate classifier generally means better classification, researchers have tended to concentrate on improving the accuracy of individual classifiers. However, choosing the single best classifier is not always the best way to obtain the best classification quality. An alternative is to use many inaccurate or weak classifiers, each specialized for a sub-space of the problem space, and to take their consensus vote as the final decision. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning. It deals especially with multiclass problems, where the aim is to learn the boundaries of each class against many other classes. Based on the structure of multiclass problems, classifiers can be divided into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another one; because pairwise classifiers are trained only to discriminate between two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus classifiers on the erroneous regions of the problem space and to use the pairwise classification concept instead of the multiclass one. Although using pairwise classification instead of multiclass classification is not new, the proposed pairwise classifier ensemble has a much lower order. First, the most confused classes are determined, and then several ensembles of classifiers are created for them. The classifiers of each of these ensembles work jointly using weighted majority votes. The results of these ensembles
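
    The sketch below illustrates the general idea on a standard digits dataset: the most-confused class pairs of a baseline multiclass model are found from a cross-validated confusion matrix, one binary specialist is trained per confused pair, and the specialists vote alongside the baseline. The base learners and the simple 2:1 vote weighting are illustrative stand-ins, not the paper's exact algorithm.

```python
# Pairwise-specialist ensemble sketch: train binary classifiers only for the
# most-confused class pairs of a baseline model and let them vote with the
# baseline. Learners and vote weights are illustrative choices.
import numpy as np
from collections import Counter
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# baseline multiclass model and its cross-validated confusion matrix
base = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
cm = confusion_matrix(y_tr, cross_val_predict(LogisticRegression(max_iter=5000),
                                              X_tr, y_tr, cv=5))
np.fill_diagonal(cm, 0)

# pick the three most-confused (unordered) class pairs
pair_errors = Counter()
for i, j in zip(*np.nonzero(cm)):
    pair_errors[tuple(sorted((int(i), int(j))))] += int(cm[i, j])
confused_pairs = [p for p, _ in pair_errors.most_common(3)]

# one binary specialist per confused pair, trained only on those two classes
specialists = {}
for a, b in confused_pairs:
    mask = np.isin(y_tr, (a, b))
    specialists[(a, b)] = LogisticRegression(max_iter=5000).fit(X_tr[mask], y_tr[mask])

def predict_one(x_row):
    pred = int(base.predict([x_row])[0])
    votes = Counter({pred: 1})
    for (a, b), clf in specialists.items():
        if pred in (a, b):                              # consult relevant specialists only
            votes[int(clf.predict([x_row])[0])] += 2    # give specialists a heavier say
    return votes.most_common(1)[0][0]

y_hat = np.array([predict_one(x) for x in X_te])
print("baseline accuracy:", round(accuracy_score(y_te, base.predict(X_te)), 3))
print("ensemble accuracy:", round(accuracy_score(y_te, y_hat), 3))
```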

  14. Velocity dependence of CO and CH4 electron removal and fragmentation caused by fast proton impact

    International Nuclear Information System (INIS)

    Cross sections of the breakup channels of CO and CH4, caused by 1-14 MeV proton impact, have been measured. The total cross sections for single to triple electron removal are in reasonable agreement with SCA calculations. The production cross sections for CO+ and CH4+ ions are in good agreement with electron impact ionization at the same high velocities, where proton and electron impact are expected to be the same because the first Born approximation is valid. At lower velocities the proton impact cross sections are in general higher than the electron impact data. The ion-neutral breakup channels show similar trends, but ion-pair channels have a different velocity dependence. The effect of the charge sign and the projectile mass on the fragmentation of doubly ionized molecules (ion pairs) needs further study, because electron-impact data on ion-pair production are scarce.

  15. CLASSIFIER IN BODO

    OpenAIRE

    Pratima Brahma

    2014-01-01

    The present paper investigates classifiers in Bodo. In Bodo, classifiers function as specific determiners of the physical shape or size, quantity and quality of the noun. Classifiers in Bodo are predominantly monosyllabic. They occur with numerals, and the classifier precedes the numeral. The monosyllabic structure may be a single verb, or a simple verb and a noun; it functions as a classifier when a numeral is suffixed. In Bodo, classifiers can occur before and after in no...

  16. Positive role of nitrite as electron acceptor on anoxic denitrifying phosphorus removal process

    Institute of Scientific and Technical Information of China (English)

    HUANG RongXin; LI Dong; LI XiangKun; BAO LinLin; JIANG AnXi; ZHANG Jie

    2007-01-01

    The literature has revealed that nitrite as an electron acceptor can be inhibitory or toxic in the denitrifying phosphorus removal process. Batch tests were used to investigate this inhibitory effect under anoxic conditions. The inoculated activated sludge was taken from a continuous double-sludge denitrifying phosphorus and nitrogen removal system. Nitrite was added at the anoxic stage, using either one-time injection or sequencing batch injection in the denitrifying phosphorus removal procedure. The results indicated that nitrite concentrations higher than 30 mg/L severely inhibited anoxic phosphate uptake, and that the threshold inhibitory concentration depends on the characteristics of the activated sludge and the operating conditions; concentrations below the inhibitory level, in contrast, were not detrimental to anoxic phosphorus uptake, and nitrite then acted as a good electron acceptor for anoxic phosphate accumulation. Positive effects were observed throughout the denitrifying biological phosphorus removal process. The use of nitrite as a good electron acceptor may provide a new and feasible route for the denitrifying phosphorus removal process.

  17. Experimental facility for investigation of gaseous pollutants removal process stimulated by electron beam and microwave energy

    International Nuclear Information System (INIS)

    A laboratory unit for the investigation of toxic gas removal from flue gases, based on an ILU 6 accelerator, has been built at the Institute of Nuclear Chemistry and Technology. The installation was provided with independent pulsed and continuous wave (c.w.) microwave generators to create an electrical discharge, and another pulsed microwave generator for plasma diagnostics. This allows investigation of a combined removal process based on the simultaneous use of the electron beam and streams of microwave energy in one reaction vessel. Two heating furnaces, each a water-tube boiler with 100 kW thermal power, were applied for the production of combustion gas at flow rates of 5-400 Nm3/h. The proper composition of the flue gas was obtained by introducing components such as SO2, NO and NH3 into the gas stream. The installation consists of: an inlet system (two boilers - house heating furnace, boiler pressure regulator, SO2, NO and NH3 dosage system, analytical equipment); a reaction vessel where the electron beam from the ILU 6 accelerator and microwave streams from the pulsed and c.w. generators can be introduced simultaneously or separately and the plasma-diagnostic pulsed microwave stream can be applied; and an outlet system (retention chamber, filtration unit, fan, off-take duct of gas, analytical equipment). The experiments have demonstrated that it is possible to investigate the removal process in the presence of NH3 by separate or simultaneous application of the electron beam and of microwave energy streams under stable experimental conditions. (author). 15 refs, 26 figs, 5 tabs

  18. Removal of brominated flame retardant from electrical and electronic waste plastic by solvothermal technique

    International Nuclear Information System (INIS)

    Highlights: ► A process for the removal of brominated flame retardants (BFRs) from plastic was established. ► The plastic became bromine-free with its structure maintained after this treatment. ► BFRs transferred into the alcohol solvent were easily debrominated by metallic copper. - Abstract: Brominated flame retardants (BFRs) in electrical and electronic (E and E) waste plastic are toxic, bioaccumulative and recalcitrant. In the present study, tetrabromobisphenol A (TBBPA) contained in this type of plastic was subjected to solvothermal treatment so as to obtain bromine-free plastic. Methanol, ethanol and isopropanol were examined as solvents for the solvothermal treatment, and methanol was found to be the optimal solvent for TBBPA removal. The optimum temperature, time and liquid-to-solid ratio for solvothermal removal of TBBPA were 90 °C, 2 h and 15:1, respectively. After treatment with the various alcohol solvents, TBBPA was found to be transferred into the solvents, and the bromine in the extract was debrominated, catalyzed by metallic copper. Bisphenol A and cuprous bromide were the main products after debromination. The morphology and FTIR properties of the plastic were generally unchanged after the solvothermal treatment, indicating that the structure of the plastic was maintained throughout the process. This work provides a clean and applicable process for the disposal of BFR-containing plastic.

  19. Electron beam technology development for volatile organic compounds removal from flue gases

    International Nuclear Information System (INIS)

    In Poland, the first legal act setting emission limit values for some VOCs from stationary sources came into force in 1990. Following European Union enlargement, existing EU regulations in this area must now be followed as a priority. Emission limit values for new and existing plants are set in EU directives, and new legal acts, e.g. for polycyclic aromatic hydrocarbons, are at the legislative stage and will be introduced soon. Emission reduction may be achieved by primary control at the technology design stage (mainly for new installations) or by secondary control through the application of flue gas treatment technology (existing plants). Purification technologies dedicated to VOC treatment are available on the market, but the new, stricter regulations have driven the investigation and development of more effective methods. Electron beam (EB) treatment seems to be one promising technology for VOC removal from flue gas. EB treatment for flue gas purification, invented in Japan in the 1970s, has been applied successfully at industrial scale in China for SO2 removal and in Poland for simultaneous SO2 and NOx removal. Since the 1990s, the EB process has been investigated as a potential process for VOC removal, based on oxidation by active species such as hydroxyl (OH) radicals, O atoms and O3 produced in the irradiated gas. As described in this report, aromatic hydrocarbons react with OH radicals to produce hydroxyhexadienyl radicals, which are transformed in a chain process into a mixture of oxidized aromatic and aliphatic hydrocarbons (alcohols, aldehydes, acids, etc.) and finally into inorganic CO, CO2 and H2O. Experimental work carried out in a pilot plant on polycyclic aromatic hydrocarbons (PAHs) has given positive results for EB treatment as a multicomponent technology removing SO2, NOx and VOCs (PAHs) simultaneously in a one-stage process. (author)

  20. Removal of diclofenac from surface water by electron beam irradiation combined with a biological aerated filter

    Science.gov (United States)

    He, Shijun; Wang, Jianlong; Ye, Longfei; Zhang, Youxue; Yu, Jiang

    2014-12-01

    The degradation of diclofenac (DCF) was investigated in aqueous solution using electron beam (EB) technology. When the initial concentration was between 10 and 40 mg/L, almost 100% of the DCF was degraded at a dose of 0.5 kGy. However, only about 6.5% of the DCF was mineralized even at 2 kGy, according to total organic carbon (TOC) measurements. A combined process of EB and a biological aerated filter (BAF) was therefore developed to enhance the treatment of DCF-contaminated surface water. The effluent quality of the combined process was substantially improved by EB pretreatment due to the degradation of DCF and related intermediates. Both irradiation and biological treatment reduced the toxicity of the treated water. The experimental results showed that EB is effective for removing DCF from artificial aqueous solution and real surface water.

  1. Removal of impurities from metallurgical grade silicon by electron beam melting

    International Nuclear Information System (INIS)

    Solar cells are currently fabricated from a variety of silicon-based materials. At present, the major silicon feedstock for solar cells is scrap electronic-grade silicon (EG-Si), but in the current market it is difficult to secure a steady supply of this material. Therefore, alternative production processes are needed to increase the feedstock. In this paper, electron beam melting (EBM) is used to purify silicon. Leached metallurgical-grade silicon (MG-Si) particles with an initial purity of 99.88% by mass were used as the starting material. The final purity of the silicon disk obtained after EBM was above 99.995% by mass. This result demonstrates that EBM can effectively remove impurities from silicon. This paper mainly studies the impurity distribution in the silicon disk after EBM. (semiconductor materials)

  2. Evaluation of sustained release polylactate electron donors for removal of hexavalent chromium from contaminated groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, E.L.; Joyner, D. C.; Faybishenko, B.; Conrad, M. E.; Rios-Velazquez, C.; Mork, B.; Willet, A.; Koenigsberg, S.; Herman, D.; Firestone, M. K.; Hazen, T. C.; Malave, Josue; Martinez, Ramon

    2011-02-15

    To evaluate the efficacy of bioimmobilization of Cr(VI) in groundwater at the Department of Energy Hanford site, we conducted a series of microcosm experiments using a range of commercial electron donors with varying degrees of lactate polymerization (polylactate). These experiments were conducted using Hanford Formation sediments (coarse sand and gravel) immersed in Hanford groundwater, which were amended with Cr(VI) and several types of lactate-based electron donors (Hydrogen Release Compound, HRC; primer-HRC, pHRC; extended release HRC) and the polylactate-cysteine form (Metal Remediation Compound, MRC). The results showed that polylactate compounds stimulated an increase in bacterial biomass and activity to a greater extent than sodium lactate when applied at equivalent carbon concentrations. At the same time, concentrations of headspace hydrogen and methane increased and correlated with changes in the microbial community structure. Enrichment of Pseudomonas spp. occurred with all lactate additions, and enrichment of sulfate-reducing Desulfosporosinus spp. occurred with almost complete sulfate reduction. The results of these experiments demonstrate that amendment with the pHRC and MRC forms result in effective removal of Cr(VI) from solution most likely by both direct (enzymatic) and indirect (microbially generated reductant) mechanisms.

  3. Effect of high electron donor supply on dissimilatory nitrate reduction pathways in a bioreactor for nitrate removal

    DEFF Research Database (Denmark)

    Behrendt, Anna; Tarre, Sheldon; Beliavski, Michael;

    2014-01-01

    The possible shift of a bioreactor for NO3- removal from predominantly denitrification (DEN) to dissimilatory nitrate reduction to ammonium (DNRA) by elevated electron donor supply was investigated. By increasing the C/NO3- ratio in one of two initially identical reactors, the production of high ...

  4. Role of aqueous electron and hydroxyl radical in the removal of endosulfan from aqueous solution using gamma irradiation

    International Nuclear Information System (INIS)

    Highlights: • Removal of endosulfan was assessed by gamma irradiation under different conditions. • Removal of endosulfan by gamma irradiation was mainly due to reaction of aqueous electron. • The radiation yield value decreased while dose constant increased with increasing gamma-ray dose-rate. • Second-order rate constant of endosulfan with aqueous electron was determined by competition kinetic method. • Degradation pathways were proposed from the nature of identified by-products. - Abstract: The removal of endosulfan, an emerging water pollutant, from water was investigated using gamma irradiation based advanced oxidation and reduction processes (AORPs). A significant removal, 97% of initially 1.0 μM endosulfan was achieved at an absorbed dose of 1020 Gy. The removal of endosulfan by gamma-rays irradiation was influenced by an absorbed dose and significantly increased in the presence of aqueous electron (eaq−). However, efficiency of the process was inhibited in the presence of eaq− scavengers, such as N2O, NO3−, acid, and Fe3+. The observed dose constant decreased while radiation yield (G-value) increased with increasing initial concentrations of the target contaminant and decreasing dose-rate. The removal efficiency of endosulfan II was lower than endosulfan I. The degradation mechanism of endosulfan by the AORPs was proposed showing that reductive pathways involving eaq− started at the chlorine attached to the ring while oxidative pathway was initiated due to attack of hydroxyl radical at the S=O bond. The mass balance showed 95% loss of chloride from endosulfan at an absorbed dose of 1020 Gy. The formation of chloride and acetate suggest that gamma irradiation based AORPs are potential methods for the removal of endosulfan and its by-products from contaminated water
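
    For orientation, the back-of-envelope numbers implied by the abstract can be recomputed as below, under the usual assumptions that removal is exponential in dose and that 1 L of dilute solution weighs about 1 kg; these are illustrative re-computations, not the authors' reported fit values.

```python
# Re-computation of the figures quoted above, assuming exponential-in-dose
# removal C = C0 * exp(-k * D) and that 1 L of dilute solution ~ 1 kg; these
# are illustrative estimates only.
import math

C0 = 1.0          # initial endosulfan concentration, umol/L
removed = 0.97    # fraction removed (97%)
dose = 1020.0     # absorbed dose, Gy (J/kg)

k = -math.log(1.0 - removed) / dose      # apparent dose constant, Gy^-1
g_value = removed * C0 / dose            # radiation-chemical yield, umol/J

print(f"apparent dose constant k ~ {k:.2e} Gy^-1")    # about 3.4e-3 Gy^-1
print(f"G-value ~ {g_value:.2e} umol/J")              # about 9.5e-4 umol/J
```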

  5. Removal of multiple electron acceptors by pilot-scale, two-stage membrane biofilm reactors.

    Science.gov (United States)

    Zhao, He-Ping; Ontiveros-Valencia, Aura; Tang, Youneng; Kim, Bi-O; Vanginkel, Steven; Friese, David; Overstreet, Ryan; Smith, Jennifer; Evans, Patrick; Krajmalnik-Brown, Rosa; Rittmann, Bruce

    2014-05-01

    We studied the performance of a pilot-scale membrane biofilm reactor (MBfR) treating groundwater containing four electron acceptors: nitrate (NO3(-)), perchlorate (ClO4(-)), sulfate (SO4(2-)), and oxygen (O2). The treatment goal was to remove ClO4(-) from ∼200 μg/L to less than 6 μg/L. The pilot system was operated as two MBfRs in series, and the positions of the lead and lag MBfRs were switched regularly. The lead MBfR removed at least 99% of the O2 and 63-88% of NO3(-), depending on loading conditions. The lag MBfR was where most of the ClO4(-) reduction occurred, and the effluent ClO4(-) concentration was driven to as low as 4 μg/L, with most concentrations ≤10 μg/L. However, SO4(2-) reduction occurred in the lag MBfR when its NO3(-) + O2 flux was smaller than ∼0.18 g H2/m(2)-d, and this was accompanied by a lower ClO4(-) flux. We were able to suppress SO4(2-) reduction by lowering the H2 pressure and increasing the NO3(-) + O2 flux. We also monitored the microbial community using the quantitative polymerase chain reaction targeting characteristic reductase genes. Due to regular position switching, the lead and lag MBfRs had similar microbial communities. Denitrifying bacteria dominated the biofilm when the NO3(-) + O2 fluxes were highest, but sulfate-reducing bacteria became more important when SO4(2-) reduction was enhanced in the lag MBfR due to low NO3(-) + O2 flux. The practical two-stage strategy to achieve complete ClO4(-) and NO3(-) reduction while suppressing SO4(2-) reduction involved controlling the NO3(-) + O2 surface loading between 0.18 and 0.34 g H2/m(2)-d and using a low H2 pressure in the lag MBfR. PMID:24565802

  6. Enhanced biological phosphorus removal. Carbon sources, nitrate as electron acceptor, and characterization of the sludge community

    Energy Technology Data Exchange (ETDEWEB)

    Christensson, M.

    1997-10-01

    Enhanced biological phosphorus removal (EBPR) was studied in laboratory scale experiments as well as in a full scale EBPR process. The studies were focused on carbon source transformations, the use of nitrate as an electron acceptor and characterisation of the microflora. A continuous anaerobic/aerobic laboratory system was operated on synthetic wastewater with acetate as sole carbon source. An efficient EBPR was obtained and mass balances over the anaerobic reactor showed a production of 1.45 g poly-β-hydroxyalkanoic acids (PHA), measured as chemical oxygen demand (COD), per g of acetic acid (as COD) taken up. Furthermore, phosphate was released in the anaerobic reactor in a ratio of 0.33 g phosphorus (P) per g PHA (COD) formed and 0.64 g of glycogen (COD) was consumed per g of acetic acid (COD) taken up. Microscopic investigations revealed a high amount of polyphosphate accumulating organisms (PAO) in the sludge. Isolation and characterisation of bacteria indicated Acinetobacter spp. to be abundant in the sludge, while sequencing of clones obtained in a 16S rDNA clone library showed a large part of the bacteria to be related to the high mole % G+C Gram-positive bacteria and only a minor fraction to be related to the gamma-subclass of proteobacteria to which Acinetobacter belongs. Operation of a similar anaerobic/aerobic laboratory system with ethanol as sole carbon source showed that a high EBPR can be achieved with this compound as carbon source. However, a prolonged detention time in the anaerobic reactor was required. PHA were produced in the anaerobic reactor in an amount of 1.24 g COD per g of soluble DOC taken up, phosphate was released in an amount of 0.4-0.6 g P per g PHA (COD) produced and 0.46 g glycogen (COD) was consumed per g of soluble COD taken up. Studies of the EBPR in the UCT process at the sewage treatment plant in Helsingborg, Sweden, showed the amount of volatile fatty acids (VFA) available to the PAO in the anaerobic stage to be

  7. Diagnosis of cervical cancer cell taken from scanning electron and atomic force microscope images of the same patients using discrete wavelet entropy energy and Jensen Shannon, Hellinger, Triangle Measure classifier

    Science.gov (United States)

    Aytac Korkmaz, Sevcan

    2016-05-01

    The aim of this article is to provide early detection of cervical cancer by using both Atomic Force Microscope (AFM) and Scanning Electron Microscope (SEM) images of the same patient. A review of the literature shows that AFM and SEM images of the same patient have not previously been used together for early diagnosis of cervical cancer, and either modality alone can be limiting. Therefore, a multi-modality solution, which gives more accurate results than single-modality solutions, is presented in this paper. An optimum feature space was obtained by applying Discrete Wavelet Entropy Energy (DWEE) to the 3 × 180 AFM and SEM images. The optimum features of these images were then classified with the Jensen Shannon, Hellinger, and Triangle Measure (JHT) classifier for early diagnosis of cervical cancer, and the relationships among the Jensen Shannon, Hellinger, and triangle distance measures were validated. Diagnostic accuracy for normal, benign, and malignant cervical cells was then obtained by combining the mean success rates of the Jensen Shannon, Hellinger, and Triangle Measure classifiers. The average diagnostic accuracies for the AFM and SEM images, obtained by averaging the results of these three classifiers, were 98.29% and 97.10%, respectively, indicating that AFM images perform better than SEM images for the early diagnosis of cervical cancer. The analysis of the AFM images also showed that malignant cells have larger surface roughness and smaller particle volume than normal and benign cells.

  8. E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers

    Science.gov (United States)

    Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.

    2005-01-01

    Electronic nose (e-nose) vapor identification is an efficient approach to monitor air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements, and are used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k nearest neighbor (KNN), and Parzen classifier. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers to get the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and the classification rate can be improved by using classifier fusion.
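
    A minimal sketch of Dempster-Shafer fusion of classifier outputs in the spirit of the approach summarized above; the vapor classes, posteriors and the fixed "ignorance" discount used to build the basic probability assignments are assumptions for illustration only:

      from itertools import product

      def combine(m1, m2):
          """Dempster's rule for mass functions over frozenset focal elements."""
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          norm = 1.0 - conflict
          return {k: v / norm for k, v in combined.items()}

      def to_mass(posteriors, discount=0.1):
          """Turn one classifier's class posteriors into a BPA, reserving mass for ignorance."""
          frame = frozenset(posteriors)
          m = {frozenset([c]): (1.0 - discount) * p for c, p in posteriors.items()}
          m[frame] = discount
          return m

      # Example: three hypothetical vapor classes scored by two base classifiers (e.g. an MLP and an SVM)
      mlp = to_mass({"ammonia": 0.7, "benzene": 0.2, "clean_air": 0.1})
      svm = to_mass({"ammonia": 0.5, "benzene": 0.4, "clean_air": 0.1})
      fused = combine(mlp, svm)
      best = max((k for k in fused if len(k) == 1), key=fused.get)
      print("fused decision:", set(best), fused[best])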

  9. Modelling study of NOx removal in oil-fired waste off-gases under electron beam irradiation

    International Nuclear Information System (INIS)

    Computer simulations for high concentration of NOx removal from oil-fired waste off-gases under electron beam irradiation were carried out by using the computer code “Kinetic” and the GEAR method. 293 reactions involving 64 species were used for the modelling calculations. The composition of the simulated oil-fired off-gas was the same as in the experimental conditions. The calculations were made for the following system: (75.78% N2+11.5% CO2+8.62% H2O+4.1% O2), with the NOx concentration varying from 200 ppm to 1500 ppm. The calculation results qualitatively agree with the experimental results. Furthermore, the influence of temperature, SO2 concentration and ammonia addition is discussed. - Highlights: • Modelling study of NOx removal in oil-fired off-gases under EB irradiation. • Energy consumption (i.e. dose) influence on NOx removal efficiency was examined. • The influence of temperature, SO2 concentration and ammonia addition was examined. • NOx removal mechanism from flue gas under electron beam irradiation was elaborated
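
    The GEAR method referred to above corresponds to BDF-type stiff integrators; a minimal sketch with a toy two-reaction scheme (placeholder species and rate constants, not the 293-reaction mechanism of the study) might look like this:

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, c, k1=1e4, k2=1.0):
          no, no2, rad = c                    # toy species: NO, NO2, oxidizing radical
          r1 = k1 * no * rad                  # NO + radical -> NO2
          r2 = k2 * no2                       # slow NO2 loss (placeholder channel)
          return [-r1, r1 - r2, -r1]

      c0 = [1.5e-6, 0.0, 5.0e-7]              # initial mole fractions (illustrative)
      sol = solve_ivp(rhs, (0.0, 10.0), c0, method="BDF", rtol=1e-8, atol=1e-14)
      print("final NO, NO2:", sol.y[0, -1], sol.y[1, -1])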

  10. New method to remove the electronic noise for absolutely calibrating low gain photomultiplier tubes with a higher precision

    International Nuclear Information System (INIS)

    A new method to remove the electronic noise in order to absolutely calibrate low gain photomultiplier tubes with a higher precision is proposed and validated with experiments using a digitizer-based data acquisition system. This method utilizes the fall time difference between the electronic noise (about 0.5 ns) and the real PMT signal (about 2.4 ns for Hamamatsu H10570 PMT assembly). Using this technique along with a convolution algorithm, the electronic noise and the real signals are separated very well, even including the very small signals heavily influenced by the electronic noise. One application that this method allows is for us to explore the energy relationship for gamma sensing in Cherenkov radiators while maintaining the fastest possible timing performance and high dynamic range
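
    A minimal sketch of the fall-time discrimination idea described above; the sampling step, the 90%-10% fall-time definition, the decision cut and the toy waveforms are illustrative assumptions, not the authors' exact algorithm (which also uses a convolution step):

      import numpy as np

      DT_NS = 0.1  # assumed digitizer sampling interval, ns

      def fall_time(pulse, lo=0.1, hi=0.9):
          """Return the 90%-to-10% fall time (ns) of the dominant pulse in a waveform."""
          p = np.abs(pulse - np.median(pulse))
          peak = np.argmax(p)
          amp = p[peak]
          tail = p[peak:]
          t_hi = np.argmax(tail <= hi * amp)      # first sample below 90% of peak
          t_lo = np.argmax(tail <= lo * amp)      # first sample below 10% of peak
          return (t_lo - t_hi) * DT_NS

      def is_real_signal(pulse, cut_ns=1.2):
          """Label a waveform as a real PMT pulse if its fall time exceeds the cut."""
          return fall_time(pulse) > cut_ns

      # Toy waveforms: a sharp noise spike vs. an exponential PMT-like tail
      t = np.arange(0, 20, DT_NS)
      noise = np.exp(-((t - 5.0) / 0.2) ** 2)                   # narrow spike
      pmt = np.where(t < 5.0, 0.0, np.exp(-(t - 5.0) / 2.4))    # slower exponential fall
      print(is_real_signal(noise), is_real_signal(pmt))         # expect False, True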

  11. Increased electric sail thrust through removal of trapped shielding electrons by orbit chaotisation due to spacecraft body

    Directory of Open Access Journals (Sweden)

    P. Janhunen

    2009-08-01

    Full Text Available An electric solar wind sail is a recently introduced propellantless space propulsion method whose technical development has also started. The electric sail consists of a set of long, thin, centrifugally stretched and conducting tethers which are charged positively and kept in a high positive potential of order 20 kV by an onboard electron gun. The positively charged tethers deflect solar wind protons, thus tapping momentum from the solar wind stream and producing thrust. The amount of obtained propulsive thrust depends on how many electrons are trapped by the potential structures of the tethers, because the trapped electrons tend to shield the charged tether and reduce its effect on the solar wind. Here we present physical arguments and test particle calculations indicating that in a realistic three-dimensional electric sail spacecraft there exists a natural mechanism which tends to remove the trapped electrons by chaotising their orbits and causing them to eventually collide with the conducting tethers. We present calculations which indicate that if these mechanisms were able to remove trapped electrons nearly completely, the electric sail performance could be about five times higher than previously estimated, about 500 nN/m, corresponding to 1 N thrust for a baseline construction with 2000 km total tether length.

  12. Study on decomposition and removal of organic pollutants in gases using electron beams

    International Nuclear Information System (INIS)

    Volatile organic compounds (VOC) used as solvents and de-oiling reagents have been emitted to the atmosphere and subsequently oxidized into toxic photochemical oxidants in the atmosphere. Reduction of the emission of VOC has been required under law and regulations for factories/plants at which huge amounts of VOC are used. Electron beam (EB) treatment is suitable for purification of high flow-rate ventilation air containing dilute VOC emitted from such factories/plants. The purification processes of such ventilation air have been developed based on the decomposition reactions and property changes of VOC. The results for chloro-ethylenes and aromatic hydrocarbons, which have been emitted in abundant quantities, are introduced in the present paper. Chloroethylenes, except for monochloroethylene, were oxidized into water-soluble primary products through chain reactions in EB irradiated humid air. The chain oxidation reactions of such chloro-ethylenes were initiated exclusively by a reaction with OH radicals, not by electron-attachment dissociation under EB irradiation. Gas-phase termination reactions involved the bimolecular reaction of alkylperoxyl radicals for tri- and di-chloroethylenes, and the reaction of alkylperoxyl radicals and alkyl radicals besides such a bimolecular reaction for tetrachloroethylene. The deposition of the alkylperoxyl radicals on the irradiation vessel wall also terminated the chain oxidation reactions. The solid-phase termination reaction was negligible compared to the gas-phase termination reactions under high-dose-rate irradiation, so that the oxidation of chloro-ethylenes was achieved with lower doses under high-dose-rate irradiation such as EB irradiation. The hydrolysis of the primary products combined with EB irradiation is a prospective route for the purification of chloroethylenes/air mixtures with lower doses. Under irradiation of aromatic hydrocarbons/air mixtures, toxic and oxidation-resistant particles with mean diameters of a few

  13. Recognition Using Hybrid Classifiers.

    Science.gov (United States)

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply. PMID:26959677
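
    A minimal sketch of the "positives versus prior" idea summarized above, with the negative set replaced by samples drawn from a simple Gaussian prior and a linear SVM fitting the separating hyperplane; the prior, dimensions and cluster layout are assumptions for illustration, not the authors' exact formulation:

      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      d, n_pos, n_prior = 20, 200, 2000

      # Positive class: a compact cluster in feature space
      X_pos = rng.normal(loc=1.5, scale=0.3, size=(n_pos, d))

      # "Negative" side: samples from a broad zero-mean Gaussian prior over the feature space
      X_prior = rng.normal(loc=0.0, scale=1.0, size=(n_prior, d))

      X = np.vstack([X_pos, X_prior])
      y = np.concatenate([np.ones(n_pos), np.zeros(n_prior)])

      clf = LinearSVC(C=1.0).fit(X, y)   # hyperplane separating positives from the prior
      print("training accuracy:", clf.score(X, y))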

  14. Dynamic system classifier

    CERN Document Server

    Pumpe, Daniel; Müller, Ewald; Enßlin, Torsten A

    2016-01-01

    Stochastic differential equations describe well many physical, biological and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of DSC to oscillation processes with a time dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiment...
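
    A minimal sketch of the time-dependent damped oscillator underlying the DSC abstraction, x'' + 2·γ(t)·x' + ω(t)²·x = noise, integrated with a simple Euler-Maruyama scheme; the ω(t) and γ(t) timelines, noise level and step size are illustrative placeholders:

      import numpy as np

      def simulate(omega, gamma, dt=1e-3, t_max=10.0, sigma=0.1, seed=0):
          """Integrate the stochastic oscillator and return the position trajectory."""
          rng = np.random.default_rng(seed)
          n = int(t_max / dt)
          x, v = 1.0, 0.0
          xs = np.empty(n)
          for i in range(n):
              t = i * dt
              a = -2.0 * gamma(t) * v - omega(t) ** 2 * x
              v += a * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              x += v * dt
              xs[i] = x
          return xs

      # Slowly drifting frequency and damping "timelines" characterizing the system
      traj = simulate(omega=lambda t: 2.0 + 0.2 * t, gamma=lambda t: 0.1 + 0.02 * t)
      print(traj[:5])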

  15. Removal of cadmium ions from wastewater using innovative electronic waste-derived material

    International Nuclear Information System (INIS)

    Highlights: • A novel developed adsorbent material derived from waste printed circuit boards’ component. • The innovative adsorbent material can effectively remove cadmium ions from aqueous solutions. • The maximum capacity for cadmium ion removal is 2.1 mmol/g. • Cadmium removal capacity is either equivalent or better than commercial resins. - Abstract: Cadmium is a highly toxic heavy metal even at a trace level. In this study, a novel material derived from waste PCBs has been applied as an adsorbent to remove cadmium ions from aqueous solutions. The effects of various factors including contact time, initial cadmium ion concentration, pH and adsorbent dosage have been evaluated. The maximum uptake capacity of the newly derived material for cadmium ions has reached 2.1 mmol/g at an initial pH 4. This value shows that this material can effectively remove cadmium ions from effluent. The equilibrium isotherm has been analyzed using several isotherm equations and is best described by the Redlich–Peterson model. Furthermore, different commercial adsorbent resins have been studied for comparison purposes. The results further confirm that this activated material is highly competitive with its commercial counterparts
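
    A minimal sketch of fitting the Redlich-Peterson isotherm, q = K·Ce / (1 + a·Ce^g), to equilibrium data of the kind described above; the data points and starting values are illustrative placeholders, not the study's measurements:

      import numpy as np
      from scipy.optimize import curve_fit

      def redlich_peterson(ce, K, a, g):
          return K * ce / (1.0 + a * ce ** g)

      # Equilibrium concentration (mmol/L) and uptake (mmol/g), illustrative only
      ce = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
      qe = np.array([0.30, 0.55, 1.00, 1.40, 1.75, 1.95, 2.05])

      popt, _ = curve_fit(redlich_peterson, ce, qe, p0=[10.0, 5.0, 0.9], maxfev=10000)
      K, a, g = popt
      print(f"K={K:.3f} L/g, a={a:.3f}, g={g:.3f}; model q at Ce=4 ~ {redlich_peterson(4.0, *popt):.2f} mmol/g")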

  16. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  17. Classifying Cereal Data

    Science.gov (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  18. Embedded feature ranking for ensemble MLP classifiers

    OpenAIRE

    Windeatt, T; Duangsoithong, R; Smith, R

    2011-01-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.
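
    A generic sketch of out-of-bootstrap (OOB) permutation ranking for an ensemble of MLP base classifiers, in the spirit of the scheme summarized above (the paper's exact ranking rule and stopping criterion may differ); the synthetic dataset and ensemble size are assumptions:

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.datasets import make_classification

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=400, n_features=10, n_informative=4, random_state=0)
      n, d = X.shape
      n_models = 15
      importance = np.zeros(d)

      for _ in range(n_models):
          boot = rng.integers(0, n, n)                    # bootstrap sample indices
          oob = np.setdiff1d(np.arange(n), boot)          # out-of-bootstrap indices
          clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X[boot], y[boot])
          base = clf.score(X[oob], y[oob])
          for j in range(d):                              # permutation importance on OOB data
              Xp = X[oob].copy()
              Xp[:, j] = rng.permutation(Xp[:, j])
              importance[j] += base - clf.score(Xp, y[oob])

      ranking = np.argsort(-importance / n_models)
      print("features ranked most to least relevant:", ranking)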

  19. Whole acute toxicity removal from industrial and domestic effluents treated by electron beam radiation: emphasis on anionic surfactants

    International Nuclear Information System (INIS)

    Electron beam radiation has been applied to improve real industrial and domestic effluents received by the Suzano wastewater treatment plant. Radiation efficacy has been evaluated as toxicity reduction, using two biological assays. Three sites were sampled and submitted for toxicity assays, anionic surfactant determination and electron beam irradiation. This paper shows the reduction of acute toxicity for both test organisms, the marine bacterium Vibrio fischeri and the crustacean Daphnia similis. The raw toxic effluents exhibited anionic surfactant concentrations from 0.6 ppm up to 11.67 ppm before being treated by the electron beam. Radiation processing resulted in reduction of the acute toxicity as well as surfactant removal. The final biological effluent was in general less toxic than the other sites, but the presence of anionic surfactants was evidenced

  20. Whole acute toxicity removal from industrial and domestic effluents treated by electron beam radiation: emphasis on anionic surfactants

    Energy Technology Data Exchange (ETDEWEB)

    Moraes, M.C.F. E-mail: mariacristinafm@uol.com.br; Romanelli, M.F; Sena, H.C.; Pasqualini da Silva, G.; Sampa, M.H.O.; Borrely, S.I

    2004-10-01

    Electron beam radiation has been applied to improve real industrial and domestic effluents received by the Suzano wastewater treatment plant. Radiation efficacy has been evaluated as toxicity reduction, using two biological assays. Three sites were sampled and submitted for toxicity assays, anionic surfactant determination and electron beam irradiation. This paper shows the reduction of acute toxicity for both test organisms, the marine bacterium Vibrio fischeri and the crustacean Daphnia similis. The raw toxic effluents exhibited anionic surfactant concentrations from 0.6 ppm up to 11.67 ppm before being treated by the electron beam. Radiation processing resulted in reduction of the acute toxicity as well as surfactant removal. The final biological effluent was in general less toxic than the other sites, but the presence of anionic surfactants was evidenced.

  1. Whole acute toxicity removal from industrial and domestic effluents treated by electron beam radiation: emphasis on anionic surfactants

    Science.gov (United States)

    Moraes, M. C. F.; Romanelli, M. F.; Sena, H. C.; Pasqualini da Silva, G.; Sampa, M. H. O.; Borrely, S. I.

    2004-09-01

    Electron beam radiation has been applied to improve real industrial and domestic effluents received by the Suzano wastewater treatment plant. Radiation efficacy has been evaluated as toxicity reduction, using two biological assays. Three sites were sampled and submitted for toxicity assays, anionic surfactant determination and electron beam irradiation. This paper shows the reduction of acute toxicity for both test organisms, the marine bacterium Vibrio fischeri and the crustacean Daphnia similis. The raw toxic effluents exhibited anionic surfactant concentrations from 0.6 ppm up to 11.67 ppm before being treated by the electron beam. Radiation processing resulted in reduction of the acute toxicity as well as surfactant removal. The final biological effluent was in general less toxic than the other sites, but the presence of anionic surfactants was evidenced.

  2. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  3. Intelligent Garbage Classifier

    OpenAIRE

    Ignacio Rodríguez Novelle; Javier Pérez Cid; Alvaro Salmador

    2008-01-01

    IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  4. Classifier in Age classification

    OpenAIRE

    B. Santhi; R.Seethalakshmi

    2012-01-01

    The face is an important feature of human beings, and various properties of a person can be derived by analyzing the face. The objective of the study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely child and old. This st...

  5. Classified Stable Matching

    CERN Document Server

    Huang, Chien-Chung

    2009-01-01

    We introduce the {\\sc classified stable matching} problem, a problem motivated by academic hiring. Suppose that a number of institutes are hiring faculty members from a pool of applicants. Both institutes and applicants have preferences over the other side. An institute classifies the applicants based on their research areas (or any other criterion), and, for each class, it sets a lower bound and an upper bound on the number of applicants it would hire in that class. The objective is to find a stable matching from which no group of participants has reason to deviate. Moreover, the matching should respect the upper/lower bounds of the classes. In the first part of the paper, we study classified stable matching problems whose classifications belong to a fixed set of ``order types.'' We show that if the set consists entirely of downward forests, there is a polynomial-time algorithm; otherwise, it is NP-complete to decide the existence of a stable matching. In the second part, we investigate the problem using a p...

  6. Impact of the amount of working fluid in loop heat pipe to remove waste heat from electronic component

    Directory of Open Access Journals (Sweden)

    Smitka Martin

    2014-03-01

    Full Text Available One of the options for removing waste heat from electronic components is the loop heat pipe. The loop heat pipe (LHP) is a two-phase device with high effective thermal conductivity that utilizes phase change to transport heat. It was invented in Russia in the early 1980’s. The main parts of the LHP are an evaporator, a condenser, a compensation chamber and the vapor and liquid lines. Only the evaporator and part of the compensation chamber are equipped with a wick structure. Inside the loop heat pipe is a working fluid; distilled water, acetone, ammonia, methanol, etc. can be used. The amount of filling is important for the operation and performance of the LHP. This work deals with the design of a loop heat pipe and the impact of the working fluid filling ratio on removing waste heat from an insulated gate bipolar transistor (IGBT).

  7. Impact of the amount of working fluid in loop heat pipe to remove waste heat from electronic component

    Science.gov (United States)

    Smitka, Martin; Kolková, Z.; Nemec, Patrik; Malcho, M.

    2014-03-01

    One of the options for removing waste heat from electronic components is the loop heat pipe. The loop heat pipe (LHP) is a two-phase device with high effective thermal conductivity that utilizes phase change to transport heat. It was invented in Russia in the early 1980's. The main parts of the LHP are an evaporator, a condenser, a compensation chamber and the vapor and liquid lines. Only the evaporator and part of the compensation chamber are equipped with a wick structure. Inside the loop heat pipe is a working fluid; distilled water, acetone, ammonia, methanol, etc. can be used. The amount of filling is important for the operation and performance of the LHP. This work deals with the design of a loop heat pipe and the impact of the working fluid filling ratio on removing waste heat from an insulated gate bipolar transistor (IGBT).

  8. Modelling of phenol removal in aqueous solution depending on the electron beam energy

    International Nuclear Information System (INIS)

    This paper deals with the influence of the electron beam energy (E = 1.2-3 MeV; I = 20-125 μA; dose rate DR = 1.3-8.3 kGy/s) on the degradation of phenol in aqueous solution. The decomposition of phenol and the concentration of its principal by-products are significantly influenced by the energy of the electron beam. The degradation yield increases with the electron energy. A simplified phenomenological model of the reactor was proposed to describe the results

  9. Numerical investigation on the variation of welding stresses after material removal from a thick titanium alloy plate joined by electron beam welding

    International Nuclear Information System (INIS)

    Highlights: → After less material removal from the top, stresses on the bottom remain unchanged. → The transverse stress within the weld decreases significantly with material removal. → Local material removal does not influence the longitudinal stress significantly. -- Abstract: The stress modification after material removal from a 50 mm thick titanium alloy plate joined by electron beam welding (EBW) was investigated through the finite element method (FEM). The welding experiment and milling process were carried out to experimentally determine the stresses induced by EBW and their modification after local material removal. The modification of as-welded stresses due to the local material removal method and the whole layer removal method was discussed with the finite element analysis. The results showed that with less material removal from the top, the stresses on the bottom surface remain almost unchanged; after material removal from the top and bottom part, the transverse stress on the newly-formed surface decreases significantly as compared to the as-welded stresses at the same locations; however, the stress modification only occurs at the material removal region in the case of the local region removal method; the longitudinal stress decreases with the whole layer removal method, while remaining almost unchanged with the local region removal method.

  10. Method and apparatus for removing heat from electronic devices using synthetic jets

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Rajdeep; Weaver, Jr., Stanton Earl; Seeley, Charles Erklin; Arik, Mehmet; Icoz, Tunc; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas

    2014-04-15

    An apparatus for removing heat comprises a heat sink having a cavity, and a synthetic jet stack comprising at least one synthetic jet mounted within the cavity. At least one rod and at least one engaging structure provide rigid positioning of the at least one synthetic jet with respect to the at least one rod. The synthetic jet comprises at least one orifice through which a fluid is ejected.

  11. Method and apparatus for removing heat from electronic devices using synthetic jets

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Rajdeep; Weaver, Stanton Earl; Seeley, Charles Erklin; Arik, Mehmet; Icoz, Tunc; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas

    2015-11-24

    An apparatus for removing heat comprises a heat sink having a cavity, and a synthetic jet stack comprising at least one synthetic jet mounted within the cavity. At least one rod and at least one engaging structure provide rigid positioning of the at least one synthetic jet with respect to the at least one rod. The synthetic jet comprises at least one orifice through which a fluid is ejected.

  12. The Effect of Fragaria vesca Extract on Smear Layer Removal: A Scanning Electron Microscopic Evaluation

    OpenAIRE

    Davoudi, Amin; Razavi, Sayed Alireza; Mosaddeghmehrjardi, Mohammad Hossein; Tabrizizadeh, Mehdi

    2015-01-01

    Introduction: Successful endodontic treatment depends on elimination of the microorganisms through chemomechanical debridement. The aim of this in vitro study was to evaluate the effectiveness of Fragaria vesca (wild strawberry) extract (FVE) on the removal of smear layer (SL). Methods and Materials: In this analytical-observational study, 40 extracted mandibular and maxillary human teeth were selected. After canal preparation with standard step-back technique, the teeth were randomly divided...

  13. Ionic Polymer-Based Removable and Charge-Dissipative Coatings for Space Electronic Applications Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Protection of critical electronic systems in spacecraft and satellites is imperative for NASA's future missions to high-energy, outer-planet environments. The...

  14. The physico-chemical bases of simultaneous SO2 and NOx removal technology from combustion gases by means of electron beam

    International Nuclear Information System (INIS)

    The physico-chemical bases of the electron beam process for simultaneous removal of SO2 and NOx from flue gases have been presented. The influence of multistage irradiation, as well as of the distribution of energy deposition in the flue gas, on the NOx removal efficiency has been discussed. (author). 3 refs, 7 figs

  15. Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment

    Science.gov (United States)

    Smitka, Martin; Nemec, Patrik; Malcho, Milan

    2014-08-01

    The loop heat pipe is a vapour-liquid phase-change device that transfers heat from the evaporator to the condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of the LHP, capillary wicks with high permeability and porosity and a fine pore radius are required. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These porous wicks were used in the LHP, and a series of measurements was performed on removing waste heat from an insulated gate bipolar transistor (IGBT).

  16. Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment

    International Nuclear Information System (INIS)

    The loop heat pipe is a vapour-liquid phase-change device that transfers heat from the evaporator to the condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of the LHP, capillary wicks with high permeability and porosity and a fine pore radius are required. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These porous wicks were used in the LHP, and a series of measurements was performed on removing waste heat from an insulated gate bipolar transistor (IGBT)

  17. Scanning electron microscopy of root canal walls after removing the smear layer

    OpenAIRE

    Gašić Jovanka; Dačić-Simonović Dragica; Radičević Goran; Mitić Aleksandar; Stojilković Goran; Daković Jelena

    2003-01-01

    The purpose of this study was to investigate ultrastructurally the effect of smear layer removal by applying different root canal irrigants: 3% H2O2, 4% NaOCl and 15% Na-EDTA (in combination with 3% H2O2 or 4% NaOCl), and to establish the appearance of the root canal dentine surface after treatment with 15% Na-EDTA for different time periods (1 min and 5 min) using an additional irrigant of 3% H2O2 or 4% NaOCl. Teeth with single and double canals, extracted for orthodontic reasons, were used in this study. A...

  18. Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment

    Energy Technology Data Exchange (ETDEWEB)

    Smitka, Martin, E-mail: martin.smitka@fstroj.uniza.sk; Nemec, Patrik, E-mail: patrik.nemec@fstroj.uniza.sk; Malcho, Milan, E-mail: milan.malcho@fstroj.uniza.sk [University of Žilina, Faculty of Mechanical Engineering, Department of Power Engeneering, Univerzitna 1, 010 26 Žilina (Slovakia)]

    2014-08-06

    The loop heat pipe is a vapour-liquid phase-change device that transfers heat from the evaporator to the condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of the LHP, capillary wicks with high permeability and porosity and a fine pore radius are required. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These porous wicks were used in the LHP, and a series of measurements was performed on removing waste heat from an insulated gate bipolar transistor (IGBT)

  19. Organic substrates as electron donors in permeable reactive barriers for removal of heavy metals from acid mine drainage.

    Science.gov (United States)

    Kijjanapanich, P; Pakdeerattanamint, K; Lens, P N L; Annachhatre, A P

    2012-12-01

    This research was conducted to select suitable natural organic substrates as potential carbon sources for use as electron donors for biological sulphate reduction in a permeable reactive barrier (PRB). A number of organic substrates were assessed through batch and continuous column experiments under anaerobic conditions with acid mine drainage (AMD) obtained from an abandoned lignite coal mine. To keep the heavy metal concentration at a constant level, the AMD was supplemented with heavy metals whenever necessary. Under anaerobic conditions, sulphate-reducing bacteria (SRB) converted sulphate into sulphide using the organic substrates as electron donors. The sulphide that was generated precipitated heavy metals as metal sulphides. Organic substrates, which yielded the highest sulphate reduction in batch tests, were selected for continuous column experiments which lasted over 200 days. A mixture of pig-farm wastewater treatment sludge, rice husk and coconut husk chips yielded the best heavy metal (Fe, Cu, Zn and Mn) removal efficiencies of over 90%. PMID:23437664

  20. Polycyclic aromatic hydrocarbons removal from flue gas by electron beam treatment - Pilot plant tests

    International Nuclear Information System (INIS)

    Volatile organic compounds (VOCs) emitted from coal combustion include aliphatic, chlorinated and aromatic hydrocarbons and aldehydes, but polycyclic aromatic hydrocarbons (PAHs) are considered the most dangerous. Many of them are involved in the formation of photochemical smog and depletion of stratospheric ozone. Some PAHs are mutagenic, carcinogenic or both. Tests at the pilot plant constructed at a coal-fired power station were performed to estimate the influence of the electron beam on the PAHs concentration in the flue gas. The influence of the electron beam dose on the global toxicity of the flue gas components has been analyzed. The concentrations of PAHs decreased after irradiation. (author)

  1. NOx and PAHs removal from industrial flue gas by using electron beam technology in the alcohol addition

    International Nuclear Information System (INIS)

    Complete text of publication follows. Preliminary tests of NOx and polycyclic aromatic hydrocarbon (PAHs) removal from flue gas with alcohol addition were investigated using electron beam irradiation at EPS Kaweczyn. Experimental conditions were as follows: flue gas flow rate 5000 Nm3/hr; humidity 4-5%; inlet concentrations of SO2 and NOx emitted from the power station of 192 ppm and 106 ppm, respectively; ammonia addition of 2.75 m3/hr; alcohol addition of 600 l/hr. It was found that the NOx removal efficiency in the presence of alcohol was about 10% higher than without alcohol addition when the absorbed dose was below 6 kGy, and decreased when the absorbed dose was higher than 10 kGy. In order to understand the behavior of PAHs under EB irradiation, inlet PAH samples (emitted from the coal combustion process) and outlet PAH samples (after irradiation) were collected using a condensation bottle connected with XAD-2 and activated carbon adsorbents and were analyzed by GC-MS. It was found that at an absorbed dose of 8 kGy the concentrations of PAHs with few aromatic rings (≤3, except acenaphthylene) are reduced and the concentrations of PAHs with more aromatic rings (≥4) are increased. A possible mechanism is proposed

  2. Evaluation of toxicity and removal of color in textile effluent treated with electron beam

    International Nuclear Information System (INIS)

    The textile industry is among the main industrial activities in Brazil, being relevant in number of jobs, quantity and diversity of products and, mainly, in the volume of water used in industrial processes and of effluent generated. These effluents are complex mixtures characterized by the presence of dyes, surfactants, metal-sequestering agents, salts and other chemicals that are potentially toxic to aquatic biota. Considering the lack of adequate treatment for these wastes, new technologies are essential, notably advanced oxidation processes such as ionizing radiation from an electron beam. This study comprised the preparation of a standard textile effluent in a chemical laboratory and its treatment with an electron beam from an electron accelerator, in order to reduce the toxicity and the intense coloration resulting from the C.I. Blue 222 dye. The treatment reduced the toxicity to the exposed organisms, with 34.55% efficiency for the microcrustacean Daphnia similis and 47.83% for the rotifer Brachionus plicatilis at a dose of 2.5 kGy. The bacterium Vibrio fischeri showed better results after treatment at a dose of 5 kGy, with 57.29% efficiency. Color reduction was greater than 90% at a dose of 2.5 kGy. Preliminary tests were also carried out on the sensitivity of the D. similis and V. fischeri organisms to some of the products used in bleaching and dyeing, together with two simulations of water reuse in new textile processing after treating the effluent with the electron beam. (author)

  3. A comparative scanning electron microscopy evaluation of smear layer removal with apple vinegar and sodium hypochlorite associated with EDTA

    Directory of Open Access Journals (Sweden)

    George Táccio de Miranda Candeiro

    2011-12-01

    Full Text Available OBJECTIVE: The purpose of this study was to evaluate by scanning electron microscopy (SEM) the removal of smear layer from the middle and apical root thirds after use of different irrigating solutions. MATERIAL AND METHODS: Forty roots of permanent human teeth had their canals instrumented and were randomly assigned to 4 groups (n=10), according to the irrigating solution: apple vinegar (group A), apple vinegar finished with 17% ethylenediaminetetraacetic acid (EDTA) (group B), 1% sodium hypochlorite (NaOCl) finished with 17% EDTA (group C) and saline (group D - control). After chemomechanical preparation, the roots were cleaved longitudinally and their middle and apical thirds were examined by SEM at ×1,000 magnification. Two calibrated examiners (kappa=0.92) analyzed the SEM micrographs qualitatively, attributing scores that indicated the efficacy of the solutions in removing the smear layer from the surface of the dentin tubules (1 - poor, 2 - good and 3 - excellent). Data from the control and experimental groups were analyzed by the Kruskal-Wallis and Dunn's tests, while the Wilcoxon test was used to compare the middle and apical thirds of the canals within the same group (α=0.05). RESULTS: The middle third presented a smaller amount of smear layer than the apical third, regardless of the irrigant. There was a statistically significant difference (p=0.0402) among the groups in the middle third. In the apical third, the apple vinegar/EDTA group showed the greatest removal of smear layer (p=0.0373). CONCLUSION: Apple vinegar, associated or not with EDTA, was effective in removing the smear layer when used as an endodontic irrigant.
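
    A minimal sketch of the nonparametric comparison used above (Kruskal-Wallis across irrigant groups on ordinal smear-layer scores); the score vectors are invented placeholders rather than the study's data, and Dunn's post hoc test and the Wilcoxon comparison of root thirds would follow the same pattern:

      from scipy.stats import kruskal

      # Smear-layer scores (1=poor, 2=good, 3=excellent) for the middle third, n=10 per group
      group_a = [2, 2, 3, 2, 1, 2, 2, 3, 2, 2]   # apple vinegar
      group_b = [3, 3, 2, 3, 3, 2, 3, 3, 2, 3]   # apple vinegar + 17% EDTA
      group_c = [3, 2, 3, 3, 2, 3, 3, 2, 3, 3]   # 1% NaOCl + 17% EDTA
      group_d = [1, 1, 2, 1, 1, 1, 2, 1, 1, 1]   # saline (control)

      h, p = kruskal(group_a, group_b, group_c, group_d)
      print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")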

  4. Educating Health Professionals about the Electronic Health Record (EHR): Removing the Barriers to Adoption

    OpenAIRE

    Paule Bellwood; Brian Armstrong; Ronald S. Joe; Elizabeth Borycki; Rebecca Campbell

    2011-01-01

    In the healthcare industry we have had a significant rise in the use of electronic health records (EHRs) in health care settings (e.g. hospital, clinic, physician office and home). There are three main barriers that have arisen to the adoption of these technologies: (1) a shortage of health professional faculty who are familiar with EHRs and related technologies, (2) a shortage of health informatics specialists who can implement these technologies, and (3) poor access to differing types of EH...

  5. Effect of accelerated electron beam on pesticides removal of effluents from flower plantations

    International Nuclear Information System (INIS)

    The flower industry in Ecuador uses a great quantity of pesticides for flower growing; many of them are toxic and non-biodegradable, and they contaminate the various effluents. This research focused on the possibility of using electron beam radiation generated by an electron accelerator to decrease the concentration of pesticides in effluents both from flower cultivation and from the treatment of flowers. The research started with a survey of twelve flower plantations located in the provinces of Pichincha and Cotopaxi (Ecuador), in order to determine the classes of pesticides used, how they are applied before, during and after the fumigation process, the staff working in the flower industry and the effluent treatment methods in use. Information on pesticide imports, exports of the different classes of flowers and flower sales was compiled in order to select the pesticides to be studied. The study of the electron beam influence was carried out with 6 pesticides considered toxic (Diazinon, procloraz, imidacloprid, dimetoato, carbofuran and metiocarb). The studied variables were: irradiation dose, pesticide concentration, irradiation atmosphere and pH effect. In addition, pH changes and the formation of nitrites, nitrates, sulphates, sulfides, ammonium ion and cyanides after the irradiation of the pesticides in aqueous solution were analyzed. In general, the degradation obtained was 99% for the pesticides procloraz, imidacloprid, carbofuran and dimetoato, and 67% for metiocarb, at a pesticide concentration of 50 ppm and an irradiation dose of 5 kGy. (The author)

  6. [Effects of carbon sources, temperature and electron acceptors on biological phosphorus removal].

    Science.gov (United States)

    Han, Yun; Xu, Song; Dong, Tao; Wang, Bin-Fan; Wang, Xian-Yao; Peng, Dang-Cong

    2015-02-01

    The effects of carbon sources, temperature and electron acceptors on phosphorus uptake and release were investigated in a pilot-scale oxidation ditch. Phosphorus uptake and release rates were measured with different carbon sources (domestic sewage, sodium acetate, glucose) at 25 degrees C. The results showed that the lowest phosphorus uptake and release rates were obtained with glucose, at 5.12 mg/(g·h) and 6.43 mg/(g·h), respectively, while those of domestic sewage were similar to those of sodium acetate. Phosphorus uptake and release rates increased with increasing temperature (12, 16, 20 and 25 degrees C) using sodium acetate as the carbon source. The anoxic phosphorus uptake rate decreased with added COD. Electron acceptors (oxygen, nitrate, nitrite) had significant effects on the phosphorus uptake rate, in the order oxygen > nitrate > nitrite. The mass ratios of anoxic P uptake to N consumption (P uptake/N consumption) for nitrate and nitrite were 0.96 and 0.65, respectively. PMID:26031087

  7. Effect of accelerated electron beam on pesticides removal of effluents from flower plantations

    International Nuclear Information System (INIS)

    The flower industry in Ecuador uses a great quantity of pesticides for flower growing. Many of them are toxic and non-biodegradable, and they contaminate the various effluents. This research focused on the possibility of using electron beam radiation to decrease the concentration of pesticides in effluents. The research started with a survey of twelve flower plantations in Ecuador, and information on pesticide imports, exports of the different classes of flowers and flower sales was compiled. The study of the electron beam influence was carried out with 6 pesticides considered toxic (Diazinon, procloraz, imidacloprid, dimetoato, carbofuran and metiocarb). The studied variables were irradiation dose, pesticide concentration, aeration and pH effect. In addition, pH changes and the formation of nitrites, nitrates, sulfates, sulfides, ammonium ion and cyanides after the irradiation of the pesticides in aqueous solution were analyzed. In general, the degradation obtained was 99% for the pesticides procloraz, imidacloprid, carbofuran and dimetoato, and 67% for metiocarb when the pesticide concentration was 50 ppm and the irradiation dose was 5 kGy. (author)

  8. Modeling the Effect of External Carbon Source Addition under Different Electron Acceptor Conditions in Biological Nutrient Removal Activated Sludge Systems.

    Science.gov (United States)

    Hu, Xiang; Wisniewski, Kamil; Czerwionka, Krzysztof; Zhou, Qi; Xie, Li; Makinia, Jacek

    2016-02-16

    The aim of this study was to expand the International Water Association Activated Sludge Model No. 2d (ASM2d) to predict the aerobic/anoxic behavior of polyphosphate accumulating organisms (PAOs) and "ordinary" heterotrophs in the presence of different external carbon sources and electron acceptors. The following new aspects were considered: (1) a new type of the readily biodegradable substrate, not available for the anaerobic activity of PAOs, (2) nitrite as an electron acceptor, and (3) acclimation of "ordinary" heterotrophs to the new external substrate via enzyme synthesis. The expanded model incorporated 30 new or modified process rate equations. The model was evaluated against data from several, especially designed laboratory experiments which focused on the combined effects of different types of external carbon sources (acetate, ethanol and fusel oil) and electron acceptors (dissolved oxygen, nitrate and nitrite) on the behavior of PAOs and "ordinary" heterotrophs. With the proposed expansions, it was possible to improve some deficiencies of the ASM2d in predicting the behavior of biological nutrient removal (BNR) systems with the addition of external carbon sources, including the effect of acclimation to the new carbon source. PMID:26783836

  9. Educating Health Professionals about the Electronic Health Record (EHR: Removing the Barriers to Adoption

    Directory of Open Access Journals (Sweden)

    Paule Bellwood

    2011-03-01

    Full Text Available In the healthcare industry we have had a significant rise in the use of electronic health records (EHRs) in health care settings (e.g. hospital, clinic, physician office and home). There are three main barriers that have arisen to the adoption of these technologies: (1) a shortage of health professional faculty who are familiar with EHRs and related technologies, (2) a shortage of health informatics specialists who can implement these technologies, and (3) poor access to differing types of EHR software. In this paper we outline a novel solution to these barriers: the development of a web portal that provides faculty and health professional students with access to multiple differing types of EHRs over the WWW. The authors describe how the EHR is currently being used in educational curricula and how it has overcome many of these barriers. The authors also briefly describe the strengths and limitations of the approach.

  10. Effect of residual chips on the material removal process of the bulk metallic glass studied by in situ scratch testing inside the scanning electron microscope

    OpenAIRE

    Hu Huang; Hongwei Zhao; Chengli Shi; Boda Wu; Zunqiang Fan; Shunguang Wan; Chunyang Geng

    2012-01-01

    Research on material removal mechanism is meaningful for precision and ultra-precision manufacturing. In this paper, a novel scratch device was proposed by integrating the parasitic motion principle linear actuator. The device has a compact structure and it can be installed on the stage of the scanning electron microscope (SEM) to carry out in situ scratch testing. Effect of residual chips on the material removal process of the bulk metallic glass (BMG) was studied by in situ scratch testing ...

  11. Development of removal technology for volatile organic compounds (VOCs) using electron beam

    International Nuclear Information System (INIS)

    The Air Pollution Control Law was revised in May 2004 to reduce the emission of VOCs from factories. After the enforcement of the revised law, reduction of VOC emissions to the atmosphere will be required for existing and new factories and plants at which huge amounts of VOCs are utilized. In existing large-scale factories and plants, highly concentrated VOCs in ventilation gases, from a few % down to a few hundred ppmv, have already been treated by absorption on activated carbon, thermal incineration, catalytic oxidation, etc. Additional compact treatment systems are required for the purification of high flow-rate ventilation air mixtures containing dilute VOCs. Electron beam (EB) treatment is suitable for such a time-saving purification of ventilation air mixtures, because dilute VOCs can be quickly decomposed by EB-induced free radicals present at high concentrations. In our group, a purification process using EB irradiation has been developed based on the decomposition reactions and property changes of the organics. Aerosols and gaseous organics were produced from aromatic hydrocarbons in air by EB irradiation; the yields of aerosols and gaseous organics relative to decomposed chlorobenzene were 39-43% and 26-28%, respectively, at doses of 4-8 kGy. The filter was clogged during aerosol filtration because the sticky aerosols absorb gaseous water from the air mixtures. Collection of the aerosols, for example with an electrostatic precipitator after EB irradiation, is regarded as one possible purification treatment for aromatic hydrocarbon/air mixtures. Chloroethylenes, except for monochloroethylene, are decomposed into water-soluble gaseous primary products, such as chloroacetyl chlorides, carbonyl chloride and formyl chloride, through Cl-atom chain oxidation in air mixtures by EB irradiation. The hydrolysis of these gaseous products in irradiated air mixtures is prospective to be

  12. Comparative Analysis of Classifier Fusers

    Directory of Open Access Journals (Sweden)

    Marcin Zmyslony

    2012-06-01

    Full Text Available There are many methods of decision making by an ensemble of classifiers. The most popular are methods that have their origin in the voting method, where the decision of the common classifier is a combination of the individual classifiers’ outputs. This work presents a comparative analysis of some classifier fusion methods based on weighted voting of the classifiers’ responses and on combination of the classifiers’ discriminant functions. We discuss different methods of producing combined classifiers based on weights. We show that it is not possible to obtain a classifier better than an abstract model of a committee known as an Oracle if the fusion is based only on weighted voting, but models based on discriminant functions or on classifiers using feature values and class numbers could outperform the Oracle. Delivered conclusions are confirmed by the results of computer experiments carried out on benchmark and computer generated data.
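
    A minimal sketch contrasting weighted majority voting with the Oracle committee model discussed above; the synthetic labels, base-classifier accuracies and weights are illustrative assumptions:

      import numpy as np

      def weighted_vote(predictions, weights, n_classes):
          """predictions: (n_classifiers, n_samples) integer labels; returns fused labels."""
          n_samples = predictions.shape[1]
          scores = np.zeros((n_samples, n_classes))
          for preds, w in zip(predictions, weights):
              scores[np.arange(n_samples), preds] += w
          return scores.argmax(axis=1)

      def oracle_accuracy(predictions, y):
          """Oracle: counted as correct whenever at least one committee member is correct."""
          return np.mean(np.any(predictions == y, axis=0))

      rng = np.random.default_rng(1)
      y = rng.integers(0, 3, 500)                       # true labels, 3 classes
      # Three base classifiers with different individual accuracies
      accs = [0.65, 0.70, 0.75]
      preds = np.stack([np.where(rng.random(500) < a, y, rng.integers(0, 3, 500)) for a in accs])

      fused = weighted_vote(preds, weights=accs, n_classes=3)
      print("weighted vote accuracy:", np.mean(fused == y))
      print("oracle upper bound:    ", oracle_accuracy(preds, y))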

  13. Comparative Analysis of Classifier Fusers

    Directory of Open Access Journals (Sweden)

    Marcin Zmyslony

    2012-05-01

    Full Text Available There are many methods of decision making by an ensemble of classifiers. The most popular are methods that have their origin in the voting method, where the decision of the common classifier is a combination of the individual classifiers’ outputs. This work presents a comparative analysis of some classifier fusion methods based on weighted voting of the classifiers’ responses and on combination of the classifiers’ discriminant functions. We discuss different methods of producing combined classifiers based on weights. We show that it is not possible to obtain a classifier better than an abstract model of a committee known as an Oracle if the fusion is based only on weighted voting, but models based on discriminant functions or on classifiers using feature values and class numbers could outperform the Oracle. Delivered conclusions are confirmed by the results of computer experiments carried out on benchmark and computer generated data.

  14. Electronic structure calculations of mercury mobilization from mineral phases and photocatalytic removal from water and the atmosphere

    International Nuclear Information System (INIS)

    Mercury is a hazardous environmental pollutant mobilized from natural sources, and anthropogenically contaminated and disturbed areas. Current methods to assess mobility and environmental impact are mainly based on field measurements, soil monitoring, and kinetic modelling. In order to understand in detail the extent to which different mineral sources can give rise to mercury release it is necessary to investigate the complexity at the microscopic level and the possible degradation/dissolution processes. In this work, we investigated the potential for mobilization of mercury structurally trapped in three relevant minerals occurring in hot spring environments and mining areas, namely, cinnabar (α-HgS), corderoite (α-Hg3S2Cl2), and mercuric chloride (HgCl2). Quantum chemical methods based on density functional theory as well as more sophisticated approaches are used to assess the possibility of a) direct photoreduction and formation of elemental Hg at the surface of the minerals, providing a path for ready release in the environment; and b) reductive dissolution of the minerals in the presence of solutions containing halogens. Furthermore, we study the use of TiO2 as a potential photocatalyst for decontamination of polluted waters (mainly Hg2+-containing species) and air (atmospheric Hg0). Our results partially explain the observed pathways of Hg mobilization from relevant minerals and the microscopic mechanisms behind photocatalytic removal of Hg-based pollutants. Possible sources of disagreement with observations are discussed and further improvements to our approach are suggested. - Highlights: • Mercury mobilization pathways from three Hg bearing minerals were studied. • Their electronic properties were analysed using quantum mechanical modelling. • Cinnabar and corderoite are not photodegradable, but mercuric chloride is. • The trend is reversed for dissolution induced by the presence of halogen couples. • Photocatalytic removal of Hg from air and

  15. Electronic structure calculations of mercury mobilization from mineral phases and photocatalytic removal from water and the atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Da Pieve, Fabiana, E-mail: fabiana.dapieve@gmail.com [Université libre de Bruxelles (U.L.B.), Boulevard du Triomphe, CP 231, Campus Plaine, B-1050 Bruxelles (Belgium); Stankowski, Martin [LU Open Innovation Center, Lund University, Box 117, SE-221 00 Lund (Sweden); European Theoretical Spectroscopy Facility (ETSF) (Country Unknown); Hogan, Conor [European Theoretical Spectroscopy Facility (ETSF) (Country Unknown); Consiglio Nazionale delle Ricerche, Istituto di Struttura della Materia (CNR–ISM), University of Rome “Tor Vergata”, via Fosso del Cavaliere 100, 00133 Rome (Italy); Physics Department, University of Rome “Tor Vergata”, via Fosso del Cavaliere 100, 00133 Rome (Italy)

    2014-09-15

    Mercury is a hazardous environmental pollutant mobilized from natural sources, and anthropogenically contaminated and disturbed areas. Current methods to assess mobility and environmental impact are mainly based on field measurements, soil monitoring, and kinetic modelling. In order to understand in detail the extent to which different mineral sources can give rise to mercury release it is necessary to investigate the complexity at the microscopic level and the possible degradation/dissolution processes. In this work, we investigated the potential for mobilization of mercury structurally trapped in three relevant minerals occurring in hot spring environments and mining areas, namely, cinnabar (α-HgS), corderoite (α-Hg{sub 3}S{sub 2}Cl{sub 2}), and mercuric chloride (HgCl{sub 2}). Quantum chemical methods based on density functional theory as well as more sophisticated approaches are used to assess the possibility of a) direct photoreduction and formation of elemental Hg at the surface of the minerals, providing a path for ready release in the environment; and b) reductive dissolution of the minerals in the presence of solutions containing halogens. Furthermore, we study the use of TiO{sub 2} as a potential photocatalyst for decontamination of polluted waters (mainly Hg{sup 2+}-containing species) and air (atmospheric Hg{sup 0}). Our results partially explain the observed pathways of Hg mobilization from relevant minerals and the microscopic mechanisms behind photocatalytic removal of Hg-based pollutants. Possible sources of disagreement with observations are discussed and further improvements to our approach are suggested. - Highlights: • Mercury mobilization pathways from three Hg bearing minerals were studied. • Their electronic properties were analysed using quantum mechanical modelling. • Cinnabar and corderoite are not photodegradable, but mercuric chloride is. • The trend is reversed for dissolution induced by the presence of halogen couples.

  16. Feature Selection and Effective Classifiers.

    Science.gov (United States)

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri

    1998-01-01

    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, the upper classifiers perform better than the lower…

  17. Degradation and acute toxicity removal of the antidepressant Fluoxetine (Prozac(®)) in aqueous systems by electron beam irradiation.

    Science.gov (United States)

    Silva, Vanessa Honda Ogihara; Dos Santos Batista, Ana Paula; Silva Costa Teixeira, Antonio Carlos; Borrely, Sueli Ivone

    2016-06-01

    Electron beam irradiation (EBI) has been considered an advanced technology for the treatment of water and wastewater, whereas very few previous investigations reported its use for removing pharmaceutical pollutants. In this study, the degradation of fluoxetine (FLX), an antidepressant marketed as Prozac(®), was investigated by using EBI at FLX initial concentration of 19.4 ± 0.2 mg L(-1). More than 90 % FLX degradation was achieved at 0.5 kGy, with FLX below the detection limit (0.012 mg L(-1)) at doses higher than 2.5 kGy. The elucidation of organic byproducts performed using direct injection mass spectrometry, along with the results of ion chromatography, indicated hydroxylation of FLX molecules with release of fluoride and nitrate anions. Nevertheless, about 80 % of the total organic carbon concentration remained even for 7.5 kGy or higher doses. The decreases in acute toxicity achieved 86.8 and 9.6 % for Daphnia similis and Vibrio fischeri after EBI exposure at 5 kGy, respectively. These results suggest that EBI could be an alternative to eliminate FLX and to decrease residual toxicity from wastewater generated in pharmaceutical formulation facilities, although further investigation is needed for correlating the FLX degradation mechanism with the toxicity results. PMID:26961524

  18. Efficacy of various root canal irrigants on removal of smear layer in the primary root canals after hand instrumentation: A scanning electron microscopy study

    Directory of Open Access Journals (Sweden)

    Hariharan V

    2010-01-01

    Full Text Available Aim: The purpose of this in-vitro study is to determine the efficacy of various irrigants in removing the smear layer in primary teeth root canals after hand instrumentation. Materials and Methods: The present study consisted of 30 human primary incisors which were sectioned horizontally at the cementoenamel junction. The specimens were divided randomly into four experimental groups and one control group of six teeth each, and each group was treated with a specific irrigant. 5.25% NaOCl, 5.25% NaOCl + 10% EDTA, 6% citric acid, 2% chlorhexidine, and saline (control) were the irrigants evaluated for efficacy in removal of the smear layer. The specimens were split along the longitudinal axis using a chisel after placing superficial grooves in the cementum not extending to the root canal. The exposed surface was subjected to scanning electron microscopic analysis to reveal the efficacy of the irrigants in removal of the smear layer. The representative areas were evaluated twice, at a 15-day interval, by a single evaluator. The scale for smear layer removal by Rome et al. was modified and used in the present study. Results: The pictures from the scanning electron microscopy showed that, among the tested irrigants, citric acid had the best efficacy in removing the smear layer without altering the normal dentinal structures, which was supported by the lowest mean smear scores. The pictures from the 10% EDTA + 5.25% sodium hypochlorite group showed that even though it removed the smear layer, it adversely affected the dentine structure. SEM pictures of the other groups, sodium hypochlorite and chlorhexidine, revealed that these irrigants do not have the capacity to remove the smear layer in primary teeth. Conclusions: The results of the present study clearly indicate the superior efficacy of 6% citric acid over the other tested irrigants in removing the smear layer in primary teeth root canals.

  19. Subsurface Biogeochemical Heterogeneity (Field-scale removal of U(VI) from groundwater in an alluvial aquifer by electron donor amendment)

    Energy Technology Data Exchange (ETDEWEB)

    Long, Philip E.; Lovley, Derek R.; N' Guessan, A. L.; Nevin, Kelly; Resch, C. T.; Arntzen, Evan; Druhan, Jenny; Peacock, Aaron; Baldwin, Brett; Dayvault, Dick; Holmes, Dawn; Williams, Ken; Hubbard, Susan; Yabusaki, Steve; Fang, Yilin; White, D. C.; Komlos, John; Jaffe, Peter

    2006-06-01

    Determine if biostimulation of alluvial aquifers by electron donor amendment can effectively remove U(VI) from groundwater at the field scale. Uranium contamination in groundwater is a significant problem at several DOE sites. In this project, the possibility of accelerating bioreduction of U(VI) to U(IV) as a means of decreasing U(VI) concentrations in groundwater is directly addressed by conducting a series of field-scale experiments. Scientific goals include demonstrating the quantitative linkage between microbial activity and U loss from groundwater and relating the dominant terminal electron accepting processes to the rate of U loss. The project is currently focused on understanding the mechanisms for unexpected long-term (~2 years) removal of U after stopping electron donor amendment. Results obtained in the project successfully position DOE and others to apply biostimulation broadly to U contamination in alluvial aquifers.

  20. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we have discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when being preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, and had negative impact on CERN’s reputation --- i...

  1. Removal of SO2 and NOx from flue gas by means of a spray dryer/electron beam combination: a feasibility study

    International Nuclear Information System (INIS)

    This study examines the feasibility of adding an electron beam between the spray dryer and the fabric filter of dry scrubber flue gas desulfurization (FGD) systems. The beam promises effective removal of nitrogen oxides (NOx) and sulfur dioxide (SO2), even at higher coal-sulfur levels than usually economic for dry scrubbers. The beam excites gas molecules, promoting reactions that convert SO2 and NOx to acids that then react with calcium compounds and are removed by the filter. Concerns examined here are feasibility and waste disposal. The cost findings are promising for both manufacture and operation. The system uses commercially available components. The relatively low temperatures and high humidity downstream of the spray dryer favor economic beam operation. The beam removes SO2, so the dryer can be run for economy, not high removal. The beam's incidental heating effect reduces reheat cost. Safe landfilling of the nitrate-rich waste appears practical, with leachate carrying no more nitrate than natural rain and dustfall. We expect natural pozzolanic reactions between alumina-silica compounds in the fly ash and lime compounds from the spray dryer to form an impermeable concrete-like material within 10 days after landfilling. Dry scrubber with electron beam appears competitive with commercial FGD systems, and we recommend a pilot scale operation

  2. Self-recalibrating classifiers for intracortical brain-computer interfaces

    Science.gov (United States)

    Bishop, William; Chestek, Cynthia C.; Gilja, Vikash; Nuyujukian, Paul; Foster, Justin D.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-04-01

    Objective. Intracortical brain-computer interface (BCI) decoders are typically retrained daily to maintain stable performance. Self-recalibrating decoders aim to remove the burden this may present in the clinic by training themselves autonomously during normal use but have only been developed for continuous control. Here we address the problem for discrete decoding (classifiers). Approach. We recorded threshold crossings from 96-electrode arrays implanted in the motor cortex of two rhesus macaques performing center-out reaches in 7 directions over 41 and 36 separate days spanning 48 and 58 days in total for offline analysis. Main results. We show that for the purposes of developing a self-recalibrating classifier, tuning parameters can be considered as fixed within days and that parameters on the same electrode move up and down together between days. Further, drift is constrained across time, which is reflected in the performance of a standard classifier which does not progressively worsen if it is not retrained daily, though overall performance is reduced by more than 10% compared to a daily retrained classifier. Two novel self-recalibrating classifiers produce a ~15% increase in classification accuracy over that achieved by the non-retrained classifier to nearly recover the performance of the daily retrained classifier. Significance. We believe that the development of classifiers that require no daily retraining will accelerate the clinical translation of BCI systems. Future work should test these results in a closed-loop setting.
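
    The self-recalibration idea can be illustrated with a toy decoder that is never retrained on labelled data but instead updates its class means from its own confident predictions as tuning drifts between simulated days. This is only a sketch of the concept under simplified Gaussian assumptions, not the classifiers proposed in the paper.

```python
# Toy illustration of self-recalibration under slow drift (not the paper's method):
# a nearest-class-mean decoder updates its class means from its own confident
# predictions each simulated "day", instead of being retrained on labelled data.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features = 7, 96            # e.g. 7 reach directions, 96 channels

def make_day(means, n=200, noise=1.0):
    y = rng.integers(0, n_classes, n)
    X = means[y] + noise * rng.standard_normal((n, n_features))
    return X, y

def predict(X, means):
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)   # (n, n_classes)
    return d.argmin(1), d

true_means = 3.0 * rng.standard_normal((n_classes, n_features))
est_means = true_means.copy()            # initial calibration day

for day in range(1, 11):
    true_means += 0.3 * rng.standard_normal((n_classes, n_features))  # slow drift
    X, y = make_day(true_means)
    pred, dist = predict(X, est_means)
    acc = (pred == y).mean()
    # self-recalibration: update each class mean with confidently decoded trials
    two_best = np.partition(dist, 1, axis=1)
    gap = two_best[:, 1] - two_best[:, 0]
    confident = gap > np.median(gap)
    for c in range(n_classes):
        sel = confident & (pred == c)
        if sel.any():
            est_means[c] = 0.7 * est_means[c] + 0.3 * X[sel].mean(0)
    print(f"day {day:2d}: accuracy before recalibration {acc:.2f}")
```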

  3. Neutralization efficiency of H- beams based on measurements of one- and two-electron removal from H- in H- + Ar^q+ (q ≤ 8) collisions

    International Nuclear Information System (INIS)

    Employing the Giessen ion-ion crossed-beams facility, absolute cross sections have been measured for single- and double-electron removal from H- in energetic collisions with Ar^q+ (q ≤ 8) ions. The data are compared to CTMC calculations and are discussed with respect to conversion efficiencies of H- into H0 beams in plasma neutralizers proposed for efficient neutral beam heating of next generation fusion plasmas. (orig.)

  4. A comparative evaluation of smear layer removal by using edta, etidronic acid, and maleic acid as root canal irrigants: An in vitro scanning electron microscopic study

    OpenAIRE

    Aby Kuruvilla; Bharath Makonahalli Jaganath; Sahadev Chickmagaravalli Krishnegowda; Praveen Kumar Makonahalli Ramachandra; Dexton Antony Johns; Aby Abraham

    2015-01-01

    Aim: The purpose of this study is to evaluate and compare the efficacy of 17% EDTA, 18% etidronic acid, and 7% maleic acid in smear layer removal using scanning electron microscopic image analysis. Materials and Methods: Thirty, freshly extracted mandibular premolars were used. The teeth were decoronated to obtain working length of 17mm and instrumentation up to 40 size (K file) with 2.5% NaOCl irrigation between each file. The samples were divided into Groups I (17% ethylenediaminetetraa...

  5. A comparative evaluation of smear layer removal by using edta, etidronic acid, and maleic acid as root canal irrigants: An in vitro scanning electron microscopic study

    OpenAIRE

    Kuruvilla, Aby; Jaganath, Bharath Makonahalli; Krishnegowda, Sahadev Chickmagaravalli; Ramachandra, Praveen Kumar Makonahalli; Johns, Dexton Antony; Abraham, Aby

    2015-01-01

    Aim: The purpose of this study is to evaluate and compare the efficacy of 17% EDTA, 18% etidronic acid, and 7% maleic acid in smear layer removal using scanning electron microscopic image analysis. Materials and Methods: Thirty, freshly extracted mandibular premolars were used. The teeth were decoronated to obtain working length of 17mm and instrumentation up to 40 size (K file) with 2.5% NaOCl irrigation between each file. The samples were divided into Groups I (17% ethylenediaminetetraaceti...

  6. Al-Hadith Text Classifier

    OpenAIRE

    Mohammed Naji Al-Kabi; Ghassan Kanaan; Riyad Al-Shalabi; Saja I. Al- Sinjilawi; Ronza S. Al- Mustafa

    2005-01-01

    This study explores the implementation of a text classification method to classify the prophet Mohammed (PBUH) hadiths (sayings) using the Sahih Al-Bukhari classification. The sayings explain the Holy Qur`an, which is considered by Muslims to be the direct word of Allah. The present method adopts TF/IDF (Term Frequency-Inverse Document Frequency), which is usually used for text search. TF/IDF was used for term weighting, in which document weights for the selected terms are computed, to classify non-vocali...

  7. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
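
    A heavily simplified sketch of contextual 3-D classification in this spirit: a per-voxel Gaussian likelihood combined with a Potts-style bonus for agreeing with the 6 nearest 3-D neighbours, optimised with a few ICM sweeps. The model, parameters and synthetic volume below are illustrative assumptions, not the authors' formulation.

```python
# Simplified sketch of contextual 3-D classification (not the paper's exact model):
# Gaussian class-conditional likelihood per voxel plus a Potts-style prior over the
# 6-connected 3-D neighbourhood, optimised with a few ICM (iterated conditional
# modes) sweeps. All parameters below are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def icm_classify(features, means, covs, beta=0.8, sweeps=3):
    """features: (X, Y, Z, d) array; means/covs: per-class Gaussian parameters."""
    shape = features.shape[:3]
    # per-voxel log-likelihoods, shape (X, Y, Z, n_classes)
    loglik = np.stack([multivariate_normal(m, c).logpdf(features)
                       for m, c in zip(means, covs)], axis=-1)
    labels = loglik.argmax(-1)                       # non-contextual start
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for _ in range(sweeps):
        for x in range(shape[0]):
            for y in range(shape[1]):
                for z in range(shape[2]):
                    post = loglik[x, y, z].copy()
                    for dx, dy, dz in offsets:
                        nx, ny, nz = x+dx, y+dy, z+dz
                        if 0 <= nx < shape[0] and 0 <= ny < shape[1] and 0 <= nz < shape[2]:
                            post[labels[nx, ny, nz]] += beta   # neighbour agreement bonus
                    labels[x, y, z] = post.argmax()
    return labels

# Tiny synthetic volume: two classes with different mean feature vectors.
rng = np.random.default_rng(1)
truth = (rng.random((8, 8, 8)) > 0.5).astype(int)
feats = np.where(truth[..., None] == 1, 2.0, 0.0) + rng.standard_normal((8, 8, 8, 3))
labels = icm_classify(feats, means=[np.zeros(3), np.full(3, 2.0)],
                      covs=[np.eye(3), np.eye(3)])
print("agreement with truth:", (labels == truth).mean())
```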

  8. Effect of residual chips on the material removal process of the bulk metallic glass studied by in situ scratch testing inside the scanning electron microscope

    Directory of Open Access Journals (Sweden)

    Hu Huang

    2012-12-01

    Full Text Available Research on the material removal mechanism is meaningful for precision and ultra-precision manufacturing. In this paper, a novel scratch device was proposed, integrating a linear actuator based on the parasitic motion principle. The device has a compact structure and can be installed on the stage of a scanning electron microscope (SEM) to carry out in situ scratch testing. The effect of residual chips on the material removal process of a bulk metallic glass (BMG) was studied by in situ scratch testing inside the SEM. The whole removal process of the BMG during the scratch was captured in real time. Formation and growth of lamellar chips on the rake face of the Cube-Corner indenter were observed dynamically. Experimental results indicate that when many chips accumulate on the rake face of the indenter and obstruct the forward flow of material, the material flows laterally and downward to find a new location and direction for the formation of new chips. Owing to the similarity of the material removal processes, in situ scratch testing has the potential to become a powerful research tool for studying the material removal mechanisms of single point diamond turning, single grit grinding, mechanical polishing and grating fabrication.

  9. Effect of different final irrigating solutions on smear layer removal in apical third of root canal: A scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Sayesh Vemuri

    2016-01-01

    Full Text Available Aim: The aim of this in vitro study is to compare the smear layer removal efficacy of different irrigating solutions at the apical third of the root canal. Materials and Methods: Forty human single-rooted mandibular premolar teeth were taken and decoronated to standardize the canal length to 14 mm. They were prepared with the ProTaper rotary system to an apical preparation of file size F3. Prepared teeth were randomly divided into four groups (n = 10): saline (Group 1; negative control), ethylenediaminetetraacetic acid (Group 2), BioPure MTAD (Group 3), and QMix 2 in 1 (Group 4). After final irrigation with the tested irrigants, the teeth were split into two halves longitudinally and observed under a scanning electron microscope (SEM) for the removal of the smear layer. The SEM images were then analyzed for the amount of smear layer present using a three-score system. Statistical Analysis: Data were analyzed using the Kruskal-Wallis test and Mann-Whitney U-test. Results: Intergroup comparison showed a statistically significant difference in the smear layer removal efficacy of the irrigants tested. QMix 2 in 1 was most effective in removal of the smear layer when compared to the other tested irrigants. Conclusion: QMix 2 in 1 is the most effective final irrigating solution for smear layer removal.

  10. Effect of residual chips on the material removal process of the bulk metallic glass studied by in situ scratch testing inside the scanning electron microscope

    International Nuclear Information System (INIS)

    Research on the material removal mechanism is meaningful for precision and ultra-precision manufacturing. In this paper, a novel scratch device was proposed, integrating a linear actuator based on the parasitic motion principle. The device has a compact structure and can be installed on the stage of a scanning electron microscope (SEM) to carry out in situ scratch testing. The effect of residual chips on the material removal process of a bulk metallic glass (BMG) was studied by in situ scratch testing inside the SEM. The whole removal process of the BMG during the scratch was captured in real time. Formation and growth of lamellar chips on the rake face of the Cube-Corner indenter were observed dynamically. Experimental results indicate that when many chips accumulate on the rake face of the indenter and obstruct the forward flow of material, the material flows laterally and downward to find a new location and direction for the formation of new chips. Owing to the similarity of the material removal processes, in situ scratch testing has the potential to become a powerful research tool for studying the material removal mechanisms of single point diamond turning, single grit grinding, mechanical polishing and grating fabrication.

  11. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Roč. 1, č. 2 (2007), s. 101-105. ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords: boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  12. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Roč. 44, č. 12 (2014), s. 2253-2263. ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  13. Classifying unstructured text using structured training instances and ensemble classifiers

    OpenAIRE

    Lianos, Andreas; Yang, Yanyan

    2015-01-01

    Typical supervised classification techniques require training instances similar to the values that need to be classified. This research proposes a methodology that can utilize training instances found in a different format. The benefit of this approach is that it allows the use of traditional classification techniques, without the need to hand-tag training instances if the information exists in other data sources. The proposed approach is presented through a practical classification applicati...

  14. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows designing novel classifier models that are hardware-efficiently implementable. The…

  15. 75 FR 705 - Classified National Security Information

    Science.gov (United States)

    2010-01-05

    ... Executive Order 13526--Classified National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order ``Classified National Security Information'' Order of December 29, 2009... Executive Order 13526 of December 29, 2009 Classified National Security Information This order prescribes...

  16. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security......

  17. A comparative evaluation of smear layer removal by using edta, etidronic acid, and maleic acid as root canal irrigants: An in vitro scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Aby Kuruvilla

    2015-01-01

    Full Text Available Aim: The purpose of this study is to evaluate and compare the efficacy of 17% EDTA, 18% etidronic acid, and 7% maleic acid in smear layer removal using scanning electron microscopic image analysis. Materials and Methods: Thirty freshly extracted mandibular premolars were used. The teeth were decoronated to obtain a working length of 17 mm and instrumented up to size 40 (K file) with 2.5% NaOCl irrigation between each file. The samples were divided into Groups I (17% ethylenediaminetetraacetic acid (EDTA)), II (18% etidronic acid), and III (7% maleic acid), containing 10 samples each. Longitudinal sectioning of the samples was done. The samples were then observed under a scanning electron microscope (SEM) at the apical, middle, and coronal levels. The images were scored according to the criteria: 1. no smear layer, 2. moderate smear layer, and 3. heavy smear layer. Statistical Analysis: Data were analyzed statistically using Kruskal-Wallis analysis of variance (ANOVA) followed by the Mann-Whitney U test for individual comparisons. The level for significance was set at 0.05. Results: The present study showed that all three experimental irrigants removed the smear layer at the different tooth levels (coronal, middle, and apical). Final irrigation with 7% maleic acid is more efficient than 17% EDTA and 18% etidronic acid in the removal of the smear layer from the apical third of the root canal.
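
    The non-parametric analysis described above (Kruskal-Wallis ANOVA followed by pairwise Mann-Whitney U tests on ordinal smear scores) can be sketched as follows; the score values are made up for illustration and are not the study's data.

```python
# Minimal sketch of the non-parametric analysis described above, applied to
# made-up ordinal smear-layer scores (1 = none, 2 = moderate, 3 = heavy).
# The score values below are illustrative only, not the study's data.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

scores = {
    "17% EDTA":           [1, 2, 1, 2, 2, 1, 2, 2, 1, 2],
    "18% etidronic acid": [2, 2, 2, 3, 2, 2, 3, 2, 2, 2],
    "7% maleic acid":     [1, 1, 1, 2, 1, 1, 1, 2, 1, 1],
}

h, p = kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# pairwise follow-up comparisons (significance level 0.05, as in the study)
for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name_a} vs {name_b}: U = {u:.1f}, p = {p:.4f}")
```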

  18. Al-Hadith Text Classifier

    Directory of Open Access Journals (Sweden)

    Mohammed Naji Al-Kabi

    2005-01-01

    Full Text Available This study explores the implementation of a text classification method to classify the prophet Mohammed (PBUH) hadiths (sayings) using the Sahih Al-Bukhari classification. The sayings explain the Holy Qur`an, which is considered by Muslims to be the direct word of Allah. The present method adopts TF/IDF (Term Frequency-Inverse Document Frequency), which is usually used for text search. TF/IDF was used for term weighting, in which document weights for the selected terms are computed, to classify non-vocalized sayings, after their terms (keywords) have been transformed to the corresponding canonical form (i.e., roots), into one of eight Books (classes), according to the Al-Bukhari classification. A term would have a higher weight if it were a good descriptor for a particular book, i.e., it appears frequently in the book but is infrequent in the entire corpus.
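
    A generic sketch of TF/IDF-based document classification of the kind described above, using an English toy corpus and hypothetical class names; the study's reduction of Arabic terms to canonical roots is represented only by an identity placeholder.

```python
# Generic sketch of TF/IDF term weighting used for text classification.
# The corpus and class labels are toy illustrations; the study's Arabic root
# extraction is represented only by the identity `to_roots` placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

def to_roots(text):
    return text  # placeholder for reduction of terms to canonical roots

train_texts = ["rules of prayer and ablution",
               "fasting during the month of ramadan",
               "prayer times and the direction of prayer",
               "charity and the giving of alms"]
train_books = ["Prayer", "Fasting", "Prayer", "Charity"]   # hypothetical classes

vectorizer = TfidfVectorizer(preprocessor=to_roots, stop_words="english")
X_train = vectorizer.fit_transform(train_texts)

# assign a query to the class of its nearest training document (cosine distance)
clf = KNeighborsClassifier(n_neighbors=1, metric="cosine")
clf.fit(X_train, train_books)

query = ["the direction faced during prayer"]
print(clf.predict(vectorizer.transform(query)))   # expected: ['Prayer']
```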

  19. Classifying self-gravitating radiations

    CERN Document Server

    Kim, Hyeong-Chan

    2016-01-01

    We study static systems of self-gravitating radiations confined in a sphere by using numerical and analytic calculations. We classify and analyze the solutions systematically. Due to the scaling symmetry, any solution can be represented as a segment of a solution curve on a plane of two-dimensional scale-invariant variables. We find that a system can be conveniently parametrized by three parameters representing the solution curve, the scaling, and the system size, instead of the parameters defined at the outer boundary. The solution curves are classified into three types, representing regular solutions and conically singular solutions with, and without, an object which resembles an event horizon up to causal disconnectedness. For the last type, the behavior of a self-gravitating system is simple enough to allow analytic calculations.

  20. Comparative evaluation of 15% ethylenediamine tetra-acetic acid plus cetavlon and 5% chlorine dioxide in removal of smear layer: A scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Sandeep Singh

    2013-01-01

    Full Text Available Aims: The purpose of this study was to compare the efficacy of smear layer removal by 5% chlorine dioxide and 15% ethylenediaminetetraacetic acid plus Cetavlon (EDTAC) from human root canal dentin. Materials and Methods: Fifty single-rooted human mandibular anterior teeth were divided into two groups of 20 teeth each and a control group of 10 teeth. The root canals were prepared up to ProTaper F3 and initially irrigated with 2% sodium hypochlorite, followed by 1 min of irrigation with 15% EDTAC or 5% chlorine dioxide, respectively. The control group was irrigated with saline. The teeth were longitudinally split and observed under a scanning electron microscope (SEM, ×2000). Statistical Analysis Used: The statistical analysis was done using a general linear mixed model. Results: At the coronal third, no statistically significant difference was found between 15% EDTAC and 5% chlorine dioxide in removing the smear layer. In the middle and apical third regions, 15% EDTAC showed better smear layer removal ability than 5% chlorine dioxide. Conclusion: Final irrigation with 15% EDTAC is superior to 5% chlorine dioxide in removing the smear layer in the middle and apical thirds of radicular dentin.

  1. Clustering signatures classify directed networks

    Science.gov (United States)

    Ahnert, S. E.; Fink, T. M. A.

    2008-09-01

    We use a clustering signature, based on a recently introduced generalization of the clustering coefficient to directed networks, to analyze 16 directed real-world networks of five different types: social networks, genetic transcription networks, word adjacency networks, food webs, and electric circuits. We show that these five classes of networks are cleanly separated in the space of clustering signatures due to the statistical properties of their local neighborhoods, demonstrating the usefulness of clustering signatures as a classifier of directed networks.
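
    A rough illustration of fingerprinting directed networks by local-neighbourhood statistics: the sketch combines NetworkX's directed clustering coefficient with the 16-entry triad census. This is not the clustering signature defined in the paper, only a readily reproducible stand-in for the same idea.

```python
# Illustrative fingerprinting of directed networks by local-neighbourhood statistics.
# This is NOT the paper's clustering signature; it uses two related, readily
# available quantities: NetworkX's directed clustering coefficient and the
# 16-entry directed triad census.
import networkx as nx
import numpy as np

def local_structure_fingerprint(G: nx.DiGraph) -> np.ndarray:
    census = nx.triadic_census(G)                    # counts of the 16 directed triad types
    counts = np.array([census[k] for k in sorted(census)], dtype=float)
    fractions = counts / counts.sum() if counts.sum() else counts
    mean_clustering = float(np.mean(list(nx.clustering(G).values())))
    return np.append(fractions, mean_clustering)

# Two toy networks of different "type": a directed cycle vs. a dense random digraph.
cycle = nx.cycle_graph(20, create_using=nx.DiGraph)
random_net = nx.gnp_random_graph(20, 0.2, directed=True, seed=42)
for name, g in [("cycle", cycle), ("random", random_net)]:
    print(name, np.round(local_structure_fingerprint(g), 3))
```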

  2. Classifying Southern Hemisphere extratropical cyclones

    Science.gov (United States)

    Catto, Jennifer

    2015-04-01

    There is a wide variety of flavours of extratropical cyclones in the Southern Hemisphere, with differing structures and lifecycles. Previous studies have classified these manually using upper level flow features or satellite data. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first identified), has been used to objectively classify these cyclones in the Southern Hemisphere. This simple method is able to separate the cyclones into classes with quite different development mechanisms and lifecycle characteristics. Some of the classes seem to coincide with previous manual classifications on shorter timescales, showing their utility for climate model evaluation and climate change studies.
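
    The objective classification step can be sketched generically as clustering a feature matrix built from large-scale fields at genesis time; k-means and the synthetic features below are stand-ins, not the study's method or data.

```python
# Generic sketch of objectively classifying cyclones by clustering large-scale
# fields at genesis time. K-means and the random feature matrix are stand-ins;
# the study's own clustering method and field choices are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical feature matrix: one row per cyclone genesis event, columns standing
# in for large-scale fields (e.g. area-averaged vorticity, jet strength, moisture)
genesis_features = rng.standard_normal((500, 12))

X = StandardScaler().fit_transform(genesis_features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cyclones per class:", np.bincount(labels))
```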

  3. ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY

    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa

    2013-01-01

    Full Text Available The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario utilized (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process, based on the chi-squared test, to verify the existence of dependence between variables in the data model in order to identify the reasons which prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike other existing work, as well as adjustments in the case of limit values between two adjacent classes. Furthermore, variable weights are used in the calculation of a posteriori probabilities, computed with a mutual information function. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in the error rate was observed. The naïve Bayesian network presented a drop in error rate from twenty-five percent to five percent, considering the initial results of the classification process. In the hierarchical network, there was not only a drop of fifteen percent in the error rate, but the final result also came to zero.
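
    A compressed sketch of the kind of pipeline discussed above: chi-squared screening of the input variables followed by a naïve Bayes classifier, with mutual-information scores printed as a diagnostic. The dataset and the selection threshold are stand-ins; the paper's exact weighting of a posteriori probabilities is not reproduced.

```python
# Compressed sketch of the pipeline described above: chi-squared screening of the
# input variables followed by a naive Bayes classifier. The mutual-information
# scores are printed only as a diagnostic; the paper's exact a-posteriori
# weighting scheme is not reproduced here.
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)       # stand-in dataset with non-negative features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(SelectKBest(chi2, k=30), GaussianNB())
model.fit(X_tr, y_tr)
print("accuracy, chi2-selected variables:",
      round(accuracy_score(y_te, model.predict(X_te)), 3))

baseline = GaussianNB().fit(X_tr, y_tr)
print("accuracy, all variables:          ",
      round(accuracy_score(y_te, baseline.predict(X_te)), 3))

mi = mutual_info_classif(X_tr, y_tr, random_state=0)
print("top mutual-information scores:", sorted(mi, reverse=True)[:5])
```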

  4. Technical and economical aspects of technology of simultaneous SO2 and NOx removal from flue gases using electron beam

    International Nuclear Information System (INIS)

    The pilot-scale investigations carried out in Poland and other countries (Japan, Germany, U.S.A.) have created the background for the design and construction of full-scale installations for electron-beam flue gas purification. In this paper the design data for an industrial demonstration plant using the electron-beam flue gas treatment process are presented. A comparison of the cost of the e-b process with conventional FGD/SCR technology is also given. (author). 2 refs, 2 figs, 4 tabs

  5. Energy Efficient Removal of Volatile Organic Compounds (VOCs) and Organic Hazardous Air Pollutants (o-HAPs) from Industrial Waste Streams by Direct Electron Oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, A. L.

    2011-10-19

    This research program investigated and quantified the capability of direct electron beam destruction of volatile organic compounds and organic hazardous air pollutants in model industrial waste streams and calculated the energy savings that would be realized by the widespread adoption of the technology over traditional pollution control methods. Specifically, this research determined the electron beam dose required to remove 19 of the most important non-halogenated air pollutants from waste streams and constructed a technical and economic model for the implementation of the technology in key industries including petroleum refining, organic & solvent chemical production, food & beverage production, and forest & paper products manufacturing. Energy savings of 75 - 90% and greenhouse gas reductions of 66 - 95% were calculated for the target market segments.

  6. Whole toxicity removal for industrial and domestic effluents treated with electron beam radiation, evaluated with Vibrio fischeri, Daphnia similis and Poecilia reticulata

    International Nuclear Information System (INIS)

    Several studies have been performed at IPEN in order to apply ionizing radiation to the treatment of real, complex effluents from different sources. This paper shows the results of such an application to influents and effluents from the Suzano Wastewater Treatment Plant (Suzano WTP, SABESP), Sao Paulo. The purpose of the work was to evaluate the radiation technology from an ecotoxicological point of view. The evaluation was carried out on a toxicity basis and included three sampling sites: complex industrial effluents; domestic sewage mixed with the industrial discharge (GM); and the final secondary effluent. The test organisms for toxicity evaluation were the marine bacterium Vibrio fischeri, the microcrustacean Daphnia similis and the guppy Poecilia reticulata. The fish tests were applied only to the final secondary effluents. The results demonstrated the original acute toxicity levels as well as the efficiency of the electron beam in reducing them. An important acute toxicity removal was achieved: from 75% up to 95% with 50 kGy (UNA), 20 kGy (GM) and 5.0 kGy for the final effluent. The toxicity removal was a consequence of the decomposition of several organic solvents by radiation, and the acute toxicity reduction was about 95%. When the toxicity was evaluated with fish, the radiation efficiency reached 40% to 60%. The hypothesis tests showed a statistically significant removal under the studied conditions. No residual hydrogen peroxide was found after 5.0 kGy was applied to the final effluent. (author)

  7. Effectiveness of four different final irrigation activation techniques on smear layer removal in curved root canals : a scanning electron microscopy study.

    Directory of Open Access Journals (Sweden)

    Puneet Ahuja

    2014-02-01

    Full Text Available The aim of this study was to assess the efficacy of apical negative pressure (ANP), manual dynamic agitation (MDA), passive ultrasonic irrigation (PUI) and needle irrigation (NI) as final irrigation activation techniques for smear layer removal in curved root canals. Mesiobuccal root canals of 80 freshly extracted maxillary first molars with curvatures ranging between 25° and 35° were used. A glide path with #08-15 K files was established before cleaning and shaping with Mtwo rotary instruments (VDW, Munich, Germany) up to size 35/0.04 taper. During instrumentation, 1 ml of 2.5% NaOCl was used at each change of file. Samples were divided into 4 equal groups (n=20) according to the final irrigation activation technique: group 1, apical negative pressure (ANP, EndoVac); group 2, manual dynamic agitation (MDA); group 3, passive ultrasonic irrigation (PUI); and group 4, needle irrigation (NI). Root canals were split longitudinally and subjected to scanning electron microscopy. The presence of the smear layer at the coronal, middle and apical levels was evaluated by superimposing a 300-μm square grid over the obtained photomicrographs, using a four-score scale at ×1,000 magnification. Amongst all the groups tested, ANP showed the best overall smear layer removal efficacy (p < 0.05). Removal of the smear layer was least effective with the NI technique. The ANP (EndoVac) system can be used as the final irrigation activation technique for effective smear layer removal in curved root canals.

  8. Waste classifying and separation device

    International Nuclear Information System (INIS)

    Flexible plastic bags containing solid wastes of indefinite shape are broken open and the wastes are classified. The bag-cutting portion of the device has an ultrasonic-type or heater-type cutting means, which moves in parallel with the transfer direction of the plastic bags. A classification portion separates the plastic bag from its contents and conducts classification while rotating a classification table. Accordingly, plastic bags containing solids of indefinite shape can be broken open and classification can be conducted efficiently and reliably. The device of the present invention has a simple structure which requires little installation space and enables easy maintenance. (T.M.)

  9. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam;

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged in...... large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample...... cluster actors according to a number of key background characteristics and second assess how the categories of the different interest group typologies relate to these clusters. We demonstrate that background characteristics do align to a certain extent with certain interest group types but also find...

  10. Analytical methods and monitoring system for industrial plant for electron beam simultaneous SO2 and NOx removal from flue gases

    International Nuclear Information System (INIS)

    Reliable and precise measurements of gas parameters at different points of the industrial plant are necessary for its proper operation and control. Natural flue gases are present only at the inlet. At other points of the plant, gas parameters are strongly modified by the process control system. The principal role of the process monitoring system is to provide the Computer System for Monitoring and Control and the operator with instantaneous values of alarm states and media consumption, and with continuous recording and control of process parameters. The structure of the process control system is based on algorithms describing the functional dependence of SO2 and NOx removal efficiencies. The best available techniques should be used for measurements of flue gas parameters at critical points of the installation. (author)

  11. Scanning electron microscopy analysis of the growth of dental plaque on the surfaces of removable orthodontic aligners after the use of different cleaning methods

    Directory of Open Access Journals (Sweden)

    Levrini L

    2015-12-01

    Full Text Available Luca Levrini, Francesca Novara, Silvia Margherini, Camilla Tenconi, Mario Raspanti Department of Surgical and Morphological Sciences, Dental Hygiene School, Research Centre Cranio Facial Disease and Medicine, University of Insubria, Varese, Italy Background: Advances in orthodontics are leading to the use of minimally invasive technologies, such as transparent removable aligners, which are able to meet high demands in terms of performance and esthetics. However, the most correct method of cleaning these appliances, in order to minimize the effects of microbial colonization, remains to be determined. Purpose: The aim of the present study was to identify the most effective method of cleaning removable orthodontic aligners, analyzing the growth of dental plaque as observed under scanning electron microscopy. Methods: Twelve subjects were selected for the study. All were free from caries and periodontal disease and were candidates for orthodontic therapy with invisible orthodontic aligners. The trial had a duration of 6 weeks, divided into three 2-week stages, during which three sets of aligners were used. In each stage, the subjects were asked to use a different method of cleaning their aligners: 1) running water (control condition); 2) effervescent tablets containing sodium carbonate and sulfate crystals, followed by brushing with a toothbrush; and 3) brushing alone (with a toothbrush and toothpaste). At the end of each 2-week stage, the surfaces of the aligners were analyzed under scanning electron microscopy. Results: The best results were obtained with brushing combined with the use of sodium carbonate and sulfate crystals; brushing alone gave slightly inferior results. Conclusion: On the basis of previous literature results relating to devices in resin, studies evaluating the reliability of ultrasonic baths for domestic use should be encouraged. At present, pending the availability of experimental evidence, it can be suggested that dental

  12. Electronic structure calculations of mercury mobilization from mineral phases and photocatalytic removal from water and the atmosphere.

    Science.gov (United States)

    Da Pieve, Fabiana; Stankowski, Martin; Hogan, Conor

    2014-09-15

    Mercury is a hazardous environmental pollutant mobilized from natural sources, and anthropogenically contaminated and disturbed areas. Current methods to assess mobility and environmental impact are mainly based on field measurements, soil monitoring, and kinetic modelling. In order to understand in detail the extent to which different mineral sources can give rise to mercury release it is necessary to investigate the complexity at the microscopic level and the possible degradation/dissolution processes. In this work, we investigated the potential for mobilization of mercury structurally trapped in three relevant minerals occurring in hot spring environments and mining areas, namely, cinnabar (α-HgS), corderoite (α-Hg3S2Cl2), and mercuric chloride (HgCl2). Quantum chemical methods based on density functional theory as well as more sophisticated approaches are used to assess the possibility of a) direct photoreduction and formation of elemental Hg at the surface of the minerals, providing a path for ready release in the environment; and b) reductive dissolution of the minerals in the presence of solutions containing halogens. Furthermore, we study the use of TiO2 as a potential photocatalyst for decontamination of polluted waters (mainly Hg(2+)-containing species) and air (atmospheric Hg(0)). Our results partially explain the observed pathways of Hg mobilization from relevant minerals and the microscopic mechanisms behind photocatalytic removal of Hg-based pollutants. Possible sources of disagreement with observations are discussed and further improvements to our approach are suggested. PMID:24982025

  13. Hybrid k-Nearest Neighbor Classifier.

    Science.gov (United States)

    Yu, Zhiwen; Chen, Hantao; Liu, Jiming; You, Jane; Leung, Hareton; Han, Guoqiang

    2016-06-01

    Conventional k-nearest neighbor (KNN) classification approaches have several limitations when dealing with problems caused by special datasets, such as the sparse problem, the imbalance problem, and the noise problem. In this paper, we first perform a brief survey on the recent progress of KNN classification approaches. Then, the hybrid KNN (HBKNN) classification approach, which takes into account the local and global information of the query sample, is designed to address the problems raised by the special datasets. Next, the random subspace ensemble framework based on the HBKNN (RS-HBKNN) classifier is proposed to perform classification on datasets with noisy attributes in a high-dimensional space. Finally, nonparametric tests are adopted to compare the proposed method with other classification approaches over multiple datasets. The experiments on real-world datasets from the Knowledge Extraction based on Evolutionary Learning dataset repository demonstrate that RS-HBKNN works well on real datasets, and outperforms most of the state-of-the-art classification approaches. PMID:26126291
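
    The random subspace part of the approach can be sketched with off-the-shelf components: an ensemble of KNN base learners, each trained on a random subset of the attributes. This is a generic stand-in assembled from scikit-learn parts, not the RS-HBKNN algorithm itself.

```python
# Sketch of a random-subspace ensemble of KNN base classifiers. This is a generic
# stand-in built from scikit-learn parts, not the paper's RS-HBKNN algorithm.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

single_knn = KNeighborsClassifier(n_neighbors=5)
subspace_knn = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5),
    n_estimators=30,
    max_features=0.5,     # each member sees a random half of the attributes
    bootstrap=False,      # keep all samples; only the feature subsets are randomised
    random_state=0,
)

for name, clf in [("single KNN", single_knn), ("random-subspace KNN ensemble", subspace_knn)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```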

  14. Use of information barriers to protect classified information

    International Nuclear Information System (INIS)

    This paper discusses the detailed requirements for an information barrier (IB) for use with verification systems that employ intrusive measurement technologies. The IB would protect classified information in a bilateral or multilateral inspection of classified fissile material. Such a barrier must strike a balance between providing the inspecting party the confidence necessary to accept the measurement while protecting the inspected party's classified information. The authors discuss the structure required of an IB as well as the implications of the IB on detector system maintenance. A defense-in-depth approach is proposed which would provide assurance to the inspected party that all sensitive information is protected and to the inspecting party that the measurements are being performed as expected. The barrier could include elements of physical protection (such as locks, surveillance systems, and tamper indicators), hardening of key hardware components, assurance of capabilities and limitations of hardware and software systems, administrative controls, validation and verification of the systems, and error detection and resolution. Finally, an unclassified interface could be used to display and, possibly, record measurement results. The introduction of an IB into an analysis system may result in many otherwise innocuous components (detectors, analyzers, etc.) becoming classified and unavailable for routine maintenance by uncleared personnel. System maintenance and updating will be significantly simplified if the classification status of as many components as possible can be made reversible (i.e. the component can become unclassified following the removal of classified objects)

  15. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  16. Electronic removal of encrustations inside the Steinheim cranium reveals paranasal sinus features and deformations, and provides a revised endocranial volume estimate.

    Science.gov (United States)

    Prossinger, Hermann; Seidler, Horst; Wicke, Lothar; Weaver, Dave; Recheis, Wolfgang; Stringer, Chris; Müller, Gerd B

    2003-07-01

    Features in the endocranium, as revealed by computed tomography (CT) scans of largely complete mid-Pleistocene crania, have helped elucidate unexpected affinities in the genus Homo. Because of its extensive encrustations and deformations, it has been difficult to repeat such analyses with the Steinheim cranium. Here, we present several advances in the analysis of this Homo heidelbergensis cranium by applying filter algorithms and image editing techniques to its CT scan. First, we show how the encrustations have been removed electronically, revealing interesting peculiarities, particularly the many directions of the deformations. Second, we point out similarities and differences between the frontal and sphenoidal sinuses of the Steinheim, Petralona, and Broken Hill (Kabwe) crania. Third, we assess the extent of the endocranial deformations and, fourth, their implications for our estimation of the braincase volume. PMID:12833273

  17. Effectiveness of different irrigation techniques on smear layer removal in apical thirds of mesial root canals of permanent mandibular first molar: A scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Pranav Khaord

    2015-01-01

    Full Text Available Aim: The aim of this study was to compare smear layer removal after final irrigant activation with sonic irrigation (SI, manual dynamic agitation (MDA, passive ultrasonic irrigation (PUI, and conventional syringe irrigation (CI. Materials and Methods: Forty mesial canals of mandibular first molars (mesial roots were cleaned and shaped by using ProTaper system to size F1 and sodium hypochlorite 3% and 17% ethylenediaminetetraacetic acid. The specimens were divided into 4 equal groups (n = 10 according to the final irrigation activation technique: Group 1, PUI; group 2, manual dynamic activation (MDA; group 3, SI; and group 4, control group (simple irrigation. Samples were split longitudinally and examined under scanning electron microscope for smear layer presence. Results: Control groups had the highest smear scores, which showed the statistically significant highest mean score at P < 0.05. This was followed by ultrasonic, MDA, and finally sonic, with no significant differences between them. Conclusions: Final irrigant activation with sonic and MDA resulted in the better removal of the smear layer than with CI.

  18. A comparative evaluation of different irrigation activation systems on smear layer removal from root canal: An in-vitro scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Nishi Singh

    2014-01-01

    Full Text Available Aim: The aim of the following study is to compare the evaluation of different irrigation activation system-F-File, CanalBrush (CB and EndoActivator (EA in removing smear layer from root canal. Materials and Methods: Root canals of eighty single rooted decoronated premolar teeth were instrumented using crown-down technique and then equally divided into four groups on basis of irrigation activation methods used: Without irrigation - control group, irrigation with F-File, CB, EA into Group I, II, III respectively. Samples were then longitudinally sectioned and examined under scanning electron microscope by three qualified observers using score from 1 to 4. Data was analyzed using Statistical Package for Social Sciences (SPSS, version 15.0 (SPSS Inc., Chicago IL at significance level of P ≤ 0.05. Results: Minimum mean score was observed in Group II at coronal, apical locations. Group III had minimum score at middle third. Groups difference in score were found to be significant statistically for all three locations as well as for overall assessment (P < 0.001. Conclusion: CB remove smear layer more efficiently from the root canal than F-File and EA in coronal and apical region.

  19. Energy-efficient neuromorphic classifiers

    OpenAIRE

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2015-01-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumptions, comparable to those of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering el...

  20. Entropic One-Class Classifiers.

    Science.gov (United States)

    Livi, Lorenzo; Sadeghian, Alireza; Pedrycz, Witold

    2015-12-01

    The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed nontarget, and therefore, they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity representation-based approach; we embed the input data into the dissimilarity space (DS) by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented by weighted Euclidean graphs, which we use to determine the entropy of the data distribution in the DS and at the same time to derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking data sets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique. PMID:25879977
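
    A bare-bones illustration of one-class classification in a dissimilarity space: patterns are embedded as vectors of distances to a handful of target prototypes, and a test pattern is accepted when its distance profile stays below a threshold learned from the target class alone. This is a deliberate simplification, not the entropic, graph-based system of the paper.

```python
# Bare-bones illustration of one-class classification in a dissimilarity space.
# Patterns are represented by their distances to a few target-class prototypes,
# and a test pattern is accepted if its average distance falls below a threshold
# learned from the target class. This is a simplification, not the entropic,
# graph-based classifier described in the paper.
import numpy as np

rng = np.random.default_rng(3)
target_train = rng.normal(loc=0.0, scale=1.0, size=(200, 5))   # target class only

prototypes = target_train[rng.choice(len(target_train), size=10, replace=False)]

def dissimilarity_embedding(X):
    # pairwise Euclidean distances to the prototypes -> (n_samples, n_prototypes)
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=-1)

train_embed = dissimilarity_embedding(target_train)
threshold = np.quantile(train_embed.mean(axis=1), 0.95)   # accept ~95% of targets

def is_target(X):
    return dissimilarity_embedding(X).mean(axis=1) <= threshold

targets = rng.normal(0.0, 1.0, size=(100, 5))
outliers = rng.normal(4.0, 1.0, size=(100, 5))
print("target acceptance rate :", is_target(targets).mean())
print("outlier rejection rate :", 1 - is_target(outliers).mean())
```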

  1. Injector for CESAR (2 MeV electron storage ring): 2-beam, 2 MV van de Graaff generator; tank removed.

    CERN Multimedia

    1968-01-01

    The van de Graaff generator in its tank. For voltage-holding, the tank was filled with pressurized extra-dry nitrogen. Two beams emanated from two separate electron guns. The left beam, for injection into the CESAR ring, was pulsed at 50 Hz, with currents of up to 1 A for 400 ns. The right beam was sent to a spectrometer line. Its pulse length was also 400 ns, but the pulse current was 12 microA, at a rate variable from 50 kHz to 1 MHz. This allowed stabilization of the top-terminal voltage to an unprecedented stability of ±100 V, i.e. 6E-5. Although built for a nominal voltage of 2 MV, the operational voltage was limited to 1.75 MV in order to minimize voltage breakdown events. CESAR was terminated at the end of 1967 and dismantled in 1968. R. Nettleton (left) and H. Burridge (right) are preparing the van de Graaff for shipment to the University of Swansea.

  2. Improving hole injection and carrier distribution in InGaN light-emitting diodes by removing the electron blocking layer and including a unique last quantum barrier

    International Nuclear Information System (INIS)

    The effects of removing the AlGaN electron blocking layer (EBL), and using a last quantum barrier (LQB) with a unique design in conventional blue InGaN light-emitting diodes (LEDs), were investigated through simulations. Compared with the conventional LED design that contained a GaN LQB and an AlGaN EBL, the LED that contained an AlGaN LQB with a graded-composition and no EBL exhibited enhanced optical performance and less efficiency droop. This effect was caused by an enhanced electron confinement and hole injection efficiency. Furthermore, when the AlGaN LQB was replaced with a triangular graded-composition, the performance improved further and the efficiency droop was lowered. The simulation results indicated that the enhanced hole injection efficiency and uniform distribution of carriers observed in the quantum wells were caused by the smoothing and thinning of the potential barrier for the holes. This allowed a greater number of holes to tunnel into the quantum wells from the p-type regions in the proposed LED structure

  3. The pilot plant experiment of electron beam irradiation process for removal of NOx and SOx from sinter plant exhaust gas in the iron and steel industry

    International Nuclear Information System (INIS)

    Air pollution problem has become more important in the progress of industry. Nitrogen oxides (NOx, mostly NO) and sulfur oxides (SOx, mostly SO2) which are contained in a sinter plant exhaust gas, are known as serious air pollutants. In such circumstances, an attempt has been made to simultaneously remove NOx and SOx from the sinter plant exhaust gas by means of a new electron beam irradiation process. The process consists of adding a small amount of NH3 to the exhaust gas, irradiating the gas by electron beam, forming ammonium salts by reactions of NOx and SOx with the NH3 and collecting ammonium salts by dry electrostatic precipitator (E.P.). Basic research on the present process had been performed using heavy oil combustion gas. Based on the results research was launched to study the applicability of the process to the treatment of sinter plant exhaust gas. A pilot plant, capable of treating a gas flow of 3000 Nm3/H was set up, and experiments were performed from July 1977 to June 1978. The plant is described and the results are presented. (author)

  4. Rotary fluidized dryer classifier for coal

    Energy Technology Data Exchange (ETDEWEB)

    Sakaba, M.; Ueki, S.; Matsumoto, T.

    1985-01-01

    The development of equipment is reported which uses a heat transfer medium and hot air to dry metallurgical coal to a predetermined moisture level, and which simultaneously classifies the dust-producing fine coal content. The integral construction of the drying and classifying zones results in a very compact configuration, with an installation area of 1/2 to 1/3 of that required for systems in which a separate dryer and classifier are combined. 6 references.

  5. Electrons

    International Nuclear Information System (INIS)

    Fast electrons are used to produce isotopes for studying the cooper metabolism: Cu-64 in a cyclotron and Cu-67 in a linear accelerator. Localized electrons are responsible for the chemical and physiological characteristics of the trace elements. Studied are I, Cu, Co, Zn, Mo, Mn, Fe, Se, Mg. The Cu/Mo and Cu/Zn interactions are investigated. The levels of molybdenum, sulfate and zinc in the food are analysed. The role of the electrons in free radicals is discussed. The protection action of peroxidases and super oxidases against electron dangerous effect on normal physiology is also considered. Calculation of radiation damage and radiation protection is made. (author)

  6. Serefind: A Social Networking Website for Classifieds

    OpenAIRE

    Verma, Pramod

    2014-01-01

    This paper presents the design and implementation of a social networking website for classifieds, called Serefind. We designed search interfaces with focus on security, privacy, usability, design, ranking, and communications. We deployed this site at the Johns Hopkins University, and the results show it can be used as a self-sustaining classifieds site for public or private communities.

  7. A review of learning vector quantization classifiers

    CERN Document Server

    Nova, David

    2015-01-01

    In this work we present a review of the state of the art of Learning Vector Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.

  8. A fuzzy classifier system for process control

    Science.gov (United States)

    Karr, C. L.; Phillips, J. C.

    1994-01-01

    A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in the traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule based-systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.

  9. 32 CFR 775.5 - Classified actions.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Classified actions. 775.5 Section 775.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a...

  10. Adaboost Ensemble Classifiers for Corporate Default Prediction

    Directory of Open Access Journals (Sweden)

    Suresh Ramakrishnan

    2015-01-01

    Full Text Available This study aims to present an alternative technique for corporate default prediction. Data mining techniques have been extensively applied to this task because of their ability to detect non-linear relationships and to perform well in the presence of noisy information, as usually happens in corporate default prediction problems. Although several advanced methods have been widely proposed, this area of research is not outdated and still needs further examination. In this study, the performance of multiple classifier systems is assessed in terms of their capability to correctly classify default and non-default Malaysian firms listed in Bursa Malaysia. Multi-stage combination classifiers provided significant improvements over the single classifiers. In addition, AdaBoost shows an improvement in performance over the single classifiers.
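
    As a rough illustration of the boosting idea discussed above (not the study's data, features or evaluation protocol), the following sketch compares a single decision tree with an AdaBoost ensemble on a synthetic, class-imbalanced "default / non-default" problem; every dataset parameter below is invented.

```python
# Hedged sketch: AdaBoost vs. a single tree on a synthetic imbalanced problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a default-prediction dataset (about 20% "default" class).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

single = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
boosted = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

for name, model in [("single tree", single), ("AdaBoost", boosted)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:12s} test AUC = {auc:.3f}")
```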

  11. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  12. Designing Kernel Scheme for Classifiers Fusion

    CERN Document Server

    Haghighi, Mehdi Salkhordeh; Vahedian, Abedin; Modaghegh, Hamed

    2009-01-01

    In this paper, we propose a special fusion method for combining ensembles of base classifiers, utilizing new neural networks in order to improve the overall efficiency of classification. Whereas ensembles are usually designed such that each classifier is trained independently and the decision fusion is performed as a final procedure, in this method we are interested in making the fusion process more adaptive and efficient. This new combiner, called Neural Network Kernel Least Mean Square, attempts to fuse the outputs of the ensembles of classifiers. The proposed neural network has some special properties such as kernel abilities, least mean square features, easy learning over variants of patterns and traditional neuron capabilities. Neural Network Kernel Least Mean Square is a special neuron which is trained with kernel least mean square properties. This new neuron is used as a classifier combiner to fuse the outputs of base neural network classifiers. Performance of this method is analyzed and compared with other fusion m...

  13. Tick Removal

    Science.gov (United States)

    ... If ... a tick quite effectively. How to remove a tick: Use fine-tipped tweezers to grasp the tick ...

  14. Molecular analysis by electron microscopy of the removal of psoralen-photoinduced DNA cross-links in normal and Fanconi's anemia fibroblasts

    International Nuclear Information System (INIS)

    The induction and fate of psoralen-photoinduced DNA interstrand cross-links in the genome of Fanconi's anemia (FA) fibroblasts of complementation groups A and B, and of normal human fibroblasts, were investigated by quantitative analysis of totally denatured DNA fragments visualized by electron microscopy. 8-Methoxypsoralen (5 x 10(-5) M) interstrand cross-links were induced as a function of the near ultraviolet light dose. With time of postexposure incubation, a fraction of interstrand cross-links disappeared in all cell lines. However, 24 h after treatment, this removal was significantly lower in the two FA group A cell lines examined (34-39%) than in the FA group B and normal cell lines (43-53 and 47-57%, respectively). These data indicate that FA cells are at least able to recognize and incise interstrand cross-links, as normal cells do, although group A cells seem somewhat hampered in this process. This is in accord with data obtained on the same cell lines using another biochemical assay. Since the fate of cross-links in FA constituted a controversial matter, it is important to stress that two different methodologies applied to genetically well defined cell lines led to the same conclusions

  15. Parallelism and programming in classifier systems

    CERN Document Server

    Forrest, Stephanie

    1990-01-01

    Parallelism and Programming in Classifier Systems deals with the computational properties of the underlying parallel machine, including computational completeness, programming and representation techniques, and efficiency of algorithms. In particular, efficient classifier system implementations of symbolic data structures and reasoning procedures are presented and analyzed in detail. The book shows how classifier systems can be used to implement a set of useful operations for the classification of knowledge in semantic networks. A subset of the KL-ONE language was chosen to demonstrate these operations.

  16. Classifier Risk Estimation under Limited Labeling Resources

    OpenAIRE

    Kumar, Anurag; Raj, Bhiksha

    2016-01-01

    In this paper we propose strategies for estimating performance of a classifier when labels cannot be obtained for the whole test set. The number of test instances which can be labeled is very small compared to the whole test data size. The goal then is to obtain a precise estimate of classifier performance using as little labeling resource as possible. Specifically, we try to answer, how to select a subset of the large test set for labeling such that the performance of a classifier estimated ...

  17. Dengue—How Best to Classify It

    OpenAIRE

    Srikiatkhachorn, Anon; Rothman, Alan L.; Robert V Gibbons; Sittisombut, Nopporn; Malasit, Prida; Ennis, Francis A.; Nimmannitya, Suchitra; Kalayanarooj, Siripen

    2011-01-01

    Since the 1970s, dengue has been classified as dengue fever and dengue hemorrhagic fever. In 2009, the World Health Organization issued a new, severity-based clinical classification which differs greatly from the previous classification.

  18. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; safayani, Meharn

    2010-01-01

    The decision boundaries of Bayes classifier are optimal because they lead to maximum probability of correct decision. It means if we knew the prior probabilities and the class-conditional densities, we could design a classifier which gives the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as window width. Moreover, these methods suffer from curse of dimensionality of the feature space and small sample size problem which severely restricts their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and classifier parameters based on local information. The proposed method can classify the data with co...

  19. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  20. Electronics

    International Nuclear Information System (INIS)

    Some of the electronic equipment used in pulse counting and mean current radiation detection systems is described. This includes the high voltage supply, amplifier, amplitude discriminator, scalers or counters, ratemeters, single-channel pulse height analyser, multi-channel pulse height analyser, d.c. amplifiers, coincidence and anticoincidence units and gain stabilisers

  1. Classifying Genomic Sequences by Sequence Feature Analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Liu; Dian Jiao; Xiao Sun

    2005-01-01

    Traditional sequence analysis depends on sequence alignment. In this study, we analyzed various functional regions of the human genome based on sequence features, including word frequency, dinucleotide relative abundance, and base-base correlation. We analyzed human chromosome 22 and classified the upstream, exon, intron, downstream, and intergenic regions by principal component analysis and discriminant analysis of these features. The results show that we could classify the functional regions of the genome based on sequence features and discriminant analysis.
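
    A minimal sketch of this kind of alignment-free pipeline is shown below: sequences are represented by k-mer (word) frequencies, and the region classes are separated with principal component analysis followed by a linear discriminant. The sequences, the GC bias used to generate them and the "exon"/"intron" labels are synthetic placeholders, not the chromosome 22 data.

```python
# Hedged sketch: k-mer frequency features + PCA + linear discriminant analysis.
from itertools import product
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

K = 2
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_freqs(seq, k=K):
    # Relative frequency of each k-mer (word) in the sequence.
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in counts:
            counts[word] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[w] / total for w in KMERS])

rng = np.random.default_rng(1)
def random_seq(gc_bias):
    # Synthetic sequence with a controllable GC content (A, C, G, T order).
    probs = [(1 - gc_bias) / 2, gc_bias / 2, gc_bias / 2, (1 - gc_bias) / 2]
    return "".join(rng.choice(list("ACGT"), size=500, p=probs))

seqs = [random_seq(0.6) for _ in range(50)] + [random_seq(0.4) for _ in range(50)]
labels = ["exon"] * 50 + ["intron"] * 50            # hypothetical region labels

X = np.array([kmer_freqs(s) for s in seqs])
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis()).fit(X, labels)
print("training accuracy:", model.score(X, labels))
```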

  2. Quality Classifiers for Open Source Software Repositories

    OpenAIRE

    Tsatsaronis, George; Halkidi, Maria; Giakoumakis, Emmanouel A.

    2009-01-01

    Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. The OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such data repositories. The classifiers learn to predict the successful continuation of an OSS project. The `successfulness' of projects is defined in terms of th...

  3. Searching and Classifying non-textual information

    OpenAIRE

    Arentz, Will Archer

    2004-01-01

    This dissertation contains a set of contributions that deal with search or classification of non-textual information. Each contribution can be considered a solution to a specific problem, in an attempt to map out a common ground. The problems cover a wide range of research fields, including search in music, classifying digitally sampled music, visualization and navigation in search results, and classifying images and Internet sites. On classification of digitally sampled music, a method for ex...

  4. Nomograms for Visualization of Naive Bayesian Classifier

    OpenAIRE

    Možina, Martin; Demšar, Janez; Michael W Kattan; Zupan, Blaz

    2004-01-01

    Besides good predictive performance, the naive Bayesian classifier can also offer a valuable insight into the structure of the training data and effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute value...

  5. Probabilistic classifiers with high-dimensional data

    OpenAIRE

    Kim, Kyung In; Simon, Richard

    2010-01-01

    For medical classification problems, it is often desirable to have a probability associated with each class. Probabilistic classifiers have received relatively little attention for small n large p classification problems despite their importance in medical decision making. In this paper, we introduce two criteria for the assessment of probabilistic classifiers, well-calibratedness and refinement, and develop corresponding evaluation measures. We evaluated several published high-dimensional probab...

  6. Classifier Aggregation Using Local Classification Confidence

    Czech Academy of Sciences Publication Activity Database

    Štefka, David; Holeňa, Martin

    Setúbal: INSTICC, 2009, s. 173-178. ISBN 978-989-8111-66-1. [ICAART 2009. International Conference on Agents and Artificial Intelligence /1./. Porto (PT), 19.01.2009-21.01.2009] R&D Projects: GA AV ČR 1ET100300517 Institutional research plan: CEZ:AV0Z10300504 Keywords : classifier aggregation * classifier combining * classification confidence Subject RIV: IN - Informatics, Computer Science

  7. Evolving Classifiers: Methods for Incremental Learning

    OpenAIRE

    Hulley, Greg; Marwala, Tshilidzi

    2007-01-01

    The ability of a classifier to take on new information and classes by evolving the classifier without it having to be fully retrained is known as incremental learning. Incremental learning has been successfully applied to many classification problems, where the data is changing and is not all available at once. In this paper there is a comparison between Learn++, which is one of the most recent incremental learning algorithms, and the new proposed method of Incremental Learning Using Genetic ...

  8. Binary Classifier Calibration: Non-parametric approach

    OpenAIRE

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.; Hauskrecht, Milos

    2014-01-01

    Accurate calibration of learned probabilistic predictive models is critical for many practical prediction and decision-making tasks. There are two main categories of methods for building calibrated classifiers. One approach is to develop methods for learning probabilistic models that are well calibrated ab initio. The other approach is to use post-processing methods that transform the output of a classifier so that it is well calibrated, for example histogram binning, Platt scaling, and is...
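
    The post-processing route mentioned above can be sketched with histogram binning: raw scores from any classifier are mapped to the empirical positive rate of their bin on a held-out calibration set. The data, the bin count and the underlying logistic regression below are illustrative choices, not the paper's method.

```python
# Hedged sketch: histogram-binning calibration of a classifier's raw scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_cal)[:, 1]

bins = np.linspace(0.0, 1.0, 11)                     # 10 equal-width bins
bin_idx = np.clip(np.digitize(scores, bins) - 1, 0, 9)
# Calibrated probability of each bin = empirical positive rate on the calibration set.
bin_prob = np.array([y_cal[bin_idx == b].mean() if np.any(bin_idx == b) else np.nan
                     for b in range(10)])

def calibrate(raw_scores):
    idx = np.clip(np.digitize(raw_scores, bins) - 1, 0, 9)
    return bin_prob[idx]

print(calibrate(np.array([0.05, 0.5, 0.95])))
```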

  9. What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

    CERN Document Server

    Hu, Bao-Gang

    2011-01-01

    In this study, both Bayesian classifiers and mutual information classifiers are examined for binary classifications with or without a reject option. The general decision rules in terms of distinctions on error types and reject types are derived for Bayesian classifiers. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of "non-consistency" for interpreting cost terms. If no data is given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the theoretical differences, including the extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-info...

  10. COMBINING CLASSIFIERS FOR CREDIT RISK PREDICTION

    Institute of Scientific and Technical Information of China (English)

    Bhekisipho TWALA

    2009-01-01

    Credit risk prediction models seek to predict quality factors such as whether an individual will default (bad applicant) on a loan or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensembles of individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predicted behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy could be improved by using pairs of classifier ensembles. Benchmarking results on five credit datasets and comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy.
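
    The "vote for the most popular class" idea can be sketched with a plain majority-vote ensemble of heterogeneous classifiers; the member models, the synthetic data and the injected label noise below are assumptions for illustration, not the paper's five classifiers or credit datasets.

```python
# Hedged sketch: majority-vote ensemble vs. its individual members on noisy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic data with some label noise as a stand-in for a noisy credit dataset.
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.1, random_state=0)

members = [("lr", LogisticRegression(max_iter=1000)),
           ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
           ("nb", GaussianNB())]
ensemble = VotingClassifier(estimators=members, voting="hard")

for name, model in members + [("vote", ensemble)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```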

  11. What are the differences between Bayesian classifiers and mutual-information classifiers?

    Science.gov (United States)

    Hu, Bao-Gang

    2014-02-01

    In this paper, both Bayesian and mutual-information classifiers are examined for binary classifications with or without a reject option. The general decision rules are derived for Bayesian classifiers with distinctions on error types and reject types. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of nonconsistency for interpreting cost terms. If no data are given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the differences, including the extremely class-imbalanced cases. Finally, we briefly summarize the Bayesian and mutual-information classifiers in terms of their application advantages and disadvantages, respectively. PMID:24807026
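
    The reject option discussed in both versions of this work can be illustrated with a Chow-style rule: abstain whenever the largest posterior falls below a threshold tied to the reject cost. The threshold and posteriors below are arbitrary illustrative values, not the paper's derivation.

```python
# Hedged sketch: classify-with-reject decision rule on given posteriors.
import numpy as np

def decide(posteriors, reject_threshold=0.7):
    # Predict the most probable class, or abstain if the classifier is not confident.
    k = int(np.argmax(posteriors))
    return k if posteriors[k] >= reject_threshold else "reject"

print(decide(np.array([0.55, 0.45])))   # ambiguous -> 'reject'
print(decide(np.array([0.92, 0.08])))   # confident -> class 0
```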

  12. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon

    2012-01-01

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...

  13. Adapt Bagging to Nearest Neighbor Classifiers

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou; Yang Yu

    2005-01-01

    It is well-known that in order to build a strong ensemble, the component learners should be with high diversity as well as high accuracy. If perturbing the training set can cause significant changes in the component learners constructed, then Bagging can effectively improve accuracy. However, for stable learners such as nearest neighbor classifiers, perturbing the training set can hardly produce diverse component learners, therefore Bagging does not work well. This paper adapts Bagging to nearest neighbor classifiers through injecting randomness to distance metrics. In constructing the component learners, both the training set and the distance metric employed for identifying the neighbors are perturbed. A large scale empirical study reported in this paper shows that the proposed BagInRand algorithm can effectively improve the accuracy of nearest neighbor classifiers.
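
    A compact sketch of the idea behind BagInRand is given below: each bagged k-NN component sees a bootstrap sample and a randomly perturbed weighted-Euclidean metric (implemented here by rescaling the features), so the components differ even though k-NN is a stable learner. The ensemble size, weight range and dataset are illustrative assumptions, not the authors' experimental setup.

```python
# Hedged sketch: bagging k-NN with randomized distance metrics (BagInRand-style).
from collections import Counter
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

components = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))          # bootstrap sample
    w = rng.uniform(0.2, 1.8, X_tr.shape[1])             # random per-feature weights
    scale = np.sqrt(w)                                    # scaling == weighted Euclidean metric
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr[idx] * scale, y_tr[idx])
    components.append((knn, scale))

def predict(x_batch):
    votes = np.array([knn.predict(x_batch * s) for knn, s in components])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

print("ensemble accuracy:", np.mean(predict(X_te) == y_te))
```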

  14. Evolving Classifiers: Methods for Incremental Learning

    CERN Document Server

    Hulley, Greg

    2007-01-01

    The ability of a classifier to take on new information and classes by evolving the classifier without it having to be fully retrained is known as incremental learning. Incremental learning has been successfully applied to many classification problems, where the data is changing and is not all available at once. In this paper there is a comparison between Learn++, which is one of the most recent incremental learning algorithms, and the new proposed method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has shown good incremental learning capabilities on benchmark datasets on which the new ILUGA method has been tested. ILUGA has also shown good incremental learning ability using only a few classifiers and does not suffer from catastrophic forgetting. The results obtained for ILUGA on the Optical Character Recognition (OCR) and Wine datasets are good, with an overall accuracy of 93% and 94% respectively showing a 4% improvement over Learn++.MT for the difficult multi-class OCR dataset.

  15. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads;

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization...
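
    One plausible way to write such an outlier-augmented likelihood (assuming the outlier label is uniform over the c classes; the paper's exact parameterization may differ) is

$$
\tilde{p}(y = k \mid \mathbf{x}) = (1-\varepsilon)\, p(y = k \mid \mathbf{x}) + \frac{\varepsilon}{c},
\qquad
E(\mathbf{w}, \varepsilon) = -\sum_{n=1}^{N} \log \tilde{p}(y_n \mid \mathbf{x}_n; \mathbf{w}) + \alpha\, \mathbf{w}^{\top}\mathbf{w},
$$

    where ε is the outlier probability and α the regularization (prior) weight, both tuned together with the network weights w.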

  16. Enamel Surface Evaluation after Removal of Orthodontic Composite Remnants by Intraoral Sandblasting Technique and Carbide Bur Technique: A Three-Dimensional Surface Profilometry and Scanning Electron Microscopic Study

    OpenAIRE

    Mhatre, Amol C; Tandur, Arundhati P; Reddy, Sumitra S; Karunakara, B C; Baswaraj, H

    2015-01-01

    Background: The purpose of this thesis is to present a practical and efficient clinical method of returning enamel to as near its original condition as possible following removal of bonded orthodontic attachments. The main objective of this study is to evaluate and compare the iatrogenic enamel damage caused by use of two different remnant removal techniques – sandblasting technique and carbide bur technique. Materials and Methods: 40 extracted premolar teeth were selected as sample. Premolar...

  17. Scanning electron microscopy (SEM) evaluation of sealing ability of MTA and EndoSequence as root-end filling materials with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents

    OpenAIRE

    Bolla Nagesh; Eppala Jeevani; Varri Sujana; Bharagavi Damaraju; Kaluvakolanu Sreeha; Penumaka Ramesh

    2016-01-01

    Aim: The purpose of this study was to evaluate the sealing ability of mineral trioxide aggregate (MTA) and EndoSequence with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents using scanning electron microscopy (SEM). Materials and Methods: Forty human single rooted teeth were taken. Crowns were decoronated and canals were obturated. Apically roots were resected and retrograde cavities were done. Based on the type of retrograde material placed and the typ...

  18. CLASSIFYING NODULAR LESIONS OF ORAL CAVITY

    OpenAIRE

    Sumit Bhateja

    2013-01-01

    Diagnosis of many lesions of the oral cavity is challenging to most clinicians because of their uncommon prevalence. A number of cystic, osteodystrophic, microbial, tumor and tumor-like lesions of the oral cavity present with a characteristic exophytic/raised surface, which makes their diagnosis and study simpler. The present article attempts to classify the common nodular lesions of the oral cavity.

  19. The classifying topos of a topological bicategory

    CERN Document Server

    Bakovic, Igor

    2009-01-01

    For any topological bicategory 2C, the Duskin nerve N2C of 2C is a simplicial space. We introduce the classifying topos B2C of 2C as the Deligne topos of sheaves Sh(N2C) on the simplicial space N2C. It is shown that the category of topos morphisms from the topos of sheaves Sh(X) on a topological space X to the Deligne classifying topos Sh(N2C) is naturally equivalent to the category of principal 2C-bundles. As a simple consequence, the geometric realization of the nerve N2C of a locally contractible topological bicategory 2C is the classifying space of principal 2C-bundles (on CW complexes), giving a variant of the result of Baas, Bokstedt and Kro derived in the context of bicategorical K-theory. We also define classifying topoi of a topological bicategory 2C using sheaves on other types of nerves of a bicategory given by Lack and Paoli, Simpson and Tamsamani by means of bisimplicial spaces, and we examine their properties.

  20. Automated mobility-classified-aerosol detector

    OpenAIRE

    Russell, Lynn M.; Flagan, Richard C.; Zhang, Shou-Hua

    2001-01-01

    An aerosol detection system for measuring particle number distribution with respect to particle dimension in an aerosol sample. The system includes an alternating dual-bag sampler, a radially classified differential mobility analyzer, and a condensation nucleus counter. Pressure variations in sampling are compensated by feedback control of volumetric flow rates using a plurality of flow control elements.

  1. Adaptively robust filtering with classified adaptive factors

    Institute of Scientific and Technical Information of China (English)

    CUI Xianqiang; YANG Yuanxi

    2006-01-01

    The key problems in applying the adaptively robust filtering to navigation are to establish an equivalent weight matrix for the measurements and a suitable adaptive factor for balancing the contributions of the measurements and the predicted state information to the state parameter estimates. In this paper, an adaptively robust filtering with classified adaptive factors was proposed, based on the principles of the adaptively robust filtering and bi-factor robust estimation for correlated observations. According to the constant velocity model of Kalman filtering, the state parameter vector was divided into two groups, namely position and velocity. The estimator of the adaptively robust filtering with classified adaptive factors was derived, and the calculation expressions of the classified adaptive factors were presented. Test results show that the adaptively robust filtering with classified adaptive factors is not only robust in controlling measurement outliers and kinematic state disturbances but also reasonable in balancing the contributions of the predicted position and velocity, respectively, and its filtering accuracy is superior to that of the adaptively robust filter with a single adaptive factor based on the discrepancy of the predicted position or the predicted velocity.
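
    The sketch below illustrates the classified-adaptive-factor idea on a one-dimensional constant-velocity Kalman filter: separate factors for the position and velocity blocks inflate the predicted covariance when the prediction disagrees with the measurements, so an outlying measurement is down-weighted. The factor shape, the thresholds, the velocity-discrepancy proxy and the bi-factor-style covariance inflation are all illustrative assumptions, not the paper's derivation.

```python
# Hedged sketch: constant-velocity Kalman filter with classified adaptive factors.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity transition (1-D)
H = np.array([[1.0, 0.0]])                # position-only measurement
Q = 0.01 * np.eye(2)                      # process noise (assumed)
R = 1.0                                   # measurement noise variance (assumed)

def adaptive_factor(t, c0=1.5, c1=4.0):
    # Three-segment factor: 1 for small standardized discrepancies, decaying
    # between c0 and c1, near 0 beyond (an IGG-III-style choice, assumed here).
    t = abs(t)
    if t <= c0:
        return 1.0
    if t >= c1:
        return 1e-3
    return (c0 / t) * ((c1 - t) / (c1 - c0)) ** 2

def step(x, P, z, z_prev):
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    innov = z - x_pred[0]
    # Classified discrepancies: one for the position block, one for the velocity
    # block (velocity compared with a finite-difference estimate; an assumption).
    d_pos = innov / np.sqrt(P_pred[0, 0] + R)
    v_meas = (z - z_prev) / dt
    d_vel = (v_meas - x_pred[1]) / np.sqrt(P_pred[1, 1] + 2.0 * R / dt**2)
    a = np.array([1.0 / adaptive_factor(d_pos), 1.0 / adaptive_factor(d_vel)])
    P_infl = P_pred * np.sqrt(np.outer(a, a))   # bi-factor-style covariance inflation
    S = H @ P_infl @ H.T + R
    K = P_infl @ H.T / S
    x_new = x_pred + K[:, 0] * innov
    P_new = (np.eye(2) - K @ H) @ P_infl
    return x_new, P_new

# Toy run: a unit-velocity track with one outlying position measurement at k = 10.
rng = np.random.default_rng(0)
x, P, z_prev = np.array([0.0, 1.0]), np.eye(2), 0.0
for k in range(1, 21):
    z = k * dt + rng.normal(0.0, 1.0) + (8.0 if k == 10 else 0.0)
    x, P = step(x, P, z, z_prev)
    z_prev = z
print("final state estimate (position, velocity):", x)
```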

  2. 75 FR 37253 - Classified National Security Information

    Science.gov (United States)

    2010-06-28

    ... and Records Administration Information Security Oversight Office 32 CFR Parts 2001 and 2003 Classified National Security Information; Final Rule. Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010 / Rules and Regulations. NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Information...

  3. Tattoo removal.

    Science.gov (United States)

    Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe

    2011-01-01

    Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as well as the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The earliest evidence of tattoo removal attempts was found in Egyptian mummies dating from approximately 4,000 BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. Their selective absorption by the pigment, together with their extremely short pulse duration, has made Q-switched lasers the gold standard for tattoo removal. PMID:21865802

  4. Disassembly and Sanitization of Classified Matter

    International Nuclear Information System (INIS)

    The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DOD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components required long-term storage at Sandia and at many other locations throughout the DoE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options exist for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, and auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high quality materials and, once this material has been sanitized, the demand for the component metals for recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team and the DSO is responding to these requests by expanding its scope to include Work-for-Other projects. For example

  5. Classification Studies in an Advanced Air Classifier

    Science.gov (United States)

    Routray, Sunita; Bhima Rao, R.

    2016-01-01

    In the present paper, experiments are carried out using a VSK separator, an advanced air classifier, to recover heavy minerals from beach sand. In the classification experiments the cage wheel speed and the feed rate are set and the material is fed to the air cyclone and split into fine and coarse particles, which are collected in separate bags. The size distribution of each fraction was measured by sieve analysis. A model is developed to predict the performance of the air classifier. The objective of the present model is to predict the grade efficiency curve for a given set of operating parameters such as cage wheel speed and feed rate. The overall experimental data with all variables studied in this investigation are fitted to several models. It is found that the data are fitted well by the logistic model.
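
    Fitting a logistic-style grade efficiency curve of the kind mentioned above can be sketched with a simple least-squares fit; the (size, efficiency) pairs below are made-up values, not the VSK separator measurements, and the two-parameter logistic form is an assumed shape.

```python
# Hedged sketch: least-squares fit of a logistic grade efficiency curve G(d).
import numpy as np
from scipy.optimize import curve_fit

def grade_efficiency(d, d50, k):
    # Logistic model: G -> 0 for fine particles, 1 for coarse, 0.5 at the cut size d50.
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

size_um = np.array([20, 40, 60, 80, 100, 150, 200, 300], dtype=float)   # hypothetical
efficiency = np.array([0.05, 0.12, 0.30, 0.55, 0.72, 0.90, 0.96, 0.99]) # hypothetical

(p_d50, p_k), _ = curve_fit(grade_efficiency, size_um, efficiency, p0=[80.0, 0.05])
print(f"estimated cut size d50 ~ {p_d50:.1f} um, steepness k ~ {p_k:.3f}")
```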

  6. NETWORK FAULT DIAGNOSIS USING DATA MINING CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    Eleni Rozaki

    2015-04-01

    Full Text Available Mobile networks are under more pressure than ever before because of the increasing number of smartphone users and the number of people relying on mobile data networks. With larger numbers of users, the issue of service quality has become more important for network operators. Faults that reduce the quality of service in mobile networks must be identified within minutes so that problems can be addressed and networks returned to optimised performance. In this paper, a method of automated fault diagnosis is presented using decision trees, rules and Bayesian classifiers for visualization of network faults. Using data mining techniques the model classifies optimisation criteria based on the key performance indicator metrics to identify network faults supporting the most efficient optimisation decisions. The goal is to help wireless providers to localize the key performance indicator alarms and determine which Quality of Service factors should be addressed first and at which locations.

  7. Comparing cosmic web classifiers using information theory

    CERN Document Server

    Leclercq, Florent; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  8. Semantic Features for Classifying Referring Search Terms

    Energy Technology Data Exchange (ETDEWEB)

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.

    2012-05-11

    When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods which are susceptible to the obfuscation of dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.

  9. Comparing cosmic web classifiers using information theory

    Science.gov (United States)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  10. Design and evaluation of neural classifiers

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Pedersen, Morten With; Hansen, Lars Kai; Larsen, Jan

    In this paper we propose a method for the design of feedforward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme we derive a modified form of the entropy error measure and an algebraic estimate of the test error. In conjunction with optimal brain damage pruning the test error estimate is used to optimize the network architecture. The scheme is evaluated on an artificial and a real world problem.

  11. Hair removal

    DEFF Research Database (Denmark)

    Haedersdal, Merete; Haak, Christina S

    2011-01-01

    Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair, white skin, and a normal hormonal status. Currently, no method of lifelong permanent hair eradication is available, and it is important that patients have realistic expectations. Substantial evidence has been found for short-term hair removal efficacy of up to 6 months after treatment with the available...

  12. Hair Removal

    DEFF Research Database (Denmark)

    Hædersdal, Merete

    2011-01-01

    Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair, white skin, and a normal hormonal status. Currently, no method of lifelong permanent hair eradication is available, and it is important that patients have realistic expectations. Substantial evidence has been found for short-term hair removal efficacy of up to 6 months after treatment with the available...

  13. Robot Learning Using Learning Classifier Systems Approach

    OpenAIRE

    Jabin, Suraiya

    2010-01-01

    In this chapter, I have presented Learning Classifier Systems, which add to the classical Reinforcement Learning framework the possibility of representing the state as a vector of attributes and finding a compact expression of the representation so induced. Their formalism conveys a nice interaction between learning and evolution, which makes them a class of particularly rich systems, at the intersection of several research domains. As a result, they profit from the accumulated extensions of ...

  14. Classifiers Based on Inverted Distances. Chapter 19

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    Rijeka: InTech, 2011 - (Funatsu, K.; Hasegawa, K.), s. 369-386 ISBN 978-953-307-547-1 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : classification * neighbor distances * correlation dimension * Zipfian distribution Subject RIV: BB - Applied Statistics, Operational Research http://www.intechopen.com/books/new-fundamental-technologies-in-data-mining/classifiers-based-on-inverted-distances

  15. Use Restricted - Classified information sharing, case NESA

    OpenAIRE

    El-Bash, Amira

    2015-01-01

    This thesis was written for the Laurea University of Applied Sciences under the Bachelor's Degree in Security Management. The empirical research of the thesis was supported by the National Emergency Supply Agency as a case study of classified information sharing in the organization. The National Emergency Supply Agency was chosen for the research because of its social significance and distinctively wide field of operation. Being one of the country's administrative actors, its range of tasks in ...

  16. Comparing cosmic web classifiers using information theory

    OpenAIRE

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative perf...

  17. Using Singularity Exponent in Distance based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    Los Alamitos : IEEE, 2010, s. 220-224 ISBN 978-1-4244-8135-4. [ISDA 2010. International Conference on Intelligent Systems Design and Applications /10./. Cairo (EG), 29.11.2010-01.12.2010] R&D Projects: GA MŠk 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : singularity exponent * nearest neighbor * classifier Subject RIV: IN - Informatics, Computer Science

  18. Classifying and evaluating architecture design methods

    OpenAIRE

    Aksit, Mehmet; Tekinerdogan, Bedir

    1999-01-01

    The concept of software architecture has gained a wide popularity and is generally considered to play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. This document first gives a definition of architectures. Second, a meta-model for architecture design methods is presented. This model is used for classifying and evaluating various architecture design approaches. The document concludes with the description of the identi...

  19. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428. ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant ostatní: MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords : neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  20. Nevus Removal

    Science.gov (United States)

    ... can be fundamental to improving a patient’s overall psychosocial state. Other reasons to remove a nevus may ... This is not commonly done and presents many risks and challenges. Can’t they ... on all these same factors again. Different patients are more prone or less ...

  1. Quality of life in urban-classified and rural-classified English local authority areas

    OpenAIRE

    Josep M. Campanera; Paul Higgins

    2011-01-01

    This paper presents the results of an analysis of the Audit Commission’s local quality-of-life indicators dataset to compare reported outcomes amongst 208 urban-classified and 144 rural-classified English local authority areas. We contextualise the demarcation of the urban and rural by reference to the transformational politics of the previous Labour government and its establishment of the sustainable communities initiative, whose controversial ‘place-based’ revitalisation essence continues t...

  2. Evaluation of the electron beam flue gas treatment process to remove SO2 and NOx emission from coal thermal power plants in Turkey

    International Nuclear Information System (INIS)

    In this study, both the current energy consumption and production and the SO2 and NOx emissions in Turkey are analyzed. The electron beam FGT is compared with the preferred limestone/gypsum wet-scrubbing process and evaluated for each power plant. As shown, the investment and operational costs of electron beam FGT are higher than those of the preferred conventional FGD, except for the 1x210 MWe Orhaneli plant. As a result, if investment and operational costs are reduced, the electron beam FGT may in the future be the solution for reducing both SO2 and NOx emissions from small to mid-sized coal thermal power plants.

  3. Efficacy of various root canal irrigants on removal of smear layer in the primary root canals after hand instrumentation: A scanning electron microscopy study

    OpenAIRE

    Hariharan V; Nandlal B; Srilatha K

    2010-01-01

    Aim: The purpose of this in-vitro study is to determine the efficacy of various irrigants in removing the smear layer in primary teeth root canals after hand instrumentation. Materials and Methods: The present study consisted of 30 human primary incisors which were sectioned at the cementoenamel junction horizontally. The specimens were divided randomly into four experimental and one control group having six teeth each and each group was treated with the specific irrigant. 5.25% NaOCl,...

  4. A classifier neural network for rotordynamic systems

    Science.gov (United States)

    Ganesan, R.; Jionghua, Jin; Sankar, T. S.

    1995-07-01

    A feedforward backpropagation neural network is formed to identify the stability characteristic of a high speed rotordynamic system. The principal focus resides in accounting for the instability due to bearing clearance effects. The abnormal operating condition of 'normal-loose' Coulomb rub, which arises in units supported by hydrodynamic bearings or rolling element bearings, is analysed in detail. The multiple-parameter stability problem is formulated and converted to a set of three-parameter algebraic inequality equations. These three parameters map the wider range of physical parameters of commonly-used rotordynamic systems into a narrow closed region, which is used in the supervised learning of the neural network. A binary-type state of the system is expressed through these inequalities that are deduced from the analytical simulation of the rotor system. Both hidden-layer and functional-link networks are formed, and the superiority of the functional-link network is established. Considering the real time interpretation and control of the rotordynamic system, the network reliability and the learning time are used as the evaluation criteria to assess the superiority of the functional-link network. This functional-link network is further trained using the parameter values of selected rotor systems, and the classifier network is formed. The success rate of stability status identification is obtained to assess the potential of this classifier network. It is shown that the classifier network can also be used, for control purposes, as an 'advisory' system that suggests the optimum way of parameter adjustment.

  5. 10 CFR 110.126 - Protection of classified information.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Protection of classified information. 110.126 Section 110... MATERIAL Special Procedures for Classified Information in Hearings § 110.126 Protection of classified information. Nothing in this subpart shall relieve any person from safeguarding classified information...

  6. 46 CFR 503.59 - Safeguarding classified information.

    Science.gov (United States)

    2010-10-01

    ... maintain: (1) A classified document register or log containing a listing of all classified holdings, and (2) A classified document destruction register or log containing the title and date of all classified... documents. (m) Combinations to dial-type locks shall be changed only by persons having an...

  7. Support Vector classifiers for Land Cover Classification

    CERN Document Server

    Pal, Mahesh

    2008-01-01

    Support vector machines represent a promising development in machine learning research that is not widely used within the remote sensing community. This paper reports results on Multispectral (Landsat-7 ETM+) and Hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.
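
    A minimal sketch of the comparison described above is given below, using synthetic pixel feature vectors as a stand-in for multispectral bands; the band count, class count and model hyperparameters are assumptions for illustration, not the study's configuration.

```python
# Hedged sketch: multi-class SVM vs. a small neural network on synthetic "pixel" data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Six synthetic "bands", four land cover classes.
X, y = make_classification(n_samples=1500, n_features=6, n_informative=6,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale")).fit(X_tr, y_tr)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                  random_state=0)).fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))
print("MLP accuracy:", net.score(X_te, y_te))
```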

  8. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  9. Learnability of min-max pattern classifiers

    Science.gov (United States)

    Yang, Ping-Fai; Maragos, Petros

    1991-11-01

    This paper introduces the class of thresholded min-max functions and studies their learning under the probably approximately correct (PAC) model introduced by Valiant. These functions can be used as pattern classifiers of both real-valued and binary-valued feature vectors. They are a lattice-theoretic generalization of Boolean functions and are also related to three-layer perceptrons and morphological signal operators. Several subclasses of the thresholded min-max functions are shown to be learnable under the PAC model.
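
    A thresholded min-max function in its max-of-mins form (the dual min-of-maxes is analogous) can be sketched in a few lines; the term structure and threshold below are arbitrary illustrative choices, not taken from the paper.

```python
# Hedged sketch: a thresholded max-of-mins classifier, a lattice analogue of a Boolean DNF.
import numpy as np

terms = [(0, 2), (1, 3)]      # feature indices taking part in each min-term (assumed)
threshold = 0.5               # decision threshold (assumed)

def min_max_classify(x, terms, threshold):
    value = max(min(x[i] for i in term) for term in terms)   # max of mins
    return int(value >= threshold)

x = np.array([0.9, 0.1, 0.7, 0.8])
print(min_max_classify(x, terms, threshold))   # term (0, 2) gives min 0.7 -> output 1
```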

  10. Classifying LEP Data with Support Vector Algorithms

    CERN Document Server

    Vannerem, P; Schölkopf, B; Smola, A J; Söldner-Rembold, S

    1999-01-01

    We have studied the application of different classification algorithms in the analysis of simulated high energy physics data. Whereas Neural Network algorithms have become a standard tool for data analysis, the performance of other classifiers such as Support Vector Machines has not yet been tested in this environment. We chose two different problems to compare the performance of a Support Vector Machine and a Neural Net trained with back-propagation: tagging events of the type e+e- -> ccbar and the identification of muons produced in multihadronic e+e- annihilation events.

  11. Comparison of Current Frame-Based Phoneme Classifiers

    Directory of Open Access Journals (Sweden)

    Vaclav Pfeifer

    2011-01-01

    Full Text Available This paper compares today’s most common frame-based classifiers. These classifiers can be divided into two main groups – generative classifiers, which build the most probable model from the training data (for example, GMM), and discriminative classifiers, which focus on creating a decision hyperplane. A lot of research has already been done on GMM classifiers, so this paper is mainly focused on discriminative frame-based classifiers. Two discriminative classifiers are presented; they implement a hierarchical tree structure over the input phoneme groups, which has been shown to be effective. Based on these classifiers, two efficient training algorithms are presented. We demonstrate the advantages of our training algorithms by evaluating all classifiers on the TIMIT speech corpus.

  12. Removal of

    OpenAIRE

    Roohan Rakhshaee; Zahra Zamiraee; Somaieh Baghipour; Mohammad Panahandeh

    2013-01-01

    Background and Objectives: Azolla Filiculoides as a non-living fern was used in a batch system to remove "Basic Blue 3", which is a cationic dye and a carcinogenic agent.Materials and Methods: We used a batch system by applying certain concentrations of dye contaminant and in the presence of a certain amount of adsorbent under optimum conditions. The main groups presenting in the Azolla cell wall were evaluated by acidification and alkalization of Azolla's media and then potentiometric titrat...

  13. Removal of

    Directory of Open Access Journals (Sweden)

    Roohan Rakhshaee

    2013-02-01

    Full Text Available Background and Objectives: Azolla Filiculoides as a non-living fern was used in a batch system to remove "Basic Blue 3", which is a cationic dye and a carcinogenic agent. Materials and Methods: We used a batch system by applying certain concentrations of dye contaminant in the presence of a certain amount of adsorbent under optimum conditions. The main groups present in the Azolla cell wall were evaluated by acidification and alkalization of Azolla's media and then potentiometric titration with standard basic and acidic solutions. Results: It was observed that the removal efficiency of dye using non-living Azolla, in accordance with the Langmuir isotherm, was 82% for an initial dye concentration of 200 mg/lit under reaction conditions consisting of contact time 6 h, pH 6, temperature 25 ˚C, and dose 5 g/lit. Qmax (maximum uptake capacity) by the activated Azolla at the three temperatures 5, 25 and 50 ˚C was 0.732, 0.934, and 1.176 mmol/g respectively. ΔG (Gibbs free energy change) was obtained for these temperatures as -0.457, -0.762, and -1.185 kJ/mol respectively. Conclusion: Removal of Basic Blue 3 using Azolla is an economical and effective method.
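
    The abstract does not give its working equations, but analyses of this kind typically rest on the Langmuir isotherm and the standard Gibbs relation, reproduced here for reference only:

    ```latex
    % Textbook forms assumed here; the paper's exact expressions are not quoted.
    q_e = \frac{Q_{\max}\, b\, C_e}{1 + b\, C_e},
    \qquad
    \Delta G = -RT \ln b
    ```

    Here q_e is the equilibrium uptake, C_e the equilibrium dye concentration, b the Langmuir constant, R the gas constant and T the absolute temperature; fitting Q_max and b at each temperature is what yields uptake capacities and ΔG values of the kind reported above.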

  14. Cross-classified occupational exposure data.

    Science.gov (United States)

    Jones, Rachael M; Burstyn, Igor

    2016-09-01

    We demonstrate the regression analysis of exposure determinants using cross-classified random effects in the context of lead exposures resulting from blasting surfaces in advance of painting. We had three specific objectives for analysis of the lead data, and observed: (1) high within-worker variability in personal lead exposures, explaining 79% of variability; (2) that the lead concentration outside of half-mask respirators was 2.4-fold higher than inside supplied-air blasting helmets, suggesting that the exposure reduction by blasting helmets may be lower than expected by the Assigned Protection Factor; and (3) that lead concentrations at fixed area locations in containment were not associated with personal lead exposures. In addition, we found that, on average, lead exposures among workers performing blasting and other activities was 40% lower than among workers performing only blasting. In the process of obtaining these analyses objectives, we determined that the data were non-hierarchical: repeated exposure measurements were collected for a worker while the worker was a member of several groups, or cross-classified among groups. Since the worker is a member of multiple groups, the exposure data do not adhere to the traditionally assumed hierarchical structure. Forcing a hierarchical structure on these data led to similar within-group and between-group variability, but decreased precision in the estimate of effect of work activity on lead exposure. We hope hygienists and exposure assessors will consider non-hierarchical models in the design and analysis of exposure assessments. PMID:27029937

  15. A systematic comparison of supervised classifiers.

    Directory of Open Access Journals (Sweden)

    Diego Raphael Amancio

    Full Text Available Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without in-depth knowledge of the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently achieved the best accuracy. In certain conditions, it was possible to improve the quality of the SVM by more than 20% with respect to its default parameter configuration.
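
    The study itself ran nine classifiers in Weka; the sketch below reproduces the same experimental pattern with scikit-learn as an analogue of the default-versus-tuned comparison (the dataset, classifier list and tuned values are placeholders, not the paper's setup).

    ```python
    # Compare several classifiers with default parameters, plus one tuned SVM,
    # using 10-fold cross-validation on a small benchmark dataset.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    classifiers = {
        "kNN": KNeighborsClassifier(),
        "SVM (default)": SVC(),
        "SVM (tuned)": SVC(C=100, gamma=0.1),        # illustrates the gain from tuning
        "Naive Bayes": GaussianNB(),
        "Decision tree": DecisionTreeClassifier(random_state=0),
        "Random forest": RandomForestClassifier(random_state=0),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name:15s} mean accuracy = {scores.mean():.3f}")
    ```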

  16. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
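
    A sketch of the kind of nucleotide statistics UFM scores — stop-codon frequency and per-position purine content of an ORF read in a fixed frame. The scoring thresholds and the full UFM score are not reproduced, and the sequence is a made-up example.

    ```python
    # Per-frame codon statistics of an ORF: stop-codon frequency and the purine
    # (A/G) fraction at each of the three codon positions.
    STOP = {"TAA", "TAG", "TGA"}
    PURINES = {"A", "G"}

    def codon_stats(seq):
        seq = seq.upper()
        codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
        n = len(codons)
        stop_freq = sum(c in STOP for c in codons) / n
        purine = [sum(c[pos] in PURINES for c in codons) / n for pos in range(3)]
        return stop_freq, purine

    seq = "ATGGCTAAAGGTGAAGATCTGGTTAAAGCGATGGGTCGTAAATAA"   # toy 15-codon ORF
    stop_freq, purine = codon_stats(seq)
    print(f"stop codon frequency: {stop_freq:.3f}")
    print("purine fraction by codon position:", [round(p, 3) for p in purine])
    ```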

  17. Comparative evaluation of 15% ethylenediamine tetra-acetic acid plus cetavlon and 5% chlorine dioxide in removal of smear layer: A scanning electron microscope study

    OpenAIRE

    Sandeep Singh; Vimal Arora; Inderpal Majithia; Rakesh Kumar Dhiman; Dinesh Kumar; Amber Ather

    2013-01-01

    Aims: The purpose of this study was to compare the efficacy of smear layer removal by 5% chlorine dioxide and 15% Ethylenediamine Tetra-Acetic Acid plus Cetavlon (EDTAC) from the human root canal dentin. Materials and Methods: Fifty single rooted human mandibular anterior teeth were divided into two groups of 20 teeth each and a control group of 10 teeth. The root canals were prepared up to F3 ProTaper and initially irrigated with 2% sodium hypochlorite followed by 1 min irrigation with 15% ED...

  18. Effect of different final irrigating solutions on smear layer removal in apical third of root canal: A scanning electron microscope study

    OpenAIRE

    Sayesh Vemuri; Sreeha Kaluva Kolanu; Sujana Varri; Ravi Kumar Pabbati; Ramesh Penumaka; Nagesh Bolla

    2016-01-01

    Aim: The aim of this in vitro study is to compare the smear layer removal efficacy of different irrigating solutions at the apical third of the root canal. Materials and Methods: Forty human single-rooted mandibular premolar teeth were taken and decoronated to standardize the canal length to 14 mm. They were prepared by ProTaper rotary system to an apical preparation of file size F3. Prepared teeth were randomly divided into four groups (n = 10); saline (Group 1; negative control), ethyle...

  19. Comparison of removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid and citric acid in primary teeth: A scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Rahul J Hegde

    2016-01-01

    Full Text Available Background: Root canal irrigants are considered momentous in their tissue dissolving property, eliminating microorganisms, and removing smear layer. The present study was aimed to compare the removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid (EGTA) and citric acid solutions with saline as a control in primary anterior teeth. Materials and Methods: Thirty primary anterior teeth were chosen for the study. The teeth were distributed into three groups having ten teeth each. Following instrumentation, root canals of the first group were treated with 17% EGTA and the second group with 6% citric acid. Only saline was used as an irrigant for the control group. Then, the teeth were subjected to scanning electron microscopy (SEM) study. The scale given by Rome et al. for the smear layer removal was used in the present study. Results: The pictures from the SEM showed that among the tested irrigants, the 17% EGTA + 5% sodium hypochlorite (NaOCl) group showed the best results when compared to other groups. Conclusion: The results advocate that the sequential irrigation of the pulp canal walls with 17% EGTA followed by 5% NaOCl produced efficacious and smear-free root canal walls.

  20. Comparison of removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid and citric acid in primary teeth: A scanning electron microscopic study

    Science.gov (United States)

    Hegde, Rahul J.; Bapna, Kavita

    2016-01-01

    Background: Root canal irrigants are considered momentous in their tissue dissolving property, eliminating microorganisms, and removing smear layer. The present study was aimed to compare the removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid (EGTA) and citric acid solutions with saline as a control in primary anterior teeth. Materials and Methods: Thirty primary anterior teeth were chosen for the study. The teeth were distributed into three groups having ten teeth each. Following instrumentation, root canals of the first group were treated with 17% EGTA and the second group with 6% citric acid. Only saline was used as an irrigant for the control group. Then, the teeth were subjected to scanning electron microscopy (SEM) study. The scale given by Rome et al. for the smear layer removal was used in the present study. Results: The pictures from the SEM showed that among the tested irrigants, 17% EGTA + 5% sodium hypochlorite (NaOCl) group showed the best results when compared to other groups. Conclusion: The results advocate that the sequential irrigation of the pulp canal walls with 17% EGTA followed by 5% NaOCl produced efficacious and smear-free root canal walls. PMID:27307670

  1. Effectiveness of hydrogen peroxide and electron-beam irradiation treatment for removal and inactivation of viruses in equine-derived xenografts.

    Science.gov (United States)

    Cusinato, Riccardo; Pacenti, Monia; Martello, Thomas; Fattori, Paolo; Morroni, Marco; Palù, Giorgio

    2016-06-01

    Bone grafting is a common procedure for bone reconstruction in dentistry, orthopedics, and neurosurgery. A wide range of grafts are currently used, and xenografts are regarded as an interesting alternative to autogenous bone because all mammals share the same bone mineral component composition and morphology. Antigens must be eliminated from bone grafts derived from animal tissues in order to make them biocompatible. Moreover, the processing method must also safely inactivate and/or remove viruses or other potential infectious agents. This study assessed the efficacy of two steps applied in manufacturing some equine-derived xenografts: hydrogen-peroxide and e-beam sterilization treatments for inactivation and removal of viruses in equine bone granules (cortical and cancellous) and collagen and pericardium membranes. Viruses belonging to three different human viral species (Herpes simplex virus type 1, Coxsackievirus B1, and Influenzavirus type A H1N1) were selected and used to spike semi-processed biomaterials. For each viral species, the tissue culture infective dose (TCID50) on cell lines and the number of genome copies through qPCR were assessed. Both treatments were found to be effective at virus inactivation. Considering the model viruses studied, the application of hydrogen peroxide and e-beam irradiation could also be considered effective for processing bone tissue of human origin. PMID:26969529

  2. A Spiking Neural Learning Classifier System

    CERN Document Server

    Howard, Gerard; Lanzi, Pier-Luca

    2012-01-01

    Learning Classifier Systems (LCS) are population-based reinforcement learners used in a wide variety of applications. This paper presents a LCS where each traditional rule is represented by a spiking neural network, a type of network with dynamic internal state. We employ a constructivist model of growth of both neurons and dendrites that realise flexible learning by evolving structures of sufficient complexity to solve a well-known problem involving continuous, real-valued inputs. Additionally, we extend the system to enable temporal state decomposition. By allowing our LCS to chain together sequences of heterogeneous actions into macro-actions, it is shown to perform optimally in a problem where traditional methods can fail to find a solution in a reasonable amount of time. Our final system is tested on a simulated robotics platform.

  3. Learning Vector Quantization for Classifying Astronomical Objects

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The sizes of astronomical surveys in different wavebands are increasing rapidly. Therefore, automatic classification of objects is becoming ever more important. We explore the performance of learning vector quantization (LVQ) in classifying multi-wavelength data. Our analysis concentrates on separating active sources from non-active ones. Different classes of X-ray emitters populate distinct regions of a multidimensional parameter space. In order to explore the distribution of various objects in a multidimensional parameter space, we positionally cross-correlate the data of quasars, BL Lacs, active galaxies, stars and normal galaxies in the optical, X-ray and infrared bands. We then apply LVQ to classify them with the obtained data. Our results show that LVQ is an effective method for separating AGNs from stars and normal galaxies with multi-wavelength data.
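
    A minimal LVQ1 sketch in the spirit of the method: prototypes are pulled toward samples of their own class and pushed away from others, and new objects take the label of the nearest prototype. The features below are synthetic stand-ins for the multi-wavelength parameters; prototype counts, learning rate and epochs are arbitrary choices.

    ```python
    # LVQ1 from scratch with numpy (illustrative, not the paper's configuration).
    import numpy as np

    def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
        rng = np.random.default_rng(seed)
        protos, proto_labels = [], []
        for c in np.unique(y):
            # initialise prototypes from random samples of each class
            idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
            protos.append(X[idx])
            proto_labels += [c] * prototypes_per_class
        protos = np.vstack(protos).astype(float)
        proto_labels = np.array(proto_labels)

        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                d = np.linalg.norm(protos - X[i], axis=1)
                j = int(d.argmin())                          # best matching prototype
                sign = 1.0 if proto_labels[j] == y[i] else -1.0
                protos[j] += sign * lr * (X[i] - protos[j])  # attract or repel
        return protos, proto_labels

    def predict_lvq(X, protos, proto_labels):
        d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
        return proto_labels[d.argmin(axis=1)]

    # Toy usage with two synthetic "source classes".
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))])
    y = np.array([0] * 100 + [1] * 100)
    protos, labels = train_lvq1(X, y)
    print("training accuracy:", (predict_lvq(X, protos, labels) == y).mean())
    ```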

  4. Automatic Fracture Detection Using Classifiers- A Review

    Directory of Open Access Journals (Sweden)

    S.K.Mahendran

    2011-11-01

    Full Text Available X-ray is one of the oldest and most frequently used imaging modalities; it produces images of any bone in the body, including the hand, wrist, arm, elbow, shoulder, foot, ankle, leg (shin), knee, thigh, hip, pelvis or spine. A typical bone ailment is the fracture, which occurs when bone cannot withstand an outside force such as a direct blow, a twisting injury or a fall. Fractures are cracks in bones and are defined as a medical condition in which there is a break in the continuity of the bone. Detection and correct treatment of fractures are considered important, as a wrong diagnosis often leads to ineffective patient management, increased dissatisfaction and expensive litigation. The main focus of this paper is a review that discusses various classification algorithms that can be used to classify X-ray images as normal or fractured.

  5. Combining Heterogeneous Classifiers for Relational Databases

    CERN Document Server

    Manjunatha, Geetha; Sitaram, Dinkar

    2012-01-01

    Most enterprise data is distributed in multiple relational databases with expert-designed schemas. Using traditional single-table machine learning techniques over such data not only incurs a computational penalty for converting to a 'flat' form (mega-join), but also loses the human-specified semantic information present in the relations. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide and conquer approach. We propose a recursive, prediction aggregation technique over heterogeneous classifiers applied on individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely the TPCH, PKDD and UCI benchmarks, and showed a considerable reduction in classification time without any loss of prediction accuracy.

  6. Making classifying selectors work for foam elimination in the activated-sludge process.

    Science.gov (United States)

    Parker, Denny; Geary, Steve; Jones, Garr; McIntyre, Lori; Oppenheim, Stuart; Pedregon, Vick; Pope, Rod; Richards, Tyler; Voigt, Christine; Volpe, Gary; Willis, John; Witzgall, Robert

    2003-01-01

    Classifying selectors are used to control the population of foam-causing organisms in activated-sludge plants to prevent the development of nuisance foams. The term, classifying selector, refers to the physical mechanism by which these organisms are selected against; foam-causing organisms are enriched into the solids in the foam and their rapid removal controls their population at low levels in the mixed liquor. Foam-causing organisms are wasted "first" rather than accumulating on the surface of tanks and thereby being wasted "last", which is typical of the process. This concept originated in South Africa, where pilot studies showed that placement of a flotation tank for foam removal prior to secondary clarifiers would eliminate foam-causing organisms. It was later simplified in the United States by using the aeration in aeration tanks or aerated channels coupled with simple baffling and adjustable weirs to make continuous separation of nuisance organisms from the mixed liquor. PMID:12683467

  7. Classifying gauge anomalies through SPT orders and classifying anomalies through topological orders

    CERN Document Server

    Wen, Xiao-Gang

    2013-01-01

    In this paper, we systematically study gauge anomalies in bosonic and fermionic weak-coupling gauge theories with gauge group G (which can be continuous or discrete). We argue that, in d space-time dimensions, the gauge anomalies are described by the elements in Free[H^{d+1}(G,R/Z)] \oplus H_\pi^{d+1}(BG,R/Z). The well-known Adler-Bell-Jackiw anomalies are classified by the free part of the group cohomology class H^{d+1}(G,R/Z) of the gauge group G (denoted as Free[H^{d+1}(G,R/Z)]). We refer to other kinds of gauge anomalies beyond the Adler-Bell-Jackiw anomalies as non-ABJ gauge anomalies, which include the Witten SU(2) global gauge anomaly. We introduce a notion of the \pi-cohomology group, H_\pi^{d+1}(BG,R/Z), for the classifying space BG, which is an Abelian group and includes Tor[H^{d+1}(G,R/Z)] and the topological cohomology group H^{d+1}(BG,R/Z) as subgroups. We argue that H_\pi^{d+1}(BG,R/Z) classifies the bosonic non-ABJ gauge anomalies, and partially classifies fermionic non-ABJ anomalies. We also show a very close rel...

  8. 22 CFR 125.3 - Exports of classified technical data and classified defense articles.

    Science.gov (United States)

    2010-04-01

    ... classified defense articles. 125.3 Section 125.3 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC... commodity for shipment. A nontransfer and use certificate (Form DSP-83) executed by the applicant, foreign... reexport after a temporary import will be transferred or disclosed only in accordance with the...

  9. Gene-expression Classifier in Papillary Thyroid Carcinoma: Validation and Application of a Classifier for Prognostication

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    frozen tissue from 38 patients was collected between the years 1986 and 2009. Validation cohort: formalin-fixed paraffin-embedded tissues were collected from 183 consecutively treated patients. RESULTS: A 17-gene classifier was identified based on the expression values in patients with and without...

  10. Classifying gauge anomalies through symmetry-protected trivial orders and classifying gravitational anomalies through topological orders

    Science.gov (United States)

    Wen, Xiao-Gang

    2013-08-01

    In this paper, we systematically study gauge anomalies in bosonic and fermionic weak-coupling gauge theories with gauge group G (which can be continuous or discrete) in d space-time dimensions. We show a very close relation between gauge anomalies for gauge group G and symmetry-protected trivial (SPT) orders (also known as symmetry-protected topological (SPT) orders) with symmetry group G in one-higher dimension. The SPT phases are classified by the group cohomology class H^{d+1}(G,R/Z). Through a more careful consideration, we argue that the gauge anomalies are described by the elements in Free[H^{d+1}(G,R/Z)] ⊕ H_π^{d+1}(BG,R/Z). The well-known Adler-Bell-Jackiw anomalies are classified by the free part of H^{d+1}(G,R/Z) (denoted as Free[H^{d+1}(G,R/Z)]). We refer to other kinds of gauge anomalies beyond Adler-Bell-Jackiw anomalies as non-ABJ gauge anomalies, which include Witten SU(2) global gauge anomalies. We introduce a notion of the π-cohomology group, H_π^{d+1}(BG,R/Z), for the classifying space BG, which is an Abelian group and includes Tor[H^{d+1}(G,R/Z)] and the topological cohomology group H^{d+1}(BG,R/Z) as subgroups. We argue that H_π^{d+1}(BG,R/Z) classifies the bosonic non-ABJ gauge anomalies and partially classifies fermionic non-ABJ anomalies. Using the same approach that shows gauge anomalies to be connected to SPT phases, we can also show that gravitational anomalies are connected to topological orders (i.e., patterns of long-range entanglement) in one-higher dimension.

  11. Comparative study of smear layer removal by different etching modalities and Er:YAG laser irradiation on the root surface: a scanning electron microscopy study

    International Nuclear Information System (INIS)

    The aim of this study was to compare, by SEM, the effects of citric acid, EDTA, citric acid with tetracycline, and Er:YAG laser irradiation on smear layer removal from the root surface after scaling with manual instruments. Thirty specimens (n=30) of root surface before scaling were divided into 6 groups (n=5). The Control Group (G1) was not treated; Group 2 (G2) was conditioned with citric acid gel 24%, pH 1, for 2 minutes; Group 3 (G3) was conditioned with EDTA gel 24%, pH 7, for 2 minutes; Group 4 (G4) was conditioned with citric acid and tetracycline gel 50%, pH 1, for 2 minutes; Group 5 (G5) was irradiated with an Er:YAG laser (2.94 μm), 47 mJ/10 Hz, focused, under water spray for 15 seconds at a fluence of 0.58 J/cm2; Group 6 (G6) was irradiated with an Er:YAG laser (2.94 μm), 83 mJ/10 Hz, focused, under water spray for 15 seconds at a fluence of 1.03 J/cm2. The micrographs were analyzed by scores, followed by statistical analysis with the Kruskal-Wallis test (p<0.05), H=20.31. G1 was significantly different from all other groups (28.0); G2 (13.4), G3 (11.7), and G4 (13.6) showed no difference in relation to G5 (20.3) and G6 (6.0), but G6 was significantly different from G5. From the results, it can be concluded that: 1) there was an intense smear layer after scaling and root planing; 2) all treatments were effective for smear layer removal, with significant differences for G2, G3, G4, G5 and G6; G2, G3 and G4 were not statistically different from G5 and G6; 3) G6 was more effective in smear layer removal than G5, and both presented irregular root surfaces. (author)

  12. MISR Level 2 TOA/Cloud Classifier parameters V003

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Regional Cloud Classifiers, Cloud Shadow Mask, and...

  13. Optimized Radial Basis Function Classifier for Multi Modal Biometrics

    Directory of Open Access Journals (Sweden)

    Anand Viswanathan

    2014-07-01

    Full Text Available Biometric systems can be used for the identification or verification of humans based on their physiological or behavioral features. In these systems, biometric characteristics such as fingerprints, palm-print, iris or speech are recorded and compared with stored samples for identification or verification. Multimodal biometrics is more accurate and more resistant to spoof attacks than single-modal biometric systems. In this study, a multimodal biometric system using fingerprint images and finger-vein patterns is proposed, and an optimized Radial Basis Function (RBF) kernel classifier is proposed to identify the authorized users. The features extracted from these modalities are selected by PCA and kernel PCA and combined for classification by the RBF classifier. The parameters of the RBF classifier are optimized using the BAT algorithm with local search. The performance of the proposed classifier is compared with the KNN classifier, the Naïve Bayesian classifier and the non-optimized RBF classifier.
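
    A sketch of the final classification stage under stated assumptions: an RBF-kernel SVM whose C and gamma are tuned by a search over cross-validated accuracy, with a plain random search standing in for the BAT algorithm with local search, and synthetic features standing in for the fused fingerprint and finger-vein features.

    ```python
    # Random search over RBF parameters (stand-in for the BAT-based optimization).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # synthetic stand-in for fused fingerprint + finger-vein features after (kernel) PCA
    X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                               random_state=0)

    rng = np.random.default_rng(0)
    best_params, best_score = None, -np.inf
    for _ in range(30):                         # 30 random candidates; BAT would steer these
        C = 10 ** rng.uniform(-2, 3)            # log-uniform search ranges (assumed)
        gamma = 10 ** rng.uniform(-4, 1)
        score = cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma), X, y, cv=5).mean()
        if score > best_score:
            best_params, best_score = (C, gamma), score
    print("best (C, gamma):", best_params, "cv accuracy:", round(best_score, 3))
    ```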

  14. Arrhythmia management after device removal.

    Science.gov (United States)

    Nishii, Nobuhiro

    2016-08-01

    Arrhythmic management is needed after removal of cardiac implantable electronic devices (CIEDs). Patients completely dependent on CIEDs need temporary device back-up until new CIEDs are implanted. Various methods are available for device back-up, and the appropriate management varies among patients. The duration from CIED removal to implantation of a new CIED also differs among patients. Temporary pacing is needed for patients with bradycardia, a wearable cardioverter defibrillator (WCD) or catheter ablation is needed for patients with tachyarrhythmia, and sequential pacing is needed for patients dependent on cardiac resynchronization therapy. The present review focuses on arrhythmic management after CIED removal. PMID:27588151

  15. Classifying Unidentified Gamma-ray Sources

    CERN Document Server

    Salvetti, David

    2016-01-01

    During its first 2 years of mission, the Fermi-LAT instrument discovered more than 1,800 gamma-ray sources in the 100 MeV to 100 GeV range. Despite the application of advanced techniques to identify and associate the Fermi-LAT sources with counterparts at other wavelengths, about 40% of the LAT sources have no clear identification and remain "unassociated". The purpose of my Ph.D. work has been to pursue a statistical approach to identify the nature of each Fermi-LAT unassociated source. To this end, we implemented advanced machine learning techniques, such as logistic regression and artificial neural networks, to classify these sources on the basis of all the available gamma-ray information about location, energy spectrum and time variability. These analyses have been used for selecting targets for AGN and pulsar searches and for planning multi-wavelength follow-up observations. In particular, we have focused our attention on the search for possible radio-quiet millisecond pulsar (MSP) candidates in the sample of...
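
    A sketch of the classification step with hypothetical features and toy values (spectral index, spectral curvature, variability index); the actual analysis used the catalogued location, spectral and variability information for the full source sample.

    ```python
    # Logistic regression trained on associated sources, applied to unassociated ones.
    # Feature columns and numbers are illustrative placeholders only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # columns: spectral index, curvature significance, variability index (hypothetical)
    X_assoc = np.array([[2.4, 1.0, 80.0], [2.2, 0.8, 150.0], [1.4, 6.5, 20.0],
                        [1.6, 5.9, 15.0], [2.6, 1.2, 200.0], [1.5, 7.1, 18.0]])
    y_assoc = np.array([0, 0, 1, 1, 0, 1])      # 0 = AGN-like, 1 = pulsar-like

    clf = make_pipeline(StandardScaler(), LogisticRegression())
    clf.fit(X_assoc, y_assoc)

    X_unassoc = np.array([[1.5, 6.0, 25.0], [2.3, 0.9, 120.0]])   # unassociated sources
    print("P(pulsar-like):", clf.predict_proba(X_unassoc)[:, 1].round(3))
    ```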

  16. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M

    2012-02-01

    Thirty-five percent of all ischaemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on information given in the medical notes. It was tested by applying the clinical information to the TOAST criteria. One hundred and five patients presented with acute stroke between Jan-Jun 2007. Data were collected on 90 patients. The male to female ratio was 39:51, with an age range of 47-93 years. Sixty (67%) patients had a total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients, and treatment was not tailored to the mechanism of stroke in some patients.

  17. Stress fracture development classified by bone scintigraphy

    International Nuclear Information System (INIS)

    There is no consensus on classifying stress fractures (SF) appearing on bone scans. The authors present a system of classification based on grading the severity and development of bone lesions by visual inspection, according to three main scintigraphic criteria: focality and size, intensity of uptake compared to adjacent bone, and local medullary extension. Four grades of development (I-IV) were ranked, ranging from ill-defined, slightly increased cortical uptake to well-defined regions with markedly increased uptake extending transversely bicortically. 310 male subjects aged 19-2, who had been suffering for several weeks from leg pains occurring during intensive physical training, underwent bone scans of the pelvis and lower extremities using Tc-99m-MDP. 76% of the scans were positive with 354 lesions, of which 88% were in the mild (I-II) grades and 12% in the moderate (III) and severe (IV) grades. Post-treatment scans were obtained in 65 cases having 78 lesions during 1- to 6-month intervals. Complete resolution was found after 1-2 months in 36% of the mild lesions but in only 12% of the moderate and severe ones, and after 3-6 months in 55% of the mild lesions and 15% of the severe ones. 75% of the moderate and severe lesions showed residual uptake in various stages throughout the follow-up period. Early recognition and treatment of mild SF lesions in this study prevented protracted disability and progression of the lesions and facilitated complete healing.

  18. Classifying auroras using artificial neural networks

    Science.gov (United States)

    Rydesater, Peter; Brandstrom, Urban; Steen, Ake; Gustavsson, Bjorn

    1999-03-01

    In the Auroral Large Imaging System (ALIS) there is a need for stable methods for the analysis and classification of auroral images and images with, for example, mother-of-pearl clouds. This part of ALIS is called Selective Imaging Techniques (SIT) and is intended to sort out images of scientific interest. It is also used to determine what auroral phenomena appear in the images and where. We briefly discuss the SIT unit's main functionality, but this work concentrates on how to find auroral arcs and how they are placed in images. Special care has been taken to make the algorithm robust, since it is going to be implemented in a SIT unit that will work automatically, often unsupervised, and to some extent controls the data taking of ALIS. The method for finding auroral arcs is based on a local operator that detects intensity differences. This preprocessing gives arc orientation values, which are fed to a neural network classifier. We show some preliminary results and discuss possibilities to use and improve this algorithm in the future SIT unit.

  19. A Neural Network Classifier of Volume Datasets

    CERN Document Server

    Zukić, Dženan; Kolb, Andreas

    2009-01-01

    Many state-of-the-art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies the dataset into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets,...
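
    A sketch of the paper's central idea under simplifying assumptions: summarise each volume by a 2D histogram over intensity and gradient magnitude, flatten it, and feed it to a neural network. The volumes below are synthetic stand-ins; a real system would read CT/MRI scans and use the categories the network was trained with.

    ```python
    # 2D intensity/gradient-magnitude histograms as features for an MLP classifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def histogram_feature(volume, bins=16):
        """2D histogram over voxel intensity and gradient magnitude, flattened."""
        gx, gy, gz = np.gradient(volume.astype(float))
        grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
        hist, _, _ = np.histogram2d(volume.ravel(), grad_mag.ravel(),
                                    bins=bins, density=True)
        return hist.ravel()

    rng = np.random.default_rng(0)
    features, labels = [], []
    for label in (0, 1):                         # two synthetic dataset categories
        for _ in range(20):
            vol = rng.normal(loc=100 + 60 * label, scale=10 + 5 * label, size=(16, 16, 16))
            features.append(histogram_feature(vol))
            labels.append(label)

    X, y = np.vstack(features), np.array(labels)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    print("training accuracy:", clf.fit(X, y).score(X, y))
    ```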

  20. Colorization by classifying the prior knowledge

    Institute of Scientific and Technical Information of China (English)

    DU Weiwei

    2011-01-01

    When the one-dimensional luminance scalar of every pixel in a monochrome image is replaced by a multi-dimensional color vector, the process is called colorization. However, colorization is under-constrained, so prior knowledge must be supplied to the monochrome image. Colorization using an optimization algorithm is an effective approach to this problem, but it cannot handle some images well without repeated experiments to confirm the placement of scribbles. In this paper, a colorization algorithm is proposed that can automatically generate the prior knowledge. The idea is that, first, the prior knowledge is condensed into a set of prior points that are automatically extracted by a downsampling and upsampling method. Then these prior points are classified and given corresponding colors. Lastly, the color image is obtained from the colored prior points. Experiments demonstrate that the proposal can not only effectively generate the prior knowledge but also colorize the monochrome image according to the user's requirements.

  1. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of the palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique was applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  2. A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks

    Directory of Open Access Journals (Sweden)

    Tao Geng

    2008-01-01

    Full Text Available A novel 4-class single-trial brain computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, which is called a “parallel BCI.” Unlike other BCIs where mental tasks are executed and classified in a serial way one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers only classifies the mental tasks executed on one side of the subject body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data was recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can generate a higher accuracy than a conventional 4-class BCI, although both of them have used the same feature selection and classification algorithms.
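
    A sketch of how two binary LDA outputs can index four classes, in the spirit of the parallel-BCI idea; the features and labels are synthetic placeholders for features extracted from left- and right-side EEG channels.

    ```python
    # Two binary LDA classifiers whose joint output encodes one of four classes.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n = 200
    X_left = rng.normal(size=(n, 8))             # features from left-side channels
    X_right = rng.normal(size=(n, 8))            # features from right-side channels
    y_left = (X_left[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    y_right = (X_right[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    y_4class = 2 * y_left + y_right              # the 4-class label is the pair of tasks

    lda_left = LinearDiscriminantAnalysis().fit(X_left, y_left)
    lda_right = LinearDiscriminantAnalysis().fit(X_right, y_right)

    pred = 2 * lda_left.predict(X_left) + lda_right.predict(X_right)
    print("4-class training accuracy:", (pred == y_4class).mean())
    ```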

  3. Classifying rock lithofacies using petrophysical data

    Science.gov (United States)

    Al-Omair, Osamah; Garrouch, Ali A.

    2010-09-01

    This study automates a type-curve technique for estimating the rock pore-geometric factor (λ) from capillary pressure measurements. The pore-geometric factor is determined by matching the actual rock capillary pressure versus wetting-phase saturation (Pc-Sw) profile with that obtained from the Brooks and Corey model (1966 J. Irrigation Drainage Proc. Am. Soc. Civ. Eng. 61-88). The pore-geometric factor values are validated by comparing the actual measured rock permeability to the permeability values estimated using the Wyllie and Gardner model (1958 World Oil (April issue) 210-28). Petrophysical data for both carbonate and sandstone rocks, along with the pore-geometric factor derived from the type-curve matching, are used in a discriminant analysis for the purpose of developing a model for rock typing. The petrophysical parameters include rock porosity (phi), irreducible water saturation (Swi), permeability (k), the threshold capillary-entry-pressure (Pd), a pore-shape factor (β), and a flow-impedance parameter (n) which is a property that reflects the flow impedance caused by the irreducible wetting-phase saturation. The results of the discriminant analysis indicate that five of the parameters (phi, k, Pd, λ and n) are sufficient for classifying rocks according to two broad lithology classes: sandstones and carbonates. The analysis reveals the existence of a significant discriminant function that is mostly sensitive to the pore-geometric factor values (λ). A discriminant-analysis classification model that honours both static and dynamic petrophysical rock properties is, therefore, introduced. When tested on two distinct data sets, the discriminant-analysis model was able to predict the correct lithofacies for approximately 95% of the tested samples. A comprehensive database of the experimentally collected petrophysical properties of 215 carbonate and sandstone rocks is provided with this study.
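
    For reference, the commonly cited Brooks and Corey form on which such type-curve matching rests (the abstract does not spell out its exact equations) is:

    ```latex
    % Brooks-Corey capillary pressure model (standard form assumed here).
    S_e = \frac{S_w - S_{wi}}{1 - S_{wi}},
    \qquad
    P_c = P_d \, S_e^{-1/\lambda}
    ```

    Here P_d is the threshold (entry) capillary pressure, S_wi the irreducible wetting-phase saturation, and λ the pore-geometric factor recovered from the type-curve match.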

  4. Scanning electron microscopy (SEM) evaluation of sealing ability of MTA and EndoSequence as root-end filling materials with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents

    Science.gov (United States)

    Nagesh, Bolla; Jeevani, Eppala; Sujana, Varri; Damaraju, Bharagavi; Sreeha, Kaluvakolanu; Ramesh, Penumaka

    2016-01-01

    Aim: The purpose of this study was to evaluate the sealing ability of mineral trioxide aggregate (MTA) and EndoSequence with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents using scanning electron microscopy (SEM). Materials and Methods: Forty human single rooted teeth were taken. Crowns were decoronated and canals were obturated. Apically, roots were resected and retrograde cavities were done. Based on the type of retrograde material placed and the type of smear layer removal agent used for retrograde cavities, they were divided into four groups (N = 10): Group I chitosan with EndoSequence, group II chitosan with MTA, group III CMC with EndoSequence, and Group IV CMC with MTA. All the samples were longitudinally sectioned, and the SEM analysis was done for marginal adaptation. Statistical Analysis: Kruskal-Wallis and Mann-Whitney tests. Results: SEM images showed the presence of fewer gaps in group III, i.e., CMC with EndoSequence, when compared to the other groups, with a statistically significant difference. Conclusion: Within the limited scope of this study, it was concluded that EndoSequence as a retrograde material showed better marginal sealing ability. PMID:27099420

  5. Scanning electron microscopy (SEM) evaluation of sealing ability of MTA and EndoSequence as root-end filling materials with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents

    Directory of Open Access Journals (Sweden)

    Bolla Nagesh

    2016-01-01

    Full Text Available Aim: The purpose of this study was to evaluate the sealing ability of mineral trioxide aggregate (MTA) and EndoSequence with chitosan and carboxymethyl chitosan (CMC) as retrograde smear layer removing agents using scanning electron microscopy (SEM). Materials and Methods: Forty human single rooted teeth were taken. Crowns were decoronated and canals were obturated. Apically, roots were resected and retrograde cavities were done. Based on the type of retrograde material placed and the type of smear layer removal agent used for retrograde cavities, they were divided into four groups (N = 10): Group I chitosan with EndoSequence, group II chitosan with MTA, group III CMC with EndoSequence, and Group IV CMC with MTA. All the samples were longitudinally sectioned, and the SEM analysis was done for marginal adaptation. Statistical Analysis: Kruskal-Wallis and Mann-Whitney tests. Results: SEM images showed the presence of fewer gaps in group III, i.e., CMC with EndoSequence, when compared to the other groups, with a statistically significant difference. Conclusion: Within the limited scope of this study, it was concluded that EndoSequence as a retrograde material showed better marginal sealing ability.

  6. A new approach to classifier fusion based on upper integral.

    Science.gov (United States)

    Wang, Xi-Zhao; Wang, Ran; Feng, Hui-Min; Wang, Hua-Chao

    2014-05-01

    Fusing a number of classifiers can generally improve the performance of individual classifiers, and the fuzzy integral, which can clearly express the interaction among the individual classifiers, has been acknowledged as an effective tool of fusion. In order to make the best use of the individual classifiers and their combinations, we propose in this paper a new scheme of classifier fusion based on upper integrals, which differs from all the existing models. Instead of being a fusion operator, the upper integral is used to reasonably arrange the finite resources, and thus to maximize the classification efficiency. By solving an optimization problem of upper integrals, we obtain a scheme for assigning proportions of examples to different individual classifiers and their combinations. According to these proportions, new examples could be classified by different individual classifiers and their combinations, and the combination of classifiers that specific examples should be submitted to depends on their performance. The definition of upper integral guarantees such a conclusion that the classification efficiency of the fused classifier is not less than that of any individual classifier theoretically. Furthermore, numerical simulations demonstrate that most existing fusion methodologies, such as bagging and boosting, can be improved by our upper integral model. PMID:23782843

  7. Face Detection Using a First-Order RCE Classifier

    Directory of Open Access Journals (Sweden)

    Byeong Hwan Jeon

    2003-08-01

    Full Text Available We present a new face detection algorithm based on a first-order reduced Coulomb energy (RCE) classifier. The algorithm locates frontal views of human faces at any degree of rotation and scale in complex scenes. The face candidates and their orientations are first determined by computing the Hausdorff distance between simple face abstraction models and binary test windows in an image pyramid. Then, after normalizing the energy, each face candidate is verified by two subsequent classifiers: a binary image classifier and the first-order RCE classifier. While the binary image classifier is employed as a preclassifier to discard nonfaces with minimum computational complexity, the first-order RCE classifier is used as the main face classifier for final verification. An optimal training method to construct the representative face model database is also presented. Experimental results show that the proposed algorithm yields a high detection ratio while yielding no false alarms.

  8. Peat classified as slowly renewable biomass fuel

    International Nuclear Information System (INIS)

    thousands of years. The report also states that peat should be classified as a biomass fuel rather than grouped with biofuels, such as wood, or fossil fuels, such as coal. According to the report, peat is a renewable biomass fuel like biofuels, but due to its slow accumulation it should be considered a slowly renewable fuel. The report estimates that the bonding of carbon in both virgin and forest-drained peatlands is so high that it can compensate for the emissions formed in the combustion of energy peat.

  9. Rule Based Ensembles Using Pair Wise Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    Moslem Mohammadi Jenghara

    2015-03-01

    Full Text Available In value estimation, the average of inexperienced people's estimates is a good approximation to the true value, provided that the answers of these individuals are independent. A classifier ensemble is the implementation of this principle in classification tasks, and it is investigated in two aspects. In the first aspect, the feature space is divided into several local regions and each region is assigned a highly competent classifier; in the second, the base classifiers are applied in parallel and, being comparably experienced, are combined in some way to achieve a group consensus. In this paper a combination of the two methods is used. An important consideration in classifier combination is that much better results can be achieved if diverse classifiers, rather than similar classifiers, are combined. To achieve diversity in the classifier outputs, a symmetric pairwise weighted feature space is used, and the outputs of the classifiers trained over the weighted feature space are combined to infer the final result. MLP classifiers are used as the base classifiers. The experimental results show that the applied method is promising.

  10. Classifying short genomic fragments from novel lineages using composition and homology

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2011-08-01

    Full Text Available Abstract Background The assignment of taxonomic attributions to DNA fragments recovered directly from the environment is a vital step in metagenomic data analysis. Assignments can be made using rank-specific classifiers, which assign reads to taxonomic labels from a predetermined level such as named species or strain, or rank-flexible classifiers, which choose an appropriate taxonomic rank for each sequence in a data set. The choice of rank typically depends on the optimal model for a given sequence and on the breadth of taxonomic groups seen in a set of close-to-optimal models. Homology-based (e.g., LCA) and composition-based (e.g., PhyloPythia, TACOA) rank-flexible classifiers have been proposed, but there is at present no hybrid approach that utilizes both homology and composition. Results We first develop a hybrid, rank-specific classifier based on BLAST and Naïve Bayes (NB) that has comparable accuracy and a faster running time than the current best approach, PhymmBL. By substituting LCA for BLAST or allowing the inclusion of suboptimal NB models, we obtain a rank-flexible classifier. This hybrid classifier outperforms established rank-flexible approaches on simulated metagenomic fragments of length 200 bp to 1000 bp and is able to assign taxonomic attributions to a subset of sequences with few misclassifications. We then demonstrate the performance of different classifiers on an enhanced biological phosphorous removal metagenome, illustrating the advantages of rank-flexible classifiers when representative genomes are absent from the set of reference genomes. Application to a glacier ice metagenome demonstrates that similar taxonomic profiles are obtained across a set of classifiers which are increasingly conservative in their classification. Conclusions Our NB-based classification scheme is faster than the current best composition-based algorithm, Phymm, while providing equally accurate predictions. The rank-flexible variant of NB, which we
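
    A sketch of a composition-based Naïve Bayes step like the NB component described above, using k-mer counts and a multinomial model. The reference sequences, labels and fragment are tiny placeholders, and the BLAST/LCA homology side of the hybrid is not shown.

    ```python
    # Multinomial Naive Bayes over k-mer counts as a composition-based classifier.
    from itertools import product
    from sklearn.naive_bayes import MultinomialNB

    K = 3
    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
    INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

    def kmer_counts(seq):
        """Count overlapping k-mers of a DNA string (ambiguous k-mers are skipped)."""
        counts = [0] * len(KMERS)
        for i in range(len(seq) - K + 1):
            kmer = seq[i:i + K]
            if kmer in INDEX:
                counts[INDEX[kmer]] += 1
        return counts

    # toy "reference genomes" for two taxa (placeholders, not real sequences)
    refs = [("taxonA", "ATGCGCGATATCGCGATATGCGCGCTAGCGA" * 4),
            ("taxonB", "TTTTAAAATTTAAATTTTAAAATTAAATTTA" * 4)]
    X_train = [kmer_counts(seq) for _, seq in refs]
    y_train = [label for label, _ in refs]

    clf = MultinomialNB().fit(X_train, y_train)
    fragment = "GCGATATGCGCGCTAGCGAATGCGCGATATC"          # simulated short read
    print(clf.predict([kmer_counts(fragment)])[0])
    ```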

  11. Image Classifying Registration and Dynamic Region Merging

    Directory of Open Access Journals (Sweden)

    Himadri Nath Moulick

    2013-07-01

    Full Text Available In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images locally modifies the intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weights the two mixture components. The registration problem is formulated both as an energy minimization problem and as a Maximum A Posteriori (MAP) estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated at the same time, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into

  12. Removing Defects From Silicon Ribbon

    Science.gov (United States)

    Shimada, K.

    1982-01-01

    Proposal for removing impurities from silicon ribbon and sheet could be developed into an automated production-line process. New technique which combines ion-cluster bombardment, electron-gun heating, and plasma etching, could be key step in fabricating inexpensive solar-cell arrays. Silicon sheets and ribbons treated this way could have enhanced carrier lifetimes necessary for satisfactory solar-cell performance.

  13. To fuse or not to fuse: Fuser versus best classifier

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.

    1998-04-01

    A sample from a class defined on a finite-dimensional Euclidean space and distributed according to an unknown distribution is given. The authors are given a set of classifiers each of which chooses a hypothesis with least misclassification error from a family of hypotheses. They address the question of choosing the classifier with the best performance guarantee versus combining the classifiers using a fuser. They first describe a fusion method based on isolation property such that the performance guarantee of the fused system is at least as good as the best of the classifiers. For a more restricted case of deterministic classes, they present a method based on error set estimation such that the performance guarantee of fusing all classifiers is at least as good as that of fusing any subset of classifiers.

  14. Evidential multinomial logistic regression for multiclass classifier calibration

    OpenAIRE

    Xu, Philippe; Davoine, Franck; Denoeux, Thierry

    2015-01-01

    The calibration of classifiers is an important task in information fusion. To compare or combine the outputs of several classifiers, they need to be represented in a common space. Probabilistic calibration methods transform the output of a classifier into a posterior probability distribution. In this paper, we introduce an evidential calibration method for multiclass classification problems. Our approach uses an extension of multinomial logistic regression to the theory of belief functions. W...

  15. Taxonomy grounded aggregation of classifiers with different label sets

    OpenAIRE

    SAHA, AMRITA; Indurthi, Sathish; Godbole, Shantanu; Rongali, Subendhu; Raykar, Vikas C.

    2015-01-01

    We describe the problem of aggregating the label predictions of diverse classifiers using a class taxonomy. Such a taxonomy may not have been available or referenced when the individual classifiers were designed and trained, yet mapping the output labels into the taxonomy is desirable to integrate the effort spent in training the constituent classifiers. A hierarchical taxonomy representing some domain knowledge may be different from, but partially mappable to, the label sets of the individua...

  16. Investigating The Fusion of Classifiers Designed Under Different Bayes Errors

    Directory of Open Access Journals (Sweden)

    Fuad M. Alkoot

    2004-12-01

    Full Text Available We investigate a number of parameters commonly affecting the design of a multiple classifier system in order to find when fusing is most beneficial. We extend our previous investigation to the case where unequal classifiers are combined. Results indicate that Sum is not affected by this parameter; however, Vote degrades when a weaker classifier is introduced into the combining system. This is more obvious when estimation error with a uniform distribution exists.

  17. The SVM Classifier Based on the Modified Particle Swarm Optimization

    OpenAIRE

    Liliya Demidova; Evgeny Nikulchev; Yulia Sokolova

    2016-01-01

    The problem of developing an SVM classifier based on modified particle swarm optimization has been considered. This algorithm carries out a simultaneous search for the kernel function type, the values of the kernel function parameters and the value of the regularization parameter of the SVM classifier. Such an SVM classifier provides high-quality data classification. The idea of particle "regeneration" is put at the basis of the modified particle swarm ...

  18. The analysis of cross-classified categorical data

    CERN Document Server

    Fienberg, Stephen E

    2007-01-01

    A variety of biological and social science data come in the form of cross-classified tables of counts, commonly referred to as contingency tables. Until recent years the statistical and computational techniques available for the analysis of cross-classified data were quite limited. This book presents some of the recent work on the statistical analysis of cross-classified data using loglinear models, especially in the multidimensional situation.

  19. Near-Optimal Evasion of Convex-Inducing Classifiers

    CERN Document Server

    Nelson, Blaine; Huang, Ling; Joseph, Anthony D; Lau, Shing-hon; Lee, Steven J; Rao, Satish; Tran, Anthony; Tygar, J D

    2010-01-01

    Classifiers are often used to detect miscreant activities. We study how an adversary can efficiently query a classifier to elicit information that allows the adversary to evade detection at near-minimal cost. We generalize results of Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that construct undetected instances of near-minimal cost using only polynomially many queries in the dimension of the space and without reverse engineering the decision boundary.

  20. 6 CFR 7.23 - Emergency release of classified information.

    Science.gov (United States)

    2010-01-01

    ... Classified Information Non-disclosure Form. In emergency situations requiring immediate verbal release of... information through approved communication channels by the most secure and expeditious method possible, or...

  1. Unsupervised Supervised Learning II: Training Margin Based Classifiers without Labels

    CERN Document Server

    Donmez, Pinar; Lebanon, Guy

    2010-01-01

    Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing a margin-based risk function. Traditionally, these risk functions are computed based on a labeled dataset. We develop a novel technique for estimating such risks using only unlabeled data and p(y). We prove that the technique is consistent for high-dimensional linear classifiers and demonstrate it on synthetic and real-world data. In particular, we show how the estimate is used for evaluating classifiers in transfer learning, and for training classifiers with no labeled data whatsoever.

  2. Construction of unsupervised sentiment classifier on idioms resources

    Institute of Scientific and Technical Information of China (English)

    谢松县; 王挺

    2014-01-01

    Sentiment analysis is the computational study of how opinions, attitudes, emotions, and perspectives are expressed in language, and has become an important task in natural language processing. Sentiment analysis is highly valuable for both research and practical applications. The focus was on the difficulty that constructing sentiment classifiers normally requires a large amount of labeled domain training data, and a novel unsupervised framework was proposed that makes use of Chinese idiom resources to develop a general sentiment classifier. Furthermore, domain adaptation of the general sentiment classifier was improved by taking the general classifier as the base of a self-training procedure to obtain a domain self-training sentiment classifier. To validate the effect of the unsupervised framework, several experiments were carried out on a publicly available Chinese online review dataset. The experiments show that the proposed framework is effective and achieves encouraging results. Specifically, the general classifier outperforms two baselines (a naïve 50% baseline and a cross-domain classifier), and the bootstrapping self-training classifier approaches the upper-bound domain-specific classifier, with a lowest accuracy of 81.5%, while its performance is more stable and the framework needs no labeled training dataset.

  3. Facial expression recognition with facial parts based sparse representation classifier

    Science.gov (United States)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expression is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features lie in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem subject to a linear-combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
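
    The following sketch illustrates the general sparse-representation-classifier idea described above: a test sample is coded as a sparse combination of training samples and assigned to the class with the smallest reconstruction residual. Lasso is used as a practical stand-in for the constrained l1-minimization, and scikit-learn's digits dataset replaces facial-part features; both are assumptions for illustration only.

    ```python
    # Sketch of a sparse representation classifier (SRC): sparse-code the test
    # sample over a dictionary of training samples, then pick the class with the
    # smallest class-wise reconstruction residual.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    D = X_tr.T                                   # dictionary: one training sample per column

    def src_predict(x):
        coder = Lasso(alpha=0.01, max_iter=5000)
        coder.fit(D, x)                          # l1-regularized coding of x over D
        code = coder.coef_
        residuals = []
        for c in np.unique(y_tr):
            mask = (y_tr == c)
            recon = D[:, mask] @ code[mask]      # keep only coefficients of class c
            residuals.append(np.linalg.norm(x - recon))
        return np.unique(y_tr)[int(np.argmin(residuals))]

    preds = np.array([src_predict(x) for x in X_te[:50]])
    print("accuracy on 50 test samples:", (preds == y_te[:50]).mean())
    ```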

  4. Using Classifiers to Identify Binge Drinkers Based on Drinking Motives.

    Science.gov (United States)

    Crutzen, Rik; Giabbanelli, Philippe

    2013-08-21

    A representative sample of 2,844 Dutch adult drinkers completed a questionnaire on drinking motives and drinking behavior in January 2011. Results were classified using regressions, decision trees, and support vector machines (SVMs). Using SVMs, the mean absolute error was minimal, whereas performance on identifying binge drinkers was high. Moreover, when comparing the structure of the classifiers, there were differences in which drinking motives contributed to the performance of the classifiers. Thus, classifiers are worth using in research on (addictive) behaviors, because they contribute to explaining behavior and can give insights different from those of more traditional data-analytic approaches. PMID:23964957

  5. Removing Hair Safely

    Science.gov (United States)

    Consumer update on methods of hair removal. Laser hair removal: in this method, a laser destroys hair ...

  6. Performance of classification confidence measures in dynamic classifier systems

    Czech Academy of Sciences Publication Activity Database

    Štefka, D.; Holeňa, Martin

    2013-01-01

    Roč. 23, č. 4 (2013), s. 299-319. ISSN 1210-0552 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : classifier combining * dynamic classifier systems * classification confidence Subject RIV: IN - Informatics, Computer Science Impact factor: 0.412, year: 2013

  7. 3 CFR - Classified Information and Controlled Unclassified Information

    Science.gov (United States)

    2010-01-01

    ... 3 The President 1 2010-01-01 2010-01-01 false Classified Information and Controlled Unclassified Information Presidential Documents Other Presidential Documents Memorandum of May 27, 2009 Classified... and perceived technological obstacles to moving toward an information sharing culture, continue...

  8. 32 CFR 2400.30 - Reproduction of classified information.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Reproduction of classified information. 2400.30... SECURITY PROGRAM Safeguarding § 2400.30 Reproduction of classified information. Documents or portions of... the originator or higher authority. Any stated prohibition against reproduction shall be...

  9. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen

    2015-01-01

    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  10. 40 CFR 152.175 - Pesticides classified for restricted use.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Pesticides classified for restricted...) PESTICIDE PROGRAMS PESTICIDE REGISTRATION AND CLASSIFICATION PROCEDURES Classification of Pesticides § 152.175 Pesticides classified for restricted use. The following uses of pesticide products containing...

  11. Characteristics of the molar surface after removal of cervical enamel projections: comparison of three different rotating instruments

    Science.gov (United States)

    2016-01-01

    Purpose The aim of this study was to evaluate and compare tooth surface characteristics in extracted human molars after cervical enamel projections (CEPs) were removed with the use of three rotating instruments. Methods We classified 60 molars, extracted due to periodontal lesions and presenting CEPs, into grade I, II, or III according to Masters and Hoskins' criteria. Each group contained 20 specimens. Three rotating instruments were used to remove the CEPs: a piezoelectric ultrasonic scaler, a periodontal bur, and a diamond bur. Tooth surface characteristics before and after removal of the projections were then evaluated with scanning electron microscopy (SEM). We analyzed the characteristics of the tooth surfaces with respect to roughness and whether the enamel projections had been completely removed. Results In SEM images, surfaces treated with the diamond bur were smoothest, but this instrument caused considerable harm to tooth structures near the CEPs. The piezoelectric ultrasonic scaler group produced the roughest surface but caused less harm to the tooth structure near the furcation. In general, the surfaces treated with the periodontal bur were smoother than those treated with the ultrasonic scaler, and the periodontal bur did not invade adjacent tooth structures. Conclusions For removal of grade II CEPs, the most effective instrument was the diamond bur. However, in removing grade III projections, the diamond bur can destroy both adjacent tooth structures and the periodontal apparatus. In such cases, careful use of the periodontal bur may be an appropriate substitute. PMID:27127691

  12. Analysis of Sequence Based Classifier Prediction for HIV Subtypes

    Directory of Open Access Journals (Sweden)

    S. Santhosh Kumar

    2012-10-01

    Full Text Available Human immunodeficiency virus (HIV) is a lentivirus that causes acquired immunodeficiency syndrome (AIDS). The main drawback in the HIV treatment process is subtype prediction. The subtype and group classification of HIV is based on its genetic variability and location. HIV can be divided into two major types, HIV type 1 (HIV-1) and HIV type 2 (HIV-2). Many classifier approaches have been used to classify HIV subtypes based on their group, but in some cases two groups occur in one, and in such cases the classification becomes more complex. The methodology used in this paper is based on HIV sequences. In this work several classifier approaches are used to classify HIV-1 and HIV-2. The work is implemented on a real-time patient database; the patient records are used in the experiments and the best classifier, with a quick response time and the least error rate, is identified.

  13. Algorithm for classifying multiple targets using acoustic signatures

    Science.gov (United States)

    Damarla, Thyagaraju; Pham, Tien; Lake, Douglas

    2004-08-01

    In this paper we discuss an algorithm for classification and identification of multiple targets using acoustic signatures. We use a Multi-Variate Gaussian (MVG) classifier for classifying individual targets based on the relative amplitudes of the extracted harmonic set of frequencies. The classifier is trained on high signal-to-noise ratio data for individual targets. In order to classify and further identify each target in a multi-target environment (e.g., a convoy), we first perform bearing tracking and data association. Once the bearings of the targets present are established, we next beamform in the direction of each individual target to spatially isolate it from the other targets (or interferers). Then, we further process and extract a harmonic feature set from each beamformed output. Finally, we apply the MVG classifier on each harmonic feature set for vehicle classification and identification. We present classification/identification results for convoys of three to five ground vehicles.
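
    A toy version of the per-target classification step, a multivariate Gaussian (MVG) classifier over harmonic-amplitude feature vectors, is sketched below. The bearing tracking and beamforming stages are omitted, and the class names and synthetic harmonic signatures are illustrative assumptions.

    ```python
    # Toy multivariate Gaussian (MVG) classifier over harmonic-amplitude features.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(1)
    n_harmonics, n_train = 8, 200

    # Assumed harmonic signatures for three vehicle classes (illustrative only).
    class_means = {c: rng.uniform(0.1, 1.0, n_harmonics) for c in ("truck", "apc", "tank")}
    train = {c: mu + 0.05 * rng.standard_normal((n_train, n_harmonics))
             for c, mu in class_means.items()}

    # Fit one Gaussian per class from high-SNR training data.
    models = {c: multivariate_normal(mean=Xc.mean(axis=0), cov=np.cov(Xc.T))
              for c, Xc in train.items()}

    def classify(feature_vec):
        # Assign the class whose Gaussian gives the highest log-likelihood.
        return max(models, key=lambda c: models[c].logpdf(feature_vec))

    test = class_means["apc"] + 0.05 * rng.standard_normal(n_harmonics)
    print(classify(test))
    ```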

  14. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Mackerel is an undervalued fish captured by European fishing vessels. Value can be added to this species by classifying it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrozen) to obtain differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity

  15. An ensemble of SVM classifiers based on gene pairs.

    Science.gov (United States)

    Tong, Muchenxuan; Liu, Kun-Hong; Xu, Chungui; Ju, Wenbin

    2013-07-01

    In this paper, a genetic algorithm (GA) based ensemble support vector machine (SVM) classifier built on gene pairs (GA-ESP) is proposed. The SVMs (base classifiers of the ensemble system) are trained on different informative gene pairs. These gene pairs are selected by the top scoring pair (TSP) criterion. Each of these pairs projects the original microarray expression onto a 2-D space. Extensive permutation of gene pairs may reveal more useful information and potentially lead to an ensemble classifier with satisfactory accuracy and interpretability. GA is further applied to select an optimized combination of base classifiers. The effectiveness of the GA-ESP classifier is evaluated on both binary-class and multi-class datasets. PMID:23668348
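
    The sketch below illustrates the ensemble-on-gene-pairs idea: pairs are ranked with a simplified top-scoring-pair (TSP) score, one linear SVM is trained per selected pair, and predictions are combined by majority vote. The GA-based selection of base classifiers is omitted, and the synthetic data stand in for microarray expression values.

    ```python
    # Sketch of an ensemble of SVMs built on top-scoring gene pairs (no GA step).
    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n_samples, n_genes = 80, 20
    X = rng.standard_normal((n_samples, n_genes))
    y = (X[:, 3] > X[:, 7]).astype(int)          # class driven by one gene pair

    def tsp_score(i, j):
        # |P(X_i > X_j | class 1) - P(X_i > X_j | class 0)|
        gt = X[:, i] > X[:, j]
        return abs(gt[y == 1].mean() - gt[y == 0].mean())

    pairs = sorted(combinations(range(n_genes), 2),
                   key=lambda p: tsp_score(*p), reverse=True)[:5]
    base = [SVC(kernel="linear").fit(X[:, list(p)], y) for p in pairs]

    def predict(x):
        votes = [clf.predict(x[list(p)].reshape(1, -1))[0] for clf, p in zip(base, pairs)]
        return int(np.round(np.mean(votes)))     # majority vote over base SVMs

    x_new = rng.standard_normal(n_genes)
    print("predicted class:", predict(x_new))
    ```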

  16. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. Value can be added to this species by classifying it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrozen) to obtain differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  17. Query Strategies for Evading Convex-Inducing Classifiers

    CERN Document Server

    Nelson, Blaine; Huang, Ling; Joseph, Anthony D; Lee, Steven J; Rao, Satish; Tygar, J D

    2010-01-01

    Classifiers are often used to detect miscreant activities. We study how an adversary can systematically query a classifier to elicit information that allows the adversary to evade detection while incurring a near-minimal cost of modifying their intended malfeasance. We generalize the theory of Lowd and Meek (2005) to the family of convex-inducing classifiers that partition the input space into two sets, one of which is convex. We present query algorithms for this family that construct undetected instances of approximately minimal cost using only polynomially-many queries in the dimension of the space and in the level of approximation. Our results demonstrate that near-optimal evasion can be accomplished without reverse-engineering the classifier's decision boundary. We also consider general lp costs and show that near-optimal evasion on the family of convex-inducing classifiers is generally efficient for both positive and negative convexity for all levels of approximation if p=1.

  18. Construction of High-accuracy Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Hedieh Sajedi

    2014-04-01

    Full Text Available Several methods have been developed to construct ensembles. Some of these methods, such as Bagging and Boosting, are meta-learners, i.e. they can be applied to any base classifier. The combination of methods should be selected so that the classifiers cover each other's weaknesses. In an ensemble, the output of several classifiers is useful only when they disagree on some inputs. The degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in the performance of an ensemble is the accuracy of the base classifiers. It can be said that all procedures for constructing ensembles seek to achieve a balance between these two parameters, and successful methods reach a better balance. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as the generators of base classifiers. Subsequently, the classifiers are partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble, and three different selection methods are proposed for this phase. In the first proposed selection method, a classifier is selected randomly from each cluster. The second method selects the most accurate classifier from each cluster, and the third one selects the classifier nearest to the center of each cluster to construct the final ensemble. The results of the experiments on well-known datasets demonstrate the strength of our proposed approach, especially when selecting the most accurate classifiers from clusters and employing the Bagging generator.
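
    A minimal sketch of the "generate, cluster, select" recipe described above follows: bagged decision trees are generated, their prediction patterns on a validation split are clustered with k-means, and the most accurate member of each cluster is kept. The dataset, the number of base classifiers and the number of clusters are illustrative assumptions.

    ```python
    # Generate (bagging) -> cluster (k-means on prediction vectors) -> select.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.utils import resample

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Bagging as the generator of base classifiers.
    base = []
    for seed in range(30):
        Xb, yb = resample(X_tr, y_tr, random_state=seed)
        base.append(DecisionTreeClassifier(random_state=seed).fit(Xb, yb))

    # Cluster classifiers by their validation prediction vectors.
    pred_matrix = np.array([clf.predict(X_val) for clf in base])
    labels = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(pred_matrix)

    # Selection phase: keep the most accurate classifier from each cluster.
    selected = []
    for k in range(5):
        members = np.where(labels == k)[0]
        best = max(members, key=lambda i: (pred_matrix[i] == y_val).mean())
        selected.append(base[best])

    votes = np.array([clf.predict(X_val) for clf in selected])
    ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
    print("selected-ensemble validation accuracy:", (ensemble_pred == y_val).mean())
    ```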

  19. Cooling system for electronic components

    Energy Technology Data Exchange (ETDEWEB)

    Anderl, William James; Colgan, Evan George; Gerken, James Dorance; Marroquin, Christopher Michael; Tian, Shurong

    2016-05-17

    Embodiments of the present invention provide for non-interruptive fluid cooling of an electronic enclosure. One or more electronic component packages may be removable from a circuit card having a fluid flow system. When installed, the electronic component packages are coincident to and in a thermal relationship with the fluid flow system. If a particular electronic component package becomes non-functional, it may be removed from the electronic enclosure without affecting either the fluid flow system or other neighboring electronic component packages.

  20. Cooling system for electronic components

    Science.gov (United States)

    Anderl, William James; Colgan, Evan George; Gerken, James Dorance; Marroquin, Christopher Michael; Tian, Shurong

    2015-12-15

    Embodiments of the present invention provide for non-interruptive fluid cooling of an electronic enclosure. One or more electronic component packages may be removable from a circuit card having a fluid flow system. When installed, the electronic component packages are coincident to and in a thermal relationship with the fluid flow system. If a particular electronic component package becomes non-functional, it may be removed from the electronic enclosure without affecting either the fluid flow system or other neighboring electronic component packages.

  1. Intelligent Bayes Classifier (IBC for ENT infection classification in hospital environment

    Directory of Open Access Journals (Sweden)

    Dutta Ritabrata

    2006-12-01

    Full Text Available Abstract Electronic-nose-based ENT bacteria identification in the hospital environment is a classical and challenging classification problem. In this paper an electronic nose (e-nose), comprising a hybrid array of 12 tin oxide (SnO2) sensors and 6 conducting polymer sensors, has been used to identify three species of bacteria, Escherichia coli (E. coli), Staphylococcus aureus (S. aureus), and Pseudomonas aeruginosa (P. aeruginosa), responsible for ear, nose and throat (ENT) infections, collected as swab samples from infected patients and kept in ISO agar solution in the hospital environment. In the next stage a sub-classification technique has been developed for the classification of two different strains of S. aureus, namely Methicillin-Resistant S. aureus (MRSA) and Methicillin-Susceptible S. aureus (MSSA). An innovative Intelligent Bayes Classifier (IBC) based on Bayes' theorem and the maximum probability rule was developed and investigated for these three main groups of ENT bacteria. Along with the IBC, three other supervised classifiers (namely, Multilayer Perceptron (MLP), Probabilistic Neural Network (PNN), and Radial Basis Function Network (RBFN)) were used to classify the three main bacteria classes. A comparative evaluation of the classifiers was conducted for this application. IBC outperformed MLP, PNN and RBFN. The best results suggest that we are able to identify and classify the three main bacteria classes with up to a 100% accuracy rate using IBC. We have also achieved 100% classification accuracy for the classification of MRSA and MSSA samples with IBC. We can conclude that this study proves that an IBC-based e-nose can provide a very strong and rapid solution for the identification of ENT infections in the hospital environment.
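
    As a hedged illustration of the Bayes-with-maximum-probability-rule idea, the sketch below applies a Gaussian naive Bayes classifier to a synthetic 18-sensor e-nose array (12 + 6 sensors, as in the abstract). GaussianNB stands in for the paper's Intelligent Bayes Classifier, whose exact formulation is not reproduced here, and the sensor response profiles are invented.

    ```python
    # Generic Bayes classifier with the maximum-probability decision rule on a
    # synthetic 18-sensor e-nose array; GaussianNB is a stand-in for the IBC.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    classes = ["E. coli", "S. aureus", "P. aeruginosa"]
    n_per_class, n_sensors = 40, 18

    # Assumed class-specific sensor response profiles (illustrative only).
    profiles = {c: rng.uniform(0.2, 1.0, n_sensors) for c in classes}
    X = np.vstack([profiles[c] + 0.1 * rng.standard_normal((n_per_class, n_sensors))
                   for c in classes])
    y = np.repeat(classes, n_per_class)

    clf = GaussianNB()
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    clf.fit(X, y)
    sample = profiles["S. aureus"] + 0.1 * rng.standard_normal(n_sensors)
    posteriors = clf.predict_proba(sample.reshape(1, -1))[0]
    print(dict(zip(clf.classes_, posteriors.round(3))))   # maximum-probability rule
    ```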

  2. Computer-aided diagnosis system for classifying benign and malignant thyroid nodules in multi-stained FNAB cytological images

    International Nuclear Information System (INIS)

    An automated computer-aided diagnosis system is developed to classify benign and malignant thyroid nodules using multi-stained fine needle aspiration biopsy (FNAB) cytological images. In the first phase, image segmentation is performed to remove the background staining information and retain the appropriate foreground cell objects in the cytological images using mathematical morphology and watershed transform segmentation methods. Subsequently, statistical features are extracted using two-level discrete wavelet transform (DWT) decomposition, gray level co-occurrence matrix (GLCM) and Gabor filter based methods. The k-nearest neighbor (k-NN), Elman neural network (ENN) and support vector machine (SVM) classifiers are tested for classifying benign and malignant thyroid nodules. The combination of watershed segmentation, GLCM features and the k-NN classifier results in the lowest diagnostic accuracy of 60%. The highest diagnostic accuracy of 93.33% is achieved by the ENN classifier trained with the statistical features extracted by the Gabor filter bank from the images segmented by the morphology and watershed transform segmentation methods. It is also observed that the SVM classifier achieves its highest diagnostic accuracy of 90% for the DWT and Gabor filter based features along with the morphology and watershed transform segmentation methods. The experimental results suggest that the developed system with multi-stained thyroid FNAB images would be useful for identifying thyroid cancer irrespective of the staining protocol used.

  3. Class-specific Error Bounds for Ensemble Classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  4. Glycosylation site prediction using ensembles of Support Vector Machine classifiers

    Directory of Open Access Journals (Sweden)

    Silvescu Adrian

    2007-11-01

    Full Text Available Abstract Background Glycosylation is one of the most complex post-translational modifications (PTMs of proteins in eukaryotic cells. Glycosylation plays an important role in biological processes ranging from protein folding and subcellular localization, to ligand recognition and cell-cell interactions. Experimental identification of glycosylation sites is expensive and laborious. Hence, there is significant interest in the development of computational methods for reliable prediction of glycosylation sites from amino acid sequences. Results We explore machine learning methods for training classifiers to predict the amino acid residues that are likely to be glycosylated using information derived from the target amino acid residue and its sequence neighbors. We compare the performance of Support Vector Machine classifiers and ensembles of Support Vector Machine classifiers trained on a dataset of experimentally determined N-linked, O-linked, and C-linked glycosylation sites extracted from O-GlycBase version 6.00, a database of 242 proteins from several different species. The results of our experiments show that the ensembles of Support Vector Machine classifiers outperform single Support Vector Machine classifiers on the problem of predicting glycosylation sites in terms of a range of standard measures for comparing the performance of classifiers. The resulting methods have been implemented in EnsembleGly, a web server for glycosylation site prediction. Conclusion Ensembles of Support Vector Machine classifiers offer an accurate and reliable approach to automated identification of putative glycosylation sites in glycoprotein sequences.
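
    The sketch below illustrates only the general setup implied above: fixed-length sequence windows around candidate residues are one-hot encoded and a small ensemble of SVMs trained on bootstrap samples is combined by majority vote. The window size, the toy labelling rule and the random sequences are assumptions; real windows and labels would come from O-GlycBase.

    ```python
    # One-hot window encoding plus a small bootstrap ensemble of SVMs (sketch).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.utils import resample

    rng = np.random.default_rng(8)
    AA = "ACDEFGHIKLMNPQRSTVWY"
    WIN = 9                                              # residues per window

    def encode(window):
        vec = np.zeros((WIN, len(AA)))
        for i, aa in enumerate(window):
            vec[i, AA.index(aa)] = 1.0
        return vec.ravel()

    # Toy rule: windows containing at least two S/T residues are labelled "sites".
    windows = ["".join(rng.choice(list(AA), WIN)) for _ in range(600)]
    X = np.array([encode(w) for w in windows])
    y = np.array([int(sum(c in "ST" for c in w) >= 2) for w in windows])

    ensemble = []
    for seed in range(7):
        Xi, yi = resample(X, y, random_state=seed)       # bootstrap sample
        ensemble.append(SVC(kernel="rbf", gamma="scale").fit(Xi, yi))

    votes = np.array([clf.predict(X) for clf in ensemble])
    pred = (votes.mean(axis=0) > 0.5).astype(int)        # majority vote
    print("training-set majority-vote accuracy:", (pred == y).mean())
    ```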

  5. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed

    2011-07-01

    Full Text Available Breast cancer detection and diagnosis is a critical and complex procedure that demands a high degree of accuracy. In computer-aided diagnostic systems, breast cancer detection is a two-stage procedure: first, malignant and benign mammograms are classified, while in the second stage the type of abnormality is detected. In this paper, we have developed a novel architecture to enhance the classification of malignant and benign mammograms using multi-classification of malignant mammograms into six abnormality classes. DWT (Discrete Wavelet Transform) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, the results generated by the various classifiers are ensembled. A genetic algorithm is used to find optimal weights rather than assigning weights to the results of classifiers on the basis of heuristics. The mammograms declared malignant by the ensemble classifiers are divided into six classes. The ensemble classifiers are further used for multiclassification using the one-against-all technique. The output of all ensemble classifiers is combined by the product, median and mean rules. It has been observed that the accuracy of classification of abnormalities is more than 97% in the case of the mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.

  6. A Comparison of Unsupervised Classifiers on BATSE Catalog Data

    Science.gov (United States)

    Hakkila, Jon; Roiger, Richard J.; Haglin, David J.; Giblin, Timothy W.; Paciesas, William S.

    2003-04-01

    We classify BATSE gamma-ray bursts using unsupervised clustering algorithms in order to compare classification with statistical clustering techniques. BATSE bursts detected with homogeneous trigger criteria and measured with a limited attribute set (duration, hardness, and fluence) are classified using four unsupervised algorithms (the concept hierarchy classifier ESX, the EM algorithm, the K-means algorithm, and a Kohonen neural network). The classifiers prefer three-class solutions to two-class and four-class solutions. When forced to find two classes, the classifiers do not find the traditional long and short classes; many short soft events are placed in a class with the short hard bursts. When three classes are found, the classifiers clearly identify the short bursts, but place far more members in an intermediate duration soft class than have been found using statistical clustering techniques. It appears that the boundary between short faint and long bright bursts is more important to the classifiers than is the boundary between short hard and long soft bursts. We conclude that the boundary between short faint and long hard bursts is the result of data bias and poor attribute selection. We recommend that future gamma-ray burst classification avoid using extrinsic parameters such as fluence, and should instead concentrate on intrinsic properties such as spectral, temporal, and (when available) luminosity characteristics. Future classification should also be wary of correlated attributes (such as fluence and duration), as these bias classification results.

  7. Use of Mamdani-Assilian Fuzzy Controller for Combining Classifiers

    Czech Academy of Sciences Publication Activity Database

    Štefka, David; Holeňa, Martin

    Praha : Matfyzpress, 2007 - (Obdržálek, D.; Štanclová, J.; Plátek, M.), s. 88-97 ISBN 978-80-7378-033-3. [ MIS 2007. Malý informatický seminář /24./. Josefův důl (CZ), 13.01.2007-20.01.2007] R&D Projects: GA AV ČR 1ET100300517; GA ČR GA201/05/0325 Institutional research plan: CEZ:AV0Z10300504 Keywords : fuzzy control * classifier fusion * classifier aggregation * classifier combining Subject RIV: IN - Informatics, Computer Science

  8. Classifying Regularized Sensor Covariance Matrices: An Alternative to CSP.

    Science.gov (United States)

    Roijendijk, Linsey; Gielen, Stan; Farquhar, Jason

    2016-08-01

    Common spatial patterns (CSP) is a commonly used technique for classifying imagined movement type brain-computer interface (BCI) datasets. It has been very successful with many extensions and improvements on the basic technique. However, a drawback of CSP is that the signal processing pipeline contains two supervised learning stages: the first in which class-relevant spatial filters are learned and a second in which a classifier is used to classify the filtered variances. This may lead to potential overfitting issues, which are generally avoided by limiting CSP to only a few filters. PMID:26372428

  9. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    Directory of Open Access Journals (Sweden)

    Nowakowski Artur

    2015-10-01

    Full Text Available Boosting is a classification method which has been proven useful in non-satellite image processing while it is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm to a new land cover classification scenario based on the utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively to the data statistics.
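
    A minimal sketch of boosting with very simple threshold base classifiers is shown below, using AdaBoost with decision stumps (a single automatically chosen threshold per weak learner). The two synthetic "band" features and the labelling rule are assumptions standing in for the spectral and contextual features of the abstract.

    ```python
    # AdaBoost with decision-stump (single-threshold) base classifiers.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 2000
    bands = rng.uniform(0, 1, (n, 2))            # two synthetic spectral/contextual features
    y = ((bands[:, 0] < 0.35) & (bands[:, 1] > 0.5)).astype(int)   # toy "artificial area" mask

    X_tr, X_te, y_tr, y_te = train_test_split(bands, y, test_size=0.3, random_state=0)

    # A depth-1 tree is a threshold classifier whose cut point is chosen from the data.
    model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                               n_estimators=50, random_state=0)
    model.fit(X_tr, y_tr)
    print("overall accuracy:", model.score(X_te, y_te))
    ```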

  10. High speed intelligent classifier of tomatoes by colour, size and weight

    Energy Technology Data Exchange (ETDEWEB)

    Cement, J.; Novas, N.; Gazquez, J. A.; Manzano-Agugliaro, F.

    2012-11-01

    At present most horticultural products are classified and marketed according to quality standards, which provide a common language for growers, packers, buyers and consumers. The standardisation of both product and packaging enables greater speed and efficiency in management and marketing. Of all the vegetables grown in greenhouses, tomatoes are predominant in both surface area and tons produced. This paper presents the development and evaluation of a low-investment tomato classification system with two objectives: to put it at the service of producing farms and to classify according to trading standards. An intelligent classifier of tomatoes by weight, diameter and colour has been developed. The system optimises the algorithms needed for data processing in the case of tomatoes, so that productivity is greatly increased while using less expensive, lower-performance electronics. The prototype achieves very high-speed classification, 12.5 ratings per second, using accessible and low-cost commercial equipment. It reduces manual sorting time fourfold and is not sensitive to the variety of tomato classified. This system facilitates the processes of standardisation and quality control, increases the competitiveness of tomato farms and impacts positively on profitability. The automatic classification system described in this work represents a contribution from the economic point of view, as it is profitable for a farm in the short term (less than six months), while existing systems can only be used in large trading centers. (Author) 36 refs.

  11. Classifier performance estimation under the constraint of a finite sample size: resampling schemes applied to neural network classifiers.

    Science.gov (United States)

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-01-01

    In a practical classifier design problem the sample size is limited, and the available finite sample needs to be used both to design a classifier and to predict the classifier's performance for the true population. Since a larger sample is more representative of the population, it is advantageous to design the classifier with all the available cases, and to use a resampling technique for performance prediction. We conducted a Monte Carlo simulation study to compare the ability of different resampling techniques in predicting the performance of a neural network (NN) classifier designed with the available sample. We used the area under the receiver operating characteristic curve as the performance index for the NN classifier. We investigated resampling techniques based on the cross-validation, the leave-one-out method, and three different types of bootstrapping, namely, the ordinary, .632, and .632+ bootstrap. Our results indicated that, under the study conditions, there can be a large difference in the accuracy of the prediction obtained from different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited data set. PMID:18234468
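
    The sketch below illustrates one of the resampling schemes discussed above, the .632 bootstrap, applied to the AUC of a classifier designed on the full available sample. A logistic regression stands in for the neural network classifier, and the small-sample setup is an assumption for illustration.

    ```python
    # Apparent, ordinary-bootstrap and .632-bootstrap estimates of classifier AUC.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.utils import resample

    X, y = load_breast_cancer(return_X_y=True)
    X, y = X[:100], y[:100]                    # deliberately small sample
    clf = LogisticRegression(max_iter=5000)

    # Apparent (resubstitution) AUC: design and test on the full sample.
    auc_app = roc_auc_score(y, clf.fit(X, y).decision_function(X))

    # Ordinary bootstrap: train on a bootstrap sample, test on the left-out cases.
    oob_aucs = []
    for b in range(100):
        idx = resample(np.arange(len(y)), random_state=b)
        oob = np.setdiff1d(np.arange(len(y)), idx)
        if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
            continue                           # skip degenerate resamples
        clf.fit(X[idx], y[idx])
        oob_aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob])))
    auc_boot = np.mean(oob_aucs)

    # .632 estimator blends the optimistic apparent value with the pessimistic
    # out-of-bootstrap value.
    auc_632 = 0.368 * auc_app + 0.632 * auc_boot
    print(f"apparent={auc_app:.3f}  bootstrap={auc_boot:.3f}  .632={auc_632:.3f}")
    ```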

  12. A NON-PARAMETER BAYESIAN CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Liu Qingshan; Lu Hanqing; Ma Songde

    2003-01-01

    A non-parametric Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition, which can be regarded as a weighted Nearest Neighbor (NN) classifier in formation. The class-conditional density is estimated by KDE and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are respectively used to extract features, and the proposed method is compared with the Probabilistic Reasoning Models (PRM), Nearest Center (NC) and NN classifiers which are widely used in face recognition systems. The experiments are performed on two benchmarks and the experimental results show that the KDE outperforms the PRM, NC and NN classifiers.
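
    A hedged sketch of a KDE-based Bayes classifier follows: one kernel density estimate per class plus class priors, with the bandwidth chosen by cross-validation rather than the EM procedure of the paper, and PCA features of the digits dataset standing in for face-recognition subspace features.

    ```python
    # Non-parametric Bayes classifier: per-class KDE plus log-priors.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.neighbors import KernelDensity

    X, y = load_digits(return_X_y=True)
    X = PCA(n_components=15, random_state=0).fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    models, priors = {}, {}
    for c in np.unique(y_tr):
        Xc = X_tr[y_tr == c]
        # Bandwidth chosen by cross-validated log-likelihood (not EM).
        search = GridSearchCV(KernelDensity(kernel="gaussian"),
                              {"bandwidth": np.logspace(-1, 1, 10)}, cv=3)
        models[c] = search.fit(Xc).best_estimator_
        priors[c] = np.log(len(Xc) / len(X_tr))

    def predict(batch):
        classes = sorted(models)
        log_post = np.column_stack([models[c].score_samples(batch) + priors[c]
                                    for c in classes])
        return np.array(classes)[log_post.argmax(axis=1)]

    print("test accuracy:", (predict(X_te) == y_te).mean())
    ```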

  13. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    Science.gov (United States)

    Nowakowski, Artur

    2015-10-01

    Boosting is a classification method which has been proven useful in non-satellite image processing while it is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm to a new land cover classification scenario based on the utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively to the data statistics. The proposed method is employed for the exemplary problem of artificial area identification. Classification of IKONOS multispectral data results in a short computational time and an overall accuracy of 94.4%, compared to 94.0% obtained using AdaBoost.M1 with trees and 93.8% achieved using Random Forest. The influence of manipulating the final threshold of the strong classifier on the classification results is reported.

  14. A semi-automated approach to building text summarisation classifiers

    Directory of Open Access Journals (Sweden)

    Matias Garcia-Constantino

    2012-12-01

    Full Text Available An investigation into the extraction of useful information from the free text element of questionnaires, using a semi-automated summarisation extraction technique, is described. The summarisation technique utilises the concept of classification but with the support of domain/human experts during classifier construction. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool), is presented and evaluated using real questionnaire data. The results of this evaluation are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

  15. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS - R code

    OpenAIRE

    Irawan, Dasapta Erwin; Gio, Prana Ugiana

    2016-01-01

    The following R code was used in the paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS". Authors: Prihadi Sumintadireja, Dasapta Erwin Irawan, Yuano Rezky, Prana Ugiana Gio, Anggita Agustin

  16. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS

    OpenAIRE

    Sumintadireja, Prihadi; Irawan, Dasapta Erwin; Rezky, Yuanno; Gio, Prana Ugiana; Agustin, Anggita

    2016-01-01

    This file is the dataset for the paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS". Authors: Prihadi Sumintadireja, Dasapta Erwin Irawan, Yuano Rezky, Prana Ugiana Gio, Anggita Agustin

  17. Which Is Better: Holdout or Full-Sample Classifier Design?

    Directory of Open Access Journals (Sweden)

    Edward R. Dougherty

    2008-04-01

    Full Text Available Is it better to design a classifier and estimate its error on the full sample or to design a classifier on a training subset and estimate its error on the holdout test subset? Full-sample design provides the better classifier; nevertheless, one might choose holdout with the hope of better error estimation. A conservative criterion to decide the best course is to aim at a classifier whose error is less than a given bound. Then the choice between full-sample and holdout designs depends on which possesses the smaller expected bound. Using this criterion, we examine the choice between holdout and several full-sample error estimators using covariance models and a patient-data model. Full-sample design consistently outperforms holdout design. The relation between the two designs is revealed via a decomposition of the expected bound into the sum of the expected true error and the expected conditional standard deviation of the true error.

  18. AUTO CLAIM FRAUD DETECTION USING MULTI CLASSIFIER SYSTEM

    Directory of Open Access Journals (Sweden)

    Luis Alexandre Rodrigues

    2014-06-01

    Full Text Available Through a cost matrix and a combination of classifiers, this work identifies the most economical model for detecting suspected cases of fraud in a dataset of automobile claims. The experiments performed in this work show that, by working more deeply with the sampled data in the training and test phases of each classifier, it is possible to obtain a more economical model than others presented in the literature.

  19. Automatic Genre Classification of Latin Music Using Ensemble of Classifiers

    OpenAIRE

    Silla Jr, Carlos N.; Kaestner, Celso A.A.; Koerich, Alessandro L.

    2006-01-01

    This paper presents a novel approach to the task of automatic music genre classification which is based on ensemble learning. Feature vectors are extracted from three 30-second music segments from the beginning, middle and end of each music piece. Individual classifiers are trained to account for each music segment. During classification, the output provided by each classifier is combined with the aim of improving music genre classification accuracy. Experiments carried out on a dataset conta...

  20. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time, which is a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having different ranges of values. It provides balanced smoothing parameters for these datasets through a logarithmic function and by changing the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural network in the MATLAB environment. The one-pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase the classification performance. PMID
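
    The sketch below illustrates the one-pass idea in a generic Parzen/PNN-style classifier: each class's smoothing parameter is derived directly from the class standard deviation in a single pass, with a threshold choosing between two smoothing functions. The two functions and the threshold value are assumptions, not the paper's definitions.

    ```python
    # One-pass smoothing-parameter selection for a Parzen/PNN-style classifier.
    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    THRESHOLD = 0.5
    sigmas = {}
    for c in np.unique(y_tr):
        s = X_tr[y_tr == c].std()
        # Two assumed smoothing functions, selected by thresholding the class std.
        sigmas[c] = np.log1p(s) if s >= THRESHOLD else s

    def predict(x):
        scores = {}
        for c, sig in sigmas.items():
            d2 = ((X_tr[y_tr == c] - x) ** 2).sum(axis=1)
            scores[c] = np.exp(-d2 / (2 * sig ** 2)).mean()   # Parzen-style class score
        return max(scores, key=scores.get)

    preds = np.array([predict(x) for x in X_te])
    print("test accuracy:", (preds == y_te).mean())
    ```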

  1. Subtractive Fuzzy Classifier Based Driver Distraction Levels Classification Using EEG

    OpenAIRE

    Wali, Mousa Kadhim; Murugappan, Murugappan; Ahmad, Badlishah

    2013-01-01

    [Purpose] In earlier studies of driver distraction, researchers classified distraction into two levels (not distracted, and distracted). This study classified four levels of distraction (neutral, low, medium, high). [Subjects and Methods] Fifty Asian subjects (n=50, 43 males, 7 females), age range 20–35 years, who were free from any disease, participated in this study. Wireless EEG signals were recorded by 14 electrodes during four types of distraction stimuli (Global Position Systems (GPS), ...

  2. Classification of Breast Cancer Using SVM Classifier Technique

    OpenAIRE

    B.Senthil Murugan; S.Srirambabu; Santhosh Kumar. V

    2010-01-01

    This paper proposes a technique for classifying breast cancer from mammograms. The proposed system aims at developing a visualization tool for detecting breast cancer and minimizing the scheme of detection. The detection method is organized as follows: (a) Image Enhancement, (b) Segmentation, (c) Feature extraction, (d) Classification using the SVM classifier technique. The image enhancement step concentrates on converting an image to a more and better understandable level by applying Median...

  3. Classifying pedestrian shopping behaviour according to implied heuristic choice rules

    OpenAIRE

    Shigeyuki Kurose; Aloys W J Borgers; Timmermans, Harry J. P.

    2001-01-01

    Our aim in this paper is to build and test a model which classifies and identifies pedestrian shopping behaviour in a shopping centre by using temporal and spatial choice heuristics. In particular, the temporal local-distance-minimising, total-distance-minimising, and global-distance-minimising heuristic choice rules and spatial nearest-destination-oriented, farthest-destination-oriented, and intermediate-destination-oriented choice rules are combined to classify and identify the stop sequenc...

  4. Classifying Tweet Level Judgements of Rumours in Social Media

    OpenAIRE

    Lukasik, Michal; Cohn, Trevor; Bontcheva, Kalina

    2015-01-01

    Social media is a rich source of rumours and corresponding community reactions. Rumours reflect different characteristics, some shared and some individual. We formulate the problem of classifying tweet level judgements of rumours as a supervised learning task. Both supervised and unsupervised domain adaptation are considered, in which tweets from a rumour are classified on the basis of other annotated rumours. We demonstrate how multi-task learning helps achieve good results on rumours from t...

  5. MASTER REGULATORS USED AS BREAST CANCER METASTASIS CLASSIFIER*

    OpenAIRE

    Lim, Wei Keat; Lyashenko, Eugenia; Califano, Andrea

    2009-01-01

    Computational identification of prognostic biomarkers capable of withstanding follow-up validation efforts is still an open challenge in cancer research. For instance, several gene expression profiles analysis methods have been developed to identify gene signatures that can classify cancer sub-phenotypes associated with poor prognosis. However, signatures originating from independent studies show only minimal overlap and perform poorly when classifying datasets other than the ones they were g...

  6. Mining housekeeping genes with a Naive Bayes classifier

    OpenAIRE

    Aitken Stuart; De Ferrari Luna

    2006-01-01

    Abstract Background Traditionally, housekeeping and tissue specific genes have been classified using direct assay of mRNA presence across different tissues, but these experiments are costly and the results not easy to compare and reproduce. Results In this work, a Naive Bayes classifier based only on physical and functional characteristics of genes already available in databases, like exon length and measures of chromatin compactness, has achieved a 97% success rate in classification of human...

  7. Mining housekeeping genes with a Naive Bayes classifier

    OpenAIRE

    Ferrari, Luna De; Aitken, Stuart

    2006-01-01

    BACKGROUND: Traditionally, housekeeping and tissue specific genes have been classified using direct assay of mRNA presence across different tissues, but these experiments are costly and the results not easy to compare and reproduce.RESULTS: In this work, a Naive Bayes classifier based only on physical and functional characteristics of genes already available in databases, like exon length and measures of chromatin compactness, has achieved a 97% success rate in classification of human houseke...

  8. Dealing with contaminated datasets: An approach to classifier training

    Science.gov (United States)

    Homenda, Wladyslaw; Jastrzebska, Agnieszka; Rybnik, Mariusz

    2016-06-01

    The paper presents a novel approach to classification reinforced with a rejection mechanism. The method is based on a two-tier set of classifiers. The first layer classifies elements; the second layer separates native elements from foreign ones in each distinguished class. The key novelty presented here is the rejection mechanism training scheme following the philosophy "one-against-all-other-classes". The proposed method was tested in an empirical study of handwritten digit recognition.
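
    A minimal sketch of such a two-tier classify-then-reject scheme is given below: tier one assigns a class, and tier two holds one native-versus-foreign detector per class trained only on that class's elements. A one-class SVM is used here as the per-class detector, and scikit-learn's small digits dataset stands in for the study's handwritten-digit data; both are assumptions.

    ```python
    # Two-tier classification with rejection: classify, then verify "nativeness".
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import OneClassSVM

    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Tier one: an ordinary multiclass classifier.
    tier1 = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    # Tier two: one native-vs-foreign detector per class, trained on that class only.
    tier2 = {c: OneClassSVM(nu=0.05, gamma="scale").fit(X_tr[y_tr == c])
             for c in np.unique(y_tr)}

    def classify_with_rejection(x):
        c = tier1.predict(x.reshape(1, -1))[0]
        if tier2[c].predict(x.reshape(1, -1))[0] == -1:
            return None                      # rejected as a foreign element
        return c

    foreign = np.random.default_rng(7).random(64)   # noise belonging to no class
    print("digit sample ->", classify_with_rejection(X_te[0]), "(true label:", y_te[0], ")")
    print("noise sample ->", classify_with_rejection(foreign))
    ```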

  9. The Virtually Cyclic Classifying Space of the Heisenberg Group

    OpenAIRE

    Manion, Andrew; Pham, Lisa; Poelhuis, Jonathan

    2008-01-01

    We are interested in the relationship between the virtual cohomological dimension (or vcd) of a discrete group Gamma and the smallest possible dimension of a model for the classifying space of Gamma relative to its family of virtually cyclic subgroups. In this paper we construct a model for the virtually cyclic classifying space of the Heisenberg group. This model has dimension 3, which equals the vcd of the Heisenberg group. We also prove that there exists no model of dimension less than 3.

  10. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin

    2010-02-01

    Full Text Available Abstract Background Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.

  11. Classifying Emotion in News Sentences: When Machine Classification Meets Human Classification

    Directory of Open Access Journals (Sweden)

    Plaban Kumar Bhowmick

    2010-01-01

    Full Text Available Multiple emotions are often evoked in readers in response to text stimuli such as news articles. In this paper, we present a method for classifying news sentences into multiple emotion categories. The corpus consists of 1000 news sentences and the emotion tags considered were anger, disgust, fear, happiness, sadness and surprise. We performed different experiments to compare machine classification with human classification of emotion. In both cases, it has been observed that combining the anger and disgust classes results in better classification, and that removing surprise, which is a highly ambiguous class in human classification, improves the performance. The words present in the sentences and the polarity of the subject, object and verb were used as features. The classifier performs better with the word and polarity feature combination than with a feature set consisting only of words. The best performance has been achieved with the corpus where the anger and disgust classes are combined and the surprise class is removed. In this experiment, the average precision was computed to be 79.5% and the average class-wise micro F1 was found to be 59.52%.

  12. A Lightweight Data Preprocessing Strategy with Fast Contradiction Analysis for Incremental Classifier Learning

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2015-01-01

    Full Text Available A prime objective in constructing data stream mining models is to achieve good accuracy, fast learning, and robustness to noise. Although many techniques have been proposed in the past, efforts to improve the accuracy of classification models have been somewhat disparate. These techniques include, but are not limited to, feature selection, dimensionality reduction, and the removal of noise from training data. One limitation common to all of these techniques is the assumption that the full training dataset must be applied. Although this has been effective for traditional batch training, it may not be practical for incremental classifier learning, also known as data stream mining, where only a single pass over the data stream is seen at a time. Because data streams can be effectively unbounded (the so-called big data phenomenon), data preprocessing time must be kept to a minimum. This paper introduces a new data preprocessing strategy suitable for the progressive purging of noisy data from the training dataset without the need to process the whole dataset at one time. This strategy is shown via computer simulation to provide the significant benefit of allowing for the dynamic removal of bad records from the incremental classifier learning process.
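
    The sketch below illustrates chunk-by-chunk purging before incremental learning in the spirit of the abstract: within each incoming chunk, records whose labels contradict the majority of their nearest neighbours are dropped, and an incrementally trained classifier is updated on the cleaned chunk only. The contradiction rule, chunk size and synthetic noisy stream are assumptions.

    ```python
    # Progressive purging of suspect records per chunk, then incremental training.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(5)
    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
    noise = rng.random(len(y)) < 0.1
    y_noisy = np.where(noise, 1 - y, y)           # 10% of labels flipped

    clf = SGDClassifier(random_state=0)
    classes = np.unique(y)

    for start in range(0, len(y), 500):           # single pass over the stream
        Xc, yc = X[start:start + 500], y_noisy[start:start + 500]
        nn = NearestNeighbors(n_neighbors=6).fit(Xc)
        _, idx = nn.kneighbors(Xc)
        neighbour_vote = yc[idx[:, 1:]].mean(axis=1)      # exclude the point itself
        keep = np.abs(neighbour_vote - yc) < 0.5          # drop contradicted records
        clf.partial_fit(Xc[keep], yc[keep], classes=classes)

    print("accuracy against clean labels:", clf.score(X, y))
    ```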

  13. Evaluating and classifying the readiness of technology specifications for national standardization.

    Science.gov (United States)

    Baker, Dixie B; Perlin, Jonathan B; Halamka, John

    2015-05-01

    The American Recovery and Reinvestment Act (ARRA) of 2009 clearly articulated the central role that health information technology (HIT) standards would play in improving healthcare quality, safety, and efficiency through the meaningful use of certified, standards-based electronic health record (EHR) technology. In 2012, the Office of the National Coordinator (ONC) asked the Nationwide Health Information Network (NwHIN) Power Team of the Health Information Technology Standards Committee (HITSC) to develop comprehensive, objective, and, to the extent practical, quantitative criteria for evaluating technical standards and implementation specifications and classifying their readiness for national adoption. The Power Team defined criteria, attributes, and metrics for evaluating and classifying technical standards and specifications as 'emerging,' 'pilot,' or 'ready for national standardization' based on their maturity and adoptability. The ONC and the HITSC are now using these metrics for assessing the readiness of technical standards for national adoption. PMID:24872342

  14. Teaching with Crystal Structures: Helping Students Recognize and Classify the Smallest Repeating Particle in a Given Substance

    Science.gov (United States)

    Smithenry, Dennis W.

    2009-01-01

    Classifying a particle requires an understanding of the type of bonding that exists within and among the particles, which requires an understanding of atomic structure and electron configurations, which requires an understanding of the elements of periodic properties, and so on. Rather than getting tangled up in all of these concepts at the start…

  15. China's Electronic Information Product Energy Consumption Standard

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The electronic information industry of China is facing increasingly urgent ecological challenges. This year, China will study and advance an electronic information product energy consumption standard, and establish a key list of pollution controls and a classified framework system.

  16. Laser Hair Removal

    Science.gov (United States)

    Hair Removal, Laser. AFTER: Two laser hair removal treatments were performed. This picture is ... Procedure Overview: With just the right type of laser or Intense Pulsed Light (IPL) technology, suitable hairs ...

  17. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article presents a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is a multilayer perceptron neural network. The article describes the inner structure of the neural network used as well as its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper training, the neural network is capable of classifying six of the most commonly used types of VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks that are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.

  18. [Horticultural plant diseases multispectral classification using combined classified methods].

    Science.gov (United States)

    Feng, Jie; Li, Hong-Ning; Yang, Wei-Ping; Hou, De-Dong; Liao, Ning-Fang

    2010-02-01

    Research on multispectral data processing is receiving more and more attention with the development of multispectral techniques, data capturing capability, and the application of multispectral techniques in agricultural practice. In the present paper, the familiar diseases of a cultivated plant, cucumber (Trichothecium roseum, Sphaerotheca fuliginea, Cladosporium cucumerinum, Corynespora cassiicola, Pseudoperonospora cubensis), are the research objects. Multispectral images of cucumber leaves in 14 visible-light channels, a near-infrared channel and a panchromatic channel were captured using a narrow-band multispectral imaging system under a standard observation and illumination environment, and 210 multispectral data samples, consisting of the 16-band spectral reflectance of the different cucumber diseases, were obtained. The 210 samples were classified by distance, correlation and BP neural network methods to explore an effective combination of classification methods for making a diagnosis. The results show that the combination of the distance and BP neural network classification methods performs better than either method alone, so that the advantage of each method is fully used. The workflow for recognizing horticultural plant diseases using combined classification methods is also presented. PMID:20384138

  19. SpectraClassifier 1.0: a user friendly, automated MRS-based classifier-development system

    Directory of Open Access Journals (Sweden)

    Julià-Sapé Margarida

    2010-02-01

    Full Text Available Abstract Background SpectraClassifier (SC) is a Java solution for designing and implementing Magnetic Resonance Spectroscopy (MRS)-based classifiers. The main goal of SC is to allow users with minimum background knowledge of multivariate statistics to perform a fully automated pattern recognition analysis. SC incorporates feature selection (greedy stepwise approach, either forward or backward) and feature extraction (PCA). Fisher Linear Discriminant Analysis is the method of choice for classification. Classifier evaluation is performed through various methods: display of the confusion matrix of the training and testing datasets; K-fold cross-validation, leave-one-out and bootstrapping as well as Receiver Operating Characteristic (ROC) curves. Results SC is composed of the following modules: Classifier design, Data exploration, Data visualisation, Classifier evaluation, Reports, and Classifier history. It is able to read low resolution in-vivo MRS (single-voxel and multi-voxel) and high resolution tissue MRS (HRMAS), processed with existing tools (jMRUI, INTERPRET, 3DiCSI or TopSpin). In addition, to facilitate exchanging data between applications, a standard format capable of storing all the information needed for a dataset was developed. Each functionality of SC has been specifically validated with real data with the purpose of bug-testing and methods validation. Data from the INTERPRET project was used. Conclusions SC is a user-friendly software designed to fulfil the needs of potential users in the MRS community. It accepts all kinds of pre-processed MRS data types and classifies them semi-automatically, allowing spectroscopists to concentrate on interpretation of results with the use of its visualisation tools.

  20. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K;

    2011-01-01

    We develop a novel statistical approach for classifying generalists and specialists in two distinct habitats. Using a multinomial model based on estimated species relative abundance in two habitats, our method minimizes bias due to differences in sampling intensities between two habitat types...... as well as bias due to insufficient sampling within each habitat. The method permits a robust statistical classification of habitat specialists and generalists, without excluding rare species a priori. Based on a user-defined specialization threshold, the model classifies species into one of four groups......: (1) generalist; (2) habitat A specialist; (3) habitat B specialist; and (4) too rare to classify with confidence. We illustrate our multinomial classification method using two contrasting data sets: (1) bird abundance in woodland and heath habitats in southeastern Australia and (2) tree abundance...
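
    A highly simplified sketch of the underlying decision rule is shown below: a species is classified by comparing its relative abundances in the two habitats against a user-defined specialization threshold K, with very low total counts deferred as too rare. The paper's multinomial model, bias corrections and formal test are not reproduced; the counts and thresholds are invented.

```python
# Simplified sketch of the classification idea: compare a species' estimated
# relative abundance in two habitats against a user-defined specialisation
# threshold K. The rarity cut-off and the lack of the paper's bias corrections
# and formal test are simplifying assumptions for illustration only.
def classify_species(count_a, count_b, total_a, total_b, K=2.0, min_count=10):
    """Return 'generalist', 'habitat A specialist', 'habitat B specialist',
    or 'too rare to classify'."""
    if count_a + count_b < min_count:
        return "too rare to classify"
    # relative abundance in each habitat (proportion of that habitat's individuals)
    p_a = count_a / total_a
    p_b = count_b / total_b
    if p_a >= K * p_b:
        return "habitat A specialist"
    if p_b >= K * p_a:
        return "habitat B specialist"
    return "generalist"

# toy example: counts of three species in woodland (A) and heath (B)
species = {"sp1": (40, 5), "sp2": (12, 15), "sp3": (2, 3)}
total_a = sum(a for a, _ in species.values())
total_b = sum(b for _, b in species.values())
for name, (a, b) in species.items():
    print(name, classify_species(a, b, total_a, total_b))
```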

  1. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

    Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as a part of preprocessing. For preprocessing, digitization, segmentation, normalization and thinning are done, under the assumption that the image has almost no noise. Further, an affine-invariant-moments-based model is used for feature extraction, and finally Support Vector Machine (SVM) and fuzzy classifiers are used for numeral classification. The comparison of the SVM and fuzzy classifiers shows that the SVM produced better results than the fuzzy classifier.

  2. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams’ plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  3. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M.J.; Duin, Robert P.W.;

    2013-01-01

    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the...... assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification....... The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. (C) 2012 Elsevier Ltd. All rights reserved....

  4. WORD SENSE DISAMBIGUATION BASED ON IMPROVED BAYESIAN CLASSIFIERS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Word Sense Disambiguation (WSD) is the task of deciding the sense of an ambiguous word in a particular context. Most current studies on WSD use only a few ambiguous words as test samples, which leads to limitations in practical application. In this paper, we perform a WSD study based on a large-scale real-world corpus using two unsupervised learning algorithms: a ±n-improved Bayesian model and a Dependency Grammar (DG)-improved Bayesian model. The ±n-improved classifiers reduce the size of the context window around ambiguous words with a close-distance feature extraction method and decrease the interference of useless features, thereby clearly improving the accuracy, which reaches 83.18% (in open test). The DG-improved classifier can more effectively overcome the noise effect present in the naive Bayesian classifier. Experimental results show that this approach performs well on Chinese WSD, with the open test achieving an accuracy of 86.27%.
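
    The following toy sketch shows the window-based naive Bayesian idea in its simplest form: the classifier sees only the ±n words around the ambiguous target. The miniature corpus and the n = 2 window are assumptions, and the dependency-grammar variant is not attempted here.

```python
# Toy sketch of window-based naive Bayesian WSD: each training example is the
# +/- n words around an ambiguous target, and the sense is predicted from those
# context words. The miniature corpus and n=2 window are assumptions; the
# paper's dependency-grammar variant is not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def window(sentence, target, n=2):
    toks = sentence.lower().split()
    i = toks.index(target)
    return " ".join(toks[max(0, i - n):i] + toks[i + 1:i + 1 + n])

train = [
    ("deposit money in the bank account", "bank", "finance"),
    ("the bank raised interest rates", "bank", "finance"),
    ("we sat on the river bank fishing", "bank", "river"),
    ("the muddy bank of the stream", "bank", "river"),
]
contexts = [window(s, t) for s, t, _ in train]
senses = [sense for _, _, sense in train]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(contexts), senses)

test = window("she opened a bank account yesterday", "bank")
print(clf.predict(vec.transform([test])))   # expected: ['finance']
```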

  5. Iris Recognition Based on LBP and Combined LVQ Classifier

    CERN Document Server

    Shams, M Y; Nomir, O; El-Awady, R M; 10.5121/ijcsit.2011.3506

    2011-01-01

    Iris recognition is considered one of the best biometric methods used for human identification and verification because of its unique features, which differ from one person to another, and its importance in the security field. This paper proposes an algorithm for iris recognition and classification using a system based on Local Binary Pattern (LBP) and histogram properties as statistical approaches for feature extraction, and a Combined Learning Vector Quantization (LVQ) Classifier as a neural network approach for classification, in order to build a hybrid model that depends on both features. The localization and segmentation techniques are presented using both Canny edge detection and the Hough Circular Transform in order to isolate the iris from the whole eye image and for noise detection. The feature vectors resulting from LBP are applied to a Combined LVQ classifier with different classes to determine the minimum acceptable performance, and the result is based on majority voting among several LVQ classifiers. Different iris da...

  6. A History of Classified Activities at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    2001-01-30

    The facilities that became Oak Ridge National Laboratory (ORNL) were created in 1943 during the United States' super-secret World War II project to construct an atomic bomb (the Manhattan Project). During World War II and for several years thereafter, essentially all ORNL activities were classified. Now, in 2000, essentially all ORNL activities are unclassified. The major purpose of this report is to provide a brief history of ORNL's major classified activities from 1943 until the present (September 2000). This report is expected to be useful to the ORNL Classification Officer and to ORNL's Authorized Derivative Classifiers and Authorized Derivative Declassifiers in their classification review of ORNL documents, especially those documents that date from the 1940s and 1950s.

  7. What Does(n't) K-theory Classify?

    CERN Document Server

    Evslin, J

    2006-01-01

    We review various K-theory classification conjectures in string theory. Sen conjecture based proposals classify D-brane trajectories in backgrounds with no H flux, while Freed-Witten anomaly based proposals classify conserved RR charges and magnetic RR fluxes in topologically time-independent backgrounds. In exactly solvable CFTs a classification of well-defined boundary states implies that there are branes representing every twisted K-theory class. Some of these proposals fail to respect the self-duality of the RR fields in the democratic formulation of type II supergravity and none respect S-duality in type IIB string theory. We discuss two applications. The twisted K-theory classification has led to a conjecture for the topology of the T-dual of any configuration. In the Klebanov-Strassler geometry twisted K-theory classifies universality classes of baryonic vacua.

  8. Examining the significance of fingerprint-based classifiers

    Directory of Open Access Journals (Sweden)

    Collins Jack R

    2008-12-01

    Full Text Available Abstract Background Experimental examinations of biofluids to measure concentrations of proteins or their fragments or metabolites are being explored as a means of early disease detection, distinguishing diseases with similar symptoms, and drug treatment efficacy. Many studies have produced classifiers with a high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to examine datasets designed to contain no information with classifiers used in many published studies. Results The classification accuracy of two fingerprint-based classifiers, a decision tree (DT) algorithm and a medoid classification algorithm (MCA), is examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or less, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. For even the largest dataset (300 Cases and 300 Controls) the MCA procedure finds several unique classifiers that have an average sensitivity and specificity above 88% using only six or seven features. Conclusion While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible and can produce good results from datasets that are specifically constructed to contain no
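
    The sanity check behind this result can be reproduced in a few lines: fit a small decision tree to purely random "biomarker" data and note how optimistic the accuracy looks when measured on the same data used to build the tree, compared with cross-validation. The dataset sizes below are arbitrary assumptions.

```python
# Sketch of the paper's sanity check: purely random "biomarker" concentrations
# contain no signal, yet a small decision tree can look accurate if you evaluate
# it on the data used to build it. Cross-validation reveals chance-level
# performance. Dataset sizes here are arbitrary assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 300))          # 30 Cases + 30 Controls, 300 random features
y = np.array([1] * 30 + [0] * 30)

tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(X, y)
print("resubstitution accuracy:", tree.score(X, y))            # optimistically high
print("10-fold CV accuracy:   ", cross_val_score(
    DecisionTreeClassifier(max_leaf_nodes=8, random_state=0), X, y, cv=10).mean())
```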

  9. Text Classification: Classifying Plain Source Files with Neural Network

    Directory of Open Access Journals (Sweden)

    Jaromir Veber

    2010-10-01

    Full Text Available Automated text file categorization has an important place in computer engineering, particularly in the process called data management automation. A lot has been written about text classification, and the methods allowing classification of these files are well known. Unfortunately, most studies are theoretical, and more research is needed for practical implementation. I decided to contribute with research focused on creating a classifier for different kinds of programs (source files, scripts, …). This paper describes a practical implementation of a classifier for text files based on file content.

  10. Dynamic Classifier Systems and their Applications to Random Forest Ensembles

    Czech Academy of Sciences Publication Activity Database

    Štefka, David; Holeňa, Martin

    Berlin: Springer, 2009 - (Kolehmainen, M.; Toivanen, P.; Beliczynski, B.), s. 458-468. (Lecture Notes in Computer Science. 5495). ISBN 978-3-642-04920-0. [ICANNGA'2009. International conference /9./. Kuopio (FI), 23.04.2009-25.04.2009] R&D Projects: GA AV ČR 1ET100300517; GA ČR GA201/08/0802 Institutional research plan: CEZ:AV0Z10300504 Keywords : classifier combining * dynamic classifier aggregation * random forests * classification Subject RIV: IN - Informatics, Computer Science

  11. A Vertical Search Engine – Based On Domain Classifier

    OpenAIRE

    Rajashree Shettar; Rahul Bhuptani

    2008-01-01

    The World Wide Web is growing exponentially, and the dynamic, unstructured nature of the web makes it difficult to locate useful resources. Web search engines such as Google and Alta Vista provide a huge amount of information, much of which might not be relevant to the user's query. In this paper, we build a vertical search engine which takes a seed URL and classifies the URLs crawled as Medical or Finance domains. The filter component of the vertical search engine classifies the web pages downloa...

  12. Comparison of removal of endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid and citric acid in primary teeth: A scanning electron microscopic study

    OpenAIRE

    Hegde, Rahul J.; Kavita Bapna

    2016-01-01

    Background: Root canal irrigants are considered important for their tissue-dissolving properties, for eliminating microorganisms, and for removing the smear layer. The present study aimed to compare the removal of the endodontic smear layer using ethylene glycol bis (beta-amino ethyl ether)-N, N, N', N'-tetraacetic acid (EGTA) and citric acid solutions, with saline as a control, in primary anterior teeth. Materials and Methods: Thirty primary anterior teeth were chosen for the study. The teeth were distribute...

  13. Sensitivity study of a semiautomatic supervised classifier applied to minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Flesche, Harald

    2000-01-01

    spectroscopy (EDS) in a scanning electron microscope (SEM). Extensions to traditional multivariate statistical methods are applied to perform the classification. Training sets are grown from one or a few seed points by a method that ensures spatial and spectral closeness of observations. Spectral closeness is...... training, a standard quadratic classifier is applied. The performance for each parameter setting is measured by the overall misclassification rate on an independently generated validation set. The classification method is presently used as a routine petrographical analysis method at Norsk Hydro Research...

  14. Sensitivity study of a semiautomatic supervised classifier applied to minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Flesche, Harald

    1999-01-01

    spectroscopy (EDS) in a scanning electron microscope (SEM). Extensions to traditional multivariate statistical methods are applied to perform the classification. Training sets are grown from one or a few seed points by a method that ensures spatial and spectral closeness of observations. Spectral closeness is...... training, a standard quadratic classifier is applied. The performance for each parameter setting is measured by the overall misclassification rate on an independently generated validation set. The classification method is presently used as a routine petrographical analysis method at Norsk Hydro Research...

  15. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  16. Support vector machines classifiers of physical activities in preschoolers

    Science.gov (United States)

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children's physical activity data acquired from an accelerometer. In this study, 69 children aged 3-5 years old were asked to participate in a s...

  17. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    Science.gov (United States)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper is related to attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is done and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by performing empirical analysis. For the empirical analysis, a random forest is generated using each of the five selected split measures, chosen one at a time, i.e., a random forest using information gain, a random forest using gain ratio, etc. The next step, based on this theoretical and empirical analysis, is a new approach of a hybrid decision tree model for the random forest classifier. In this model, the individual decision trees in the random forest are generated using different split measures. This model is augmented by weighted voting based on the strength of the individual trees. The new approach has shown a notable increase in the accuracy of the random forest.
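
    A minimal sketch of the hybrid idea is shown below: trees in the forest are grown with different split measures and their votes are weighted by each tree's strength on a validation split. scikit-learn only exposes the 'gini' and 'entropy' criteria, so the paper's five measures and exact weighting scheme are not reproduced; the Iris data and all settings are illustrative assumptions.

```python
# Minimal sketch of a "hybrid" forest whose trees use different split measures,
# with each tree's vote weighted by its strength on a held-out validation split.
# Only the 'gini' and 'entropy' criteria are available here, so this illustrates
# the idea rather than reimplementing the paper's model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, random_state=1)

trees, weights = [], []
for i in range(20):
    criterion = "gini" if i % 2 == 0 else "entropy"      # alternate split measures
    rng = np.random.default_rng(i)
    idx = rng.integers(0, len(X_fit), len(X_fit))        # bootstrap sample
    tree = DecisionTreeClassifier(criterion=criterion, max_features="sqrt",
                                  random_state=i).fit(X_fit[idx], y_fit[idx])
    trees.append(tree)
    weights.append(tree.score(X_val, y_val))             # tree strength -> vote weight

def predict(X_new):
    votes = np.zeros((len(X_new), 3))                    # 3 iris classes
    for tree, w in zip(trees, weights):
        votes[np.arange(len(X_new)), tree.predict(X_new)] += w   # weighted voting
    return votes.argmax(axis=1)

print("weighted hybrid forest accuracy:", round((predict(X_test) == y_test).mean(), 3))
```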

  19. Packet Payload Inspection Classifier in the Network Flow Level

    Directory of Open Access Journals (Sweden)

    N.Kannaiya Raja

    2012-06-01

    Full Text Available Networks in the real world have highly congested channels and a topology that is created dynamically, with high risk. In this setting we need a flow classifier to track packet movement in the network. In this paper we develop and evaluate a classifier for TCP/UDP/FTP/ICMP traffic based on payload information, port numbers and the number of flags in the packet, for high-volume packet flows in the network. The primary motivation of this paper is that the protocols legally used to identify the end user are analysed by payload packet inspection, and a hypothesis-testing approach is used for evaluation. The effective use of a tamper-resistant flow classifier has been demonstrated in one network context domain and developed in different Berkeley and Cambridge settings; the classification accuracy was easily established through packet inspection using different flags in the packets. While supervised classifier training specific to the new domain results in much better classification accuracy, we also formed a new approach to detect malicious packets, build a packet flow classifier, and send correct packets to the destination address.

  20. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understand the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to classify articles automatically as PPI-related or not. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological-background-based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from the unlabeled data to further improve performance. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently. PMID:24212028
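
    The self-training component can be sketched with scikit-learn's SelfTrainingClassifier, as below: unlabeled articles carry the label -1 and are pseudo-labeled when the base model is confident. The toy corpus, TF-IDF features, logistic-regression base model and low confidence threshold are assumptions; the ensemble and the biological feature weighting from the paper are not reproduced.

```python
# Hedged sketch of the self-training part of the approach: a probabilistic text
# classifier is trained on a few labelled abstracts plus unlabelled ones
# (label -1), letting SelfTrainingClassifier pseudo-label the confident
# unlabelled examples. Corpus, features and base model are toy assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

docs = [
    "the kinase binds its substrate and forms a stable complex",     # PPI
    "yeast two hybrid screening revealed a novel interaction",       # PPI
    "patients were randomized to receive the new treatment",         # non-PPI
    "the survey measured dietary habits in adolescents",             # non-PPI
    "co-immunoprecipitation confirmed the protein complex",          # unlabelled
    "blood pressure was recorded at each clinic visit",              # unlabelled
]
y = np.array([1, 1, 0, 0, -1, -1])        # -1 marks unlabelled articles

X = TfidfVectorizer().fit_transform(docs)
# low threshold only because the toy corpus is tiny
clf = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.5)
clf.fit(X, y)
print("labels after self-training:", clf.transduction_)
```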

  1. Localizing genes to cerebellar layers by classifying ISH images.

    Directory of Open Access Journals (Sweden)

    Lior Kirsch

    Full Text Available Gene expression controls how the brain develops and functions. Understanding control processes in the brain is particularly hard since they involve numerous types of neurons and glia, and very little is known about which genes are expressed in which cells and brain layers. Here we describe an approach to detect genes whose expression is primarily localized to a specific brain layer and apply it to the mouse cerebellum. We learn typical spatial patterns of expression from a few markers that are known to be localized to specific layers, and use these patterns to predict localization for new genes. We analyze images of in-situ hybridization (ISH) experiments, which we represent using histograms of local binary patterns (LBP), and train image classifiers and gene classifiers for four layers of the cerebellum: the Purkinje, granular, molecular and white matter layer. On held-out data, the layer classifiers achieve accuracy above 94% (AUC) by representing each image at multiple scales and by combining multiple image scores into a single gene-level decision. When applied to the full mouse genome, the classifiers predict specific layer localization for hundreds of new genes in the Purkinje and granular layers. Many genes localized to the Purkinje layer are likely to be expressed in astrocytes, and many others are involved in lipid metabolism, possibly due to the unusual size of Purkinje cells.

  2. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Science.gov (United States)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
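
    In this sense, a polynomial classifier amounts to a polynomial expansion of the feature vector followed by linear weights fitted in closed form, so no iterative training is needed. The sketch below illustrates that idea on synthetic features standing in for the ArSL measurements, which are not available here.

```python
# Sketch of a polynomial classifier: expand the feature vector into polynomial
# terms and fit linear weights per class in closed form (least squares), so no
# iterative training is required. The synthetic features and labels are
# placeholders for the ArSL data, which are not available here.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)       # a quadratic (nonlinear) decision rule

poly = PolynomialFeatures(degree=2, include_bias=True)
P = poly.fit_transform(X)                     # polynomial expansion of the features
Y = np.eye(2)[y]                              # one-hot targets, one column per class
W, *_ = np.linalg.lstsq(P, Y, rcond=None)     # closed-form least-squares weights

def predict(X_new):
    return (poly.transform(X_new) @ W).argmax(axis=1)   # largest polynomial score wins

print("training accuracy:", (predict(X) == y).mean())
```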

  3. Diagnostic value of perfusion MRI in classifying stroke

    International Nuclear Information System (INIS)

    Our study was designed to determine whether supplementary information obtained with perfusion MRI can enhance the accuracy of stroke classification. We used delayed perfusion, as represented by the time-to-peak map on perfusion MRI, to classify strokes in 39 patients. Strokes were classified as hemodynamic if delayed perfusion extended to the whole territory of the occluded arterial trunk; as embolic if delayed perfusion was absent or restricted to infarcts; as arteriosclerotic if infarcts were small, multiple, and located mainly in the basal ganglia; or as unclassified if the pathophysiology was unclear. We compared these findings with vascular lesions on cerebral angiography, neurological signs, infarction on MRI, ischemia on xenon-enhanced CT (Xe/CT) and collateral pathway development. Delayed perfusion clearly indicated the area of arterial occlusion. Strokes were classified as hemodynamic in 13 patients, embolic in 14 patients, arteriosclerotic in 6 patients and unclassified in 6 patients. Hemodynamic infarcts were seen only in deep white-matter areas such as the centrum semiovale or corona radiata, whereas embolic infarcts were in the cortex, the cortex and subjacent white matter, and the lenticulo-striatum. Embolic and arteriosclerotic infarcts occurred even in hemodynamically compromised hemispheres. Our findings indicate that perfusion MRI, in association with a detailed analysis of T2-weighted MRI of cerebral infarcts in the axial and coronal planes, can accurately classify stroke as hemodynamic, embolic or arteriosclerotic. (author)

  4. Dynamic Classifier Aggregation using Interaction-Sensitive Fuzzy Measures

    Czech Academy of Sciences Publication Activity Database

    Štefka, D.; Holeňa, Martin

    2015-01-01

    Roč. 270, 1 July (2015), s. 25-52. ISSN 0165-0114 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : Fuzzy integral * Fuzzy measure * Dynamic classifier aggregation Subject RIV: IN - Informatics, Computer Science Impact factor: 1.986, year: 2014

  5. 18 CFR 367.18 - Criteria for classifying leases.

    Science.gov (United States)

    2010-04-01

    ... classification of the lease under the criteria in paragraph (a) of this section had the changed terms been in... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... ACT General Instructions § 367.18 Criteria for classifying leases. (a) If, at its inception, a...

  6. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.

  7. Gene-expression Classifier in Papillary Thyroid Carcinoma

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    BACKGROUND: No reliable biomarker for metastatic potential in the risk stratification of papillary thyroid carcinoma exists. We aimed to develop a gene-expression classifier for metastatic potential. MATERIALS AND METHODS: Genome-wide expression analyses were used. Development cohort: freshly...

  8. Classifying helicopter gearbox faults using a normalized energy metric

    Science.gov (United States)

    Samuel, Paul D.; Pines, Darryll J.

    2001-02-01

    A normalized energy metric is used to classify seeded faults of the OH-58A main transmission. This gearbox comprises a two-stage transmission with an overall reduction of 17.44:1. Loaded gearbox test runs are used to evaluate the sensitivity of a non-stationary fault metric for early fault detection and classification. The non-stationary fault metric consists of a simple normalized energy index developed to account for a redistribution of sideband energy of the dominant mesh frequency and its harmonics in the presence of actual gearbox faults. This index is used to qualitatively assess the presence, type and location of gearbox faults. In this work, elements of the normalized energy metric are assembled into a feature vector to serve as input into a self-organizing Kohonen neural network classifier. This classifier maps vibration features onto a two-dimensional grid. A feedforward back propagation neural network is then used to classify different faults according to how they cluster on the two-dimensional self-organizing map. Gearbox faults of OH-58A main transmission considered in this study include sun gear spalling and spiral bevel gear scoring. Results from the classification suggest that the normalized energy metric is reasonably robust against false alarms for certain geartrain faults.

  9. Subtractive fuzzy classifier based driver distraction levels classification using EEG.

    Science.gov (United States)

    Wali, Mousa Kadhim; Murugappan, Murugappan; Ahmad, Badlishah

    2013-09-01

    [Purpose] In earlier studies of driver distraction, researchers classified distraction into two levels (not distracted, and distracted). This study classified four levels of distraction (neutral, low, medium, high). [Subjects and Methods] Fifty Asian subjects (n=50, 43 males, 7 females), age range 20-35 years, who were free from any disease, participated in this study. Wireless EEG signals were recorded by 14 electrodes during four types of distraction stimuli (Global Positioning System (GPS), music player, short message service (SMS), and mental tasks). We derived the amplitude spectrum of three different frequency bands of the EEG: theta, alpha, and beta. Then, based on a fusion of discrete wavelet packet transforms and the fast Fourier transform, we extracted two features (power spectral density, spectral centroid frequency) for different wavelets (db4, db8, sym8, and coif5). Mean ± SD was calculated and analysis of variance (ANOVA) was performed. A fuzzy inference system classifier was applied to the different wavelets using the two extracted features. [Results] The results indicate that the two features of sym8 possess highly significant discrimination across the four levels of distraction, and the best average accuracy achieved by the subtractive fuzzy classifier was 79.21%, using the power spectral density feature extracted with the sym8 wavelet. [Conclusion] These findings suggest that EEG signals can be used to monitor distraction level intensity in order to alert drivers to high levels of distraction. PMID:24259914

  10. Scoring and Classifying Examinees Using Measurement Decision Theory

    Science.gov (United States)

    Rudner, Lawrence M.

    2009-01-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
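
    A minimal worked example of MDT scoring is given below: a prior over mastery states is combined with the conditional probabilities of each observed item response, and the examinee is assigned to the most probable state. The item parameters, prior and response pattern are invented purely for illustration.

```python
# Minimal worked example of measurement decision theory (MDT) scoring: combine a
# prior over mastery states with the conditional probabilities of each observed
# item response, then classify the examinee into the most probable state. The
# item parameters and prior below are invented for illustration.
import numpy as np

states = ["non-master", "master"]
prior = np.array([0.5, 0.5])

# P(correct | state) for each of 5 items (rows: items, columns: states)
p_correct = np.array([
    [0.30, 0.85],
    [0.25, 0.80],
    [0.40, 0.90],
    [0.35, 0.75],
    [0.20, 0.70],
])

responses = np.array([1, 1, 0, 1, 1])     # examinee's scored responses (1 = correct)

# likelihood of the response pattern under each state (responses assumed independent)
likelihood = np.prod(np.where(responses[:, None] == 1, p_correct, 1 - p_correct), axis=0)
posterior = prior * likelihood
posterior /= posterior.sum()

print(dict(zip(states, posterior.round(3))))
print("classification:", states[posterior.argmax()])
```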

  11. Bayesian Classifier for Medical Data from Doppler Unit

    Directory of Open Access Journals (Sweden)

    J. Málek

    2006-01-01

    Full Text Available Nowadays, hand-held ultrasonic Doppler units (probes) are often used for noninvasive screening of atherosclerosis in the arteries of the lower limbs. The mean velocity of blood flow over time and blood pressures are measured at several positions on each lower limb. By listening to the acoustic signal generated by the device or by reading the signal displayed on screen, a specialist can detect peripheral arterial disease (PAD). This project aims to design software that will be able to analyze data from such a device and classify it into several diagnostic classes. At the Department of Functional Diagnostics at the Regional Hospital in Liberec, a database of several hundred signals was collected. In cooperation with specialists, the signals were manually classified into four classes. For each class, selected signal features were extracted and then used for training a Bayesian classifier. Another set of signals was used for evaluating and optimizing the parameters of the classifier. Slightly above 84% of diagnostic states were successfully recognized on the test data.

  12. Group-cohomology refinement to classify G-symplectic manifolds

    International Nuclear Information System (INIS)

    'Pseudo-cohomology', as a refinement of Lie group cohomology, is studied in depth with the aim of classifying the symplectic manifolds associated with Lie groups. In this study, the framework of symplectic cohomology provides fundamental new insight, which enriches the analysis previously developed in the setting of Cartan-Eilenberg H2(G,U(1)) cohomology.

  13. A Vertical Search Engine – Based On Domain Classifier

    Directory of Open Access Journals (Sweden)

    Rajashree Shettar

    2008-11-01

    Full Text Available The World Wide Web is growing exponentially, and the dynamic, unstructured nature of the web makes it difficult to locate useful resources. Web search engines such as Google and Alta Vista provide a huge amount of information, much of which might not be relevant to the user's query. In this paper, we build a vertical search engine which takes a seed URL and classifies the URLs crawled as Medical or Finance domains. The filter component of the vertical search engine classifies the web pages downloaded by the crawler into the appropriate domains. The crawled web pages are checked for relevance based on the domain chosen and indexed. External users query the database with keywords to search; the domain classifiers classify the URLs into the relevant domain, and results are presented in descending order according to rank number. This paper focuses on two issues: page relevance to a particular domain and page contents for the search keywords, to improve the quality of the URLs listed and thereby avoid irrelevant or low-quality ones.

  14. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  15. Enhancing atlas based segmentation with multiclass linear classifiers

    International Nuclear Information System (INIS)

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy

  16. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports.

    Science.gov (United States)

    López Pineda, Arturo; Ye, Ye; Visweswaran, Shyam; Cooper, Gregory F; Wagner, Michael M; Tsui, Fuchiang Rich

    2015-12-01

    Influenza is a yearly recurrent disease that has the potential to become a pandemic. An effective biosurveillance system is required for early detection of the disease. In our previous studies, we have shown that electronic Emergency Department (ED) free-text reports can be of value to improve influenza detection in real time. This paper studies seven machine learning (ML) classifiers for influenza detection, compares their diagnostic capabilities against an expert-built influenza Bayesian classifier, and evaluates different ways of handling missing clinical information from the free-text reports. We identified 31,268 ED reports from 4 hospitals between 2008 and 2011 to form two different datasets: training (468 cases, 29,004 controls), and test (176 cases and 1620 controls). We employed Topaz, a natural language processing (NLP) tool, to extract influenza-related findings and to encode them into one of three values: Acute, Non-acute, and Missing. Results show that all ML classifiers had areas under ROCs (AUC) ranging from 0.88 to 0.93, and performed significantly better than the expert-built Bayesian model. Missing clinical information marked as a value of missing (not missing at random) had a consistently improved performance among 3 (out of 4) ML classifiers when it was compared with the configuration of not assigning a value of missing (missing completely at random). The case/control ratios did not affect the classification performance given the large number of training cases. Our study demonstrates ED reports in conjunction with the use of ML and NLP with the handling of missing value information have a great potential for the detection of infectious diseases. PMID:26385375
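
    The missing-value strategy can be sketched as follows: each NLP-extracted finding takes one of three values (Acute, Non-acute, Missing), and 'Missing' is kept as an explicit category through one-hot encoding rather than being imputed. The toy findings, labels and random-forest classifier below are assumptions, not the paper's models.

```python
# Sketch of the missing-value strategy described above: each extracted finding
# takes one of three values (Acute, Non-acute, Missing), and "Missing" is kept
# as an informative category via one-hot encoding instead of being imputed.
# The toy findings, labels and the random-forest classifier are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

reports = pd.DataFrame({
    "cough":   ["Acute", "Missing", "Non-acute", "Acute"],
    "fever":   ["Acute", "Acute", "Missing", "Non-acute"],
    "myalgia": ["Missing", "Acute", "Non-acute", "Missing"],
})
influenza = [1, 1, 0, 0]                  # toy case/control labels

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(reports)            # 'Missing' becomes its own feature column
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, influenza)

new_report = pd.DataFrame({"cough": ["Acute"], "fever": ["Missing"], "myalgia": ["Missing"]})
print(clf.predict_proba(enc.transform(new_report)))
```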

  17. Detection of ST segment deviation episodes in ECG using KLT with an ensemble neural classifier

    International Nuclear Information System (INIS)

    In this paper, we describe a technique for automatic detection of ST segment deviations that can be used in the diagnosis of coronary heart disease (CHD) using ambulatory electrocardiogram (ECG) recordings. Preprocessing is carried out prior to the extraction of the ST segment which involves noise and artifact filtering using a digital bandpass filter, baseline removal and application of a discrete wavelet transform (DWT) based technique for detection and delineation of the QRS complex in ECG. Lead-dependent Karhunen–Loève transform (KLT) bases are used for dimensionality reduction of the ST segment data. ST deviation episodes are detected by a classifier ensemble comprising backpropagation neural networks. Results obtained through the use of our proposed method (sensitivity/positive predictive value = 90.75%/89.2%) compare well with those given in the existing research. Hence, the proposed method exhibits the potential to be adopted in the design of a practical ischemia detection system

  18. Detection of ST segment deviation episodes in ECG using KLT with an ensemble neural classifier.

    Science.gov (United States)

    Afsar, Fayyaz A; Arif, M; Yang, J

    2008-07-01

    In this paper, we describe a technique for automatic detection of ST segment deviations that can be used in the diagnosis of coronary heart disease (CHD) using ambulatory electrocardiogram (ECG) recordings. Preprocessing is carried out prior to the extraction of the ST segment which involves noise and artifact filtering using a digital bandpass filter, baseline removal and application of a discrete wavelet transform (DWT) based technique for detection and delineation of the QRS complex in ECG. Lead-dependent Karhunen-Loève transform (KLT) bases are used for dimensionality reduction of the ST segment data. ST deviation episodes are detected by a classifier ensemble comprising backpropagation neural networks. Results obtained through the use of our proposed method (sensitivity/positive predictive value = 90.75%/89.2%) compare well with those given in the existing research. Hence, the proposed method exhibits the potential to be adopted in the design of a practical ischemia detection system. PMID:18560057
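
    The overall shape of this pipeline can be sketched as below: a Karhunen-Loeve transform (computed here via PCA, its data-driven equivalent) reduces each ST-segment vector to a few coefficients, and an ensemble of small backpropagation networks combined by voting performs the detection. The synthetic "ST segments" and all hyperparameters are placeholders rather than the paper's values.

```python
# Structural sketch of the pipeline described above: KLT-style dimensionality
# reduction (via PCA) of ST-segment vectors, followed by an ensemble of small
# backpropagation networks combined by voting. The synthetic "ST segments" and
# all hyperparameters are placeholders, not the paper's values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, length = 400, 64
X = rng.normal(size=(n, length))
y = rng.integers(0, 2, size=n)
X[y == 1, 20:30] += 1.0                   # crude stand-in for an ST deviation

model = make_pipeline(
    PCA(n_components=5),                  # KLT-style basis restricted to 5 coefficients
    BaggingClassifier(MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                                    random_state=0),
                      n_estimators=5, random_state=0),  # voting ensemble of networks
)
model.fit(X[:300], y[:300])
print("held-out accuracy:", model.score(X[300:], y[300:]))
```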

  19. Boosting 2-Thresholded Weak Classifiers over Scattered Rectangle Features for Object Detection

    Directory of Open Access Journals (Sweden)

    Weize Zhang

    2009-12-01

    Full Text Available In this paper, we extend Viola and Jones’ detection framework in two aspects. Firstly, by removing the restriction of the geometric adjacency rule over Haar-like features, we obtain a richer representation called the scattered rectangle feature, which explores many more orientations than horizontal, vertical and diagonal, as well as misaligned, detached and non-rectangular shape information that is unreachable with Haar-like features. Secondly, we strengthen the discriminating power of the weak classifiers by expanding them into 2-thresholded ones, which guarantees better classification with smaller error, motivated by the observation that the bound on the accuracy of the final hypothesis improves when any of the weak hypotheses is improved. An optimal linear online algorithm is also proposed to determine the two thresholds. Comparison experiments on the MIT+CMU upright face test set under an objective detection criterion show that the extended method outperforms the original one.
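
    A 2-thresholded weak classifier over a single feature value can be sketched as an interval rule: predict positive inside [t1, t2] (or outside it), which is strictly more expressive than a one-threshold stump. The brute-force threshold search below is a stand-in for the paper's optimal linear online algorithm, and the feature values are synthetic rather than scattered rectangle features.

```python
# Sketch of a 2-thresholded weak classifier over a single feature value: the
# weak hypothesis predicts positive inside an interval [t1, t2] (or outside it).
# The brute-force search below is a stand-in for the paper's optimal linear
# online algorithm; the feature values and labels are synthetic.
import numpy as np

def fit_two_threshold_stump(values, labels, weights):
    """Return (t1, t2, inside_label, weighted_error) minimising weighted error."""
    cuts = np.unique(values)
    best = (None, None, 1, np.inf)
    for i, t1 in enumerate(cuts):
        for t2 in cuts[i:]:
            inside = (values >= t1) & (values <= t2)
            for inside_label in (1, -1):
                pred = np.where(inside, inside_label, -inside_label)
                err = weights[pred != labels].sum()
                if err < best[3]:
                    best = (t1, t2, inside_label, err)
    return best

rng = np.random.default_rng(0)
values = rng.uniform(0, 1, 200)
labels = np.where((values > 0.3) & (values < 0.6), 1, -1)   # target is an interval
weights = np.full(200, 1 / 200)                              # AdaBoost-style weights

t1, t2, inside_label, err = fit_two_threshold_stump(values, labels, weights)
print(f"interval [{t1:.2f}, {t2:.2f}], inside -> {inside_label}, weighted error {err:.3f}")
```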

  20. Methods of urolith removal.

    Science.gov (United States)

    Langston, Cathy; Gisselman, Kelly; Palma, Douglas; McCue, John

    2010-06-01

    Multiple techniques exist to remove uroliths from each section of the urinary tract. Minimally invasive methods for removing lower urinary tract stones include voiding urohydropropulsion, retrograde urohydropropulsion followed by dissolution or removal, catheter retrieval, cystoscopic removal, and cystoscopy-assisted laser lithotripsy and surgery. Laparoscopic cystotomy is less invasive than surgical cystotomy. Extracorporeal shock wave lithotripsy can be used for nephroliths and ureteroliths. Nephrotomy, pyelotomy, or urethrotomy may be recommended in certain situations. This article discusses each technique and gives guidance for selecting the most appropriate technique for an individual patient. PMID:20949423

  1. Particle adhesion and removal

    CERN Document Server

    Mittal, K L

    2015-01-01

    The book provides a comprehensive and easily accessible reference source covering all important aspects of particle adhesion and removal. The core objective is to cover both fundamental and applied aspects of particle adhesion and removal with emphasis on recent developments. The topics covered include: 1. Fundamentals of surface forces in particle adhesion and removal. 2. Mechanisms of particle adhesion and removal. 3. Experimental methods (e.g. AFM, SFA, SFM, IFM, etc.) to understand particle-particle and particle-substrate interactions. 4. Mechanics of adhesion of micro- and  n

  2. Region 9 Removal Sites

    Data.gov (United States)

    U.S. Environmental Protection Agency — Point geospatial dataset representing locations of CERCLA (Superfund) Removal sites. CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act)...

  3. Simultaneous removal of NOX and SO2 from flue gases by energizing gases with electrons having energy in the range from 5 eV to 20 eV

    International Nuclear Information System (INIS)

    These notes report the results obtained with an experimental installation able to treat 100 Nm3/h of flue gases, installed at the Thermoelectrical Power Plant at Marghera. The experimental installation, operating on the principle of gas energizing, is able to remove simultaneously 40 to 50% of the NOX and about 100% of the SO2 contained in the flue gases. It is expected to achieve better efficiency in the removal of NOX by including in the system a bag filter which should favour removal reaction in the heterogeneous phase of NOX. Particulate concentration at output is between 2 and 5 mg/Nm3. A pulse generator designed and built by Enel was tested; the results were excellent, so work has begun on the preliminary planning of a 200 kW pulse generator that operates on the same principle. (author)

  4. Skin lesion removal-aftercare

    Science.gov (United States)

    ... aftercare; Nevi - removal aftercare; Scissor excision aftercare; Skin tag removal aftercare; Mole removal aftercare; Skin cancer removal ...

  5. Removal of heavy metals using waste eggshell

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The removal capacity of toxic heavy metals by the reused eggshell was studied. As a pretreatment process for the preparation of reused material from waste eggshell, calcination was performed in the furnace at 800℃ for 2 h after crushing the dried waste eggshell. Calcination behavior, qualitative and quantitative elemental information, mineral type and surface characteristics before and after calcination of eggshell were examined by thermal gravimetric analysis (TGA), X-ray fluorescence (XRF), X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. After calcination, the major inorganic composition was identified as Ca (lime, 99.63%), and K, P and Sr were identified as minor components. When calcined eggshell was applied in the treatment of synthetic wastewater containing heavy metals, a complete removal of Cd as well as above 99% removal of Cr was observed after 10 min. Although the natural eggshell had some removal capacity for Cd and Cr, complete removal was not accomplished even after 60 min due to a much slower removal rate. However, in contrast to Cd and Cr, an efficient removal of Pb was observed with the natural eggshell rather than the calcined eggshell. From the application of the calcined eggshell in the treatment of real electroplating wastewater, the calcined eggshell showed a promising removal capacity for heavy metal ions as well as a good neutralization capacity in the treatment of strongly acidic wastewater.

  6. Removal of silver nanoparticles by coagulation processes

    International Nuclear Information System (INIS)

    Highlights: • This study investigated the removal of AgNP suspensions by four regular coagulants. • The optimal removal efficiencies for the four coagulants were achieved at pH 7.5. • The removal efficiency of AgNPs was affected by the natural water characteristics. • TEM and XRD showed that AgNPs or silver-containing NPs were adsorbed onto the flocs. -- Abstract: Commercial use of silver nanoparticles (AgNPs) will lead to a potential route for human exposure via potable water. Coagulation followed by sedimentation, as a conventional technique in the drinking water treatment facilities, may become an important barrier to prevent human from AgNP exposures. This study investigated the removal of AgNP suspensions by four regular coagulants. In the aluminum sulfate and ferric chloride coagulation systems, the water parameters slightly affected the AgNP removal. However, in the poly aluminum chloride and polyferric sulfate coagulation systems, the optimal removal efficiencies were achieved at pH 7.5, while higher or lower of pH could reduce the AgNP removal. Besides, the increasing natural organic matter (NOM) would reduce the AgNP removal, while Ca2+ and suspended solids concentrations would also affect the AgNP removal. In addition, results from the transmission electron microscopy and X-ray diffraction showed AgNPs or silver-containing nanoparticles were adsorbed onto the flocs. Finally, natural water samples were used to validate AgNP removal by coagulation. This study suggests that in the case of release of AgNPs into the source water, the traditional water treatment process, coagulation/sedimentation, can remove AgNPs and minimize the silver ion concentration under the well-optimized conditions

  7. Inferring Functional Brain States Using Temporal Evolution of Regularized Classifiers

    Directory of Open Access Journals (Sweden)

    Andrey Zhdanov

    2007-08-01

    Full Text Available We present a framework for inferring functional brain state from electrophysiological (MEG or EEG) brain signals. Our approach is adapted to the needs of functional brain imaging rather than EEG-based brain-computer interfaces (BCI). This choice leads to a different set of requirements, in particular to the demand for more robust inference methods and more sophisticated model validation techniques. We approach the problem from a machine learning perspective, by constructing a classifier from a set of labeled signal examples. We propose a framework that focuses on the temporal evolution of regularized classifiers, with cross-validation used to select the optimal regularization parameter at each time frame. We demonstrate the inference obtained by this method on MEG data recorded from 10 subjects in a simple visual classification experiment, and provide a comparison to the classical non-regularized approach.
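
    A minimal sketch of the per-time-frame scheme described above, assuming trial data shaped (trials, channels, time frames) and binary labels; an L2-regularized logistic regression stands in for the paper's classifier, with the regularization strength cross-validated independently at each frame (illustrative only, not the authors' code):

    ```python
    # Per-time-frame regularized classifier with cross-validated regularization
    # strength (sketch with synthetic data, not the authors' implementation).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 32, 50))   # trials x channels x time frames (synthetic)
    y = rng.integers(0, 2, 80)              # binary condition labels

    scores = []
    for t in range(X.shape[2]):
        clf = GridSearchCV(
            LogisticRegression(penalty="l2", max_iter=1000),
            param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # regularization grid
            cv=5,
        )
        clf.fit(X[:, :, t], y)
        scores.append(clf.best_score_)      # CV accuracy at this time frame

    print(np.round(scores[:5], 2))          # temporal evolution of decodability
    ```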

  8. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the chances of survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD) systems help radiologists take reliable decisions. The proposed CAD system extracts wavelet features and spectral features for better classification of mammograms. A Support Vector Machine classifier is used to analyze 206 mammogram images from the MIAS database according to the severity of abnormality, i.e., benign and malignant. The proposed system gives 93.14% accuracy for discrimination between normal and malignant samples, 87.25% accuracy for normal versus benign samples, and 89.22% accuracy for benign versus malignant samples. The study reveals that features extracted in a hybrid transform domain with an SVM classifier prove to be a promising tool for the analysis of mammograms.
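
    As an illustration of the general pipeline (wavelet-domain features feeding an SVM), the following hedged sketch uses PyWavelets and scikit-learn on synthetic 64x64 patches standing in for MIAS regions; the sub-band energy features and parameters are assumptions, not the paper's exact feature set:

    ```python
    # Sketch: wavelet sub-band energy features fed to an SVM (illustrative only;
    # synthetic patches stand in for MIAS mammogram regions).
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def wavelet_energy_features(patch, wavelet="db2", level=2):
        coeffs = pywt.wavedec2(patch, wavelet, level=level)
        feats = [np.mean(np.abs(coeffs[0]))]          # approximation sub-band energy
        for cH, cV, cD in coeffs[1:]:                 # detail sub-bands per level
            feats += [np.mean(np.abs(c)) for c in (cH, cV, cD)]
        return np.array(feats)

    rng = np.random.default_rng(1)
    patches = rng.random((60, 64, 64))                # synthetic 64x64 patches
    labels = rng.integers(0, 2, 60)                   # benign (0) / malignant (1)

    X = np.stack([wavelet_energy_features(p) for p in patches])
    print(cross_val_score(SVC(kernel="rbf", C=1.0), X, labels, cv=5).mean())
    ```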

  9. The fuzzy gene filter: A classifier performance assessment

    CERN Document Server

    Perez, Meir

    2011-01-01

    The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed to rank genes in order of differential expression, based on expression data generated in a microarray experiment. This paper examines the effectiveness of the FGF for feature selection using various classification architectures. The FGF is compared to three of the most common gene ranking algorithms: the t-test, the Wilcoxon test and ROC curve analysis. Four classification schemes are used to compare the performance of the FGF with the standard approaches: K Nearest Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC) and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross Validation scheme is used to identify the optimal number of top-ranking genes, as well as the optimal classifier parameters. Two microarray data sets are used for the comparison: a prostate cancer data set and a lymphoma data set.
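
    A hedged sketch of one of the standard baselines mentioned above (t-test gene ranking followed by a KNN classifier with leave-one-out validation); for brevity the ranking is done once on the full set rather than nested inside the cross-validation loop as the paper describes, and the expression data are synthetic:

    ```python
    # Gene ranking by t-statistic plus KNN with leave-one-out validation
    # (simplified, non-nested baseline; synthetic microarray-like data).
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(2)
    X = rng.standard_normal((40, 500))        # samples x genes
    y = np.array([0] * 20 + [1] * 20)         # two phenotype classes

    t_stat, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
    top = np.argsort(-np.abs(t_stat))[:20]    # 20 top-ranked genes

    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X[:, top], y, cv=LeaveOneOut()).mean()
    print(f"LOOCV accuracy on top-ranked genes: {acc:.2f}")
    ```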

  10. The Emotion Sign: Human Motion Analysis Classifying Specific Emotion

    Directory of Open Access Journals (Sweden)

    Yuichi Kobayashi

    2008-09-01

    Full Text Available We examine the relationship between human motion and emotions. With recent improvements in sensing technology, precise human motion can be measured, but the amount of data grows enormously. In this paper, we propose a new analysis method which can describe large amounts of data rationally. This method can be used to classify human motions associated with specific emotions. Our approach to motion data analysis is to apply higher order singular value decomposition (HOSVD) directly to the motion data. HOSVD can generate a compact vector which specifies each emotion common among people. Experimentally, we obtained motion capture data for “gait” and “standing” actions related to six basic emotions. Human gait motion was also created with an animator. For these motion data, our analysis showed that our method can classify the human motions specific to each emotion.
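
    A minimal sketch of an HOSVD computed from mode unfoldings with plain NumPy, applied to a synthetic (subjects x markers x frames) tensor; the tensor shape and data are assumptions for illustration, not the paper's motion capture set:

    ```python
    # HOSVD sketch: factor matrices from the SVD of each mode unfolding,
    # followed by projection onto them to obtain the core tensor.
    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd(T):
        # Left singular vectors of each unfolding give the mode factor matrices.
        U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
        core = T
        for m, Um in enumerate(U):
            # Mode-m product with Um.T: move mode m to the front, contract, move back.
            core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
        return core, U

    T = np.random.default_rng(3).standard_normal((6, 20, 100))  # synthetic motion tensor
    core, factors = hosvd(T)
    print(core.shape, [U.shape for U in factors])
    ```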

  11. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    International Nuclear Information System (INIS)

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding user intentions, the feature extraction and classification techniques must be improved. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. A system based on an ensemble can theoretically achieve better classification results than its individual counterpart, depending on the algorithm used to generate the individual classifiers and the procedure used to combine their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the BCI application, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
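
    A hedged sketch of the bagging variant of the idea, comparing a single LDA with a small bagged ensemble of LDA classifiers on synthetic feature vectors (scikit-learn assumed; not the article's exact data or ensemble scheme):

    ```python
    # Single LDA vs. a bagged ensemble of three LDA classifiers on synthetic
    # BCI-style feature vectors (illustrative only).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.standard_normal((120, 16))          # trials x extracted features
    y = rng.integers(0, 2, 120)                 # imagined-movement class labels

    single = LinearDiscriminantAnalysis()
    ensemble = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=3,
                                 random_state=0)

    print("single LDA  :", cross_val_score(single, X, y, cv=5).mean())
    print("LDA ensemble:", cross_val_score(ensemble, X, y, cv=5).mean())
    ```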

  12. Characterizing and classifying uranium yellow cakes: A background

    Science.gov (United States)

    Hausen, D. M.

    1998-12-01

    Uranium concentrates obtained from leach solutions, known as uranium yellow cakes, represent an intermediate step in the processing of uranium ores. Yellow cake concentrates are prepared by various metallurgical methods, depending on the types of ores. Samples of yellow cakes prepared under various methods were analyzed; examined in detail by means of x-ray diffraction, infrared spectra, and wet chemical methods; and classified by mineralogic methods. The cakes were classified as uranyl hydroxide hydrate, basic uranyl sulfate hydrate, sodium para-uranate, and uranyl peroxide hydrate. The experimental preparation methods and characterization methodology used are described, and the significance of structural types to the physical and chemical properties of yellow cake production, as well as the pyrolytic transformations at high temperatures, are discussed.

  13. Feasibility study for banking loan using association rule mining classifier

    Directory of Open Access Journals (Sweden)

    Agus Sasmito Aribowo

    2015-03-01

    Full Text Available The problem of bad loans in a koperasi can be reduced if the koperasi can detect whether a member will be able to repay the loan or will default. The method used in this study to identify characteristic patterns of prospective borrowers is called the Association Rule Mining Classifier. The credit patterns of members are converted into knowledge and used to classify other creditors. The classification process separates creditors into two groups: a good-credit group and a bad-credit group. The research used prototyping to implement the design as an application with a programming language and development tools. The association rule mining process uses the Weighted Itemset Tidset (WIT-tree) method. The results show that the method can predict prospective customers' credit. The training data set comprised 120 customers with known credit histories; the test data comprised 61 customers applying for credit. The results concluded that 42 customers will pay off their loans and 19 clients will default.

  14. Efficient iris recognition via ICA feature and SVM classifier

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Xu Luping

    2007-01-01

    To improve the flexibility and reliability of an iris recognition algorithm while maintaining the recognition success rate, an iris recognition approach combining SVM with an ICA feature extraction model is presented. SVM is a classifier which has demonstrated high generalization capability in object recognition problems, and ICA is a feature extraction technique which can be considered a generalization of principal component analysis. In this paper, ICA is used to generate a set of subsequences of feature vectors for iris feature extraction. Each subsequence is then classified using support vector machine sequence kernels. Experiments are made on the CASIA iris database; the results indicate that the combination of SVM and ICA can improve iris recognition flexibility and reliability while maintaining the recognition success rate.
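
    A hedged sketch of an ICA-plus-SVM pipeline using scikit-learn's FastICA and SVC on synthetic vectors standing in for normalized iris textures; the component count and kernel are assumptions, and the sequence-kernel detail of the paper is not reproduced:

    ```python
    # ICA feature extraction followed by an SVM, as a generic pipeline sketch.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.standard_normal((100, 256))     # flattened iris texture patches (synthetic)
    y = rng.integers(0, 4, 100)             # subject identities

    model = make_pipeline(FastICA(n_components=20, random_state=0, max_iter=1000),
                          SVC(kernel="rbf", C=1.0))
    print(cross_val_score(model, X, y, cv=5).mean())
    ```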

  15. Predicting Cutting Forces in Aluminum Using Polynomial Classifiers

    Science.gov (United States)

    Kadi, H. El; Deiab, I. M.; Khattab, A. A.

    Due to increased calls for environmentally benign machining processes, there has been focus and interest in making processes more lean and agile to enhance efficiency, reduce emissions and increase profitability. One approach to achieving lean machining is to develop a virtual simulation environment that enables fast and reasonably accurate predictions of various machining scenarios. Polynomial Classifiers (PCs) are employed to develop a smart data base that can provide fast prediction of cutting forces resulting from various combinations of cutting parameters. With time, the force model can expand to include different materials, tools, fixtures and machines and would be consulted prior to starting any job. In this work, first, second and third order classifiers are used to predict the cutting coefficients that can be used to determine the cutting forces. Predictions obtained using PCs are compared to experimental results and are shown to be in good agreement.
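
    A hedged sketch of the underlying idea of fitting polynomial models of increasing order to cutting data; the parameter ranges and force relation below are invented for illustration and are not the paper's data or its exact polynomial classifier formulation:

    ```python
    # Polynomial models of order 1-3 mapping cutting parameters to a force
    # response (synthetic data; illustrative only).
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    params = rng.uniform([100, 0.05, 0.5], [400, 0.30, 3.0], size=(50, 3))  # speed, feed, depth
    force = (50 + 0.2 * params[:, 0] + 800 * params[:, 1] * params[:, 2]
             + rng.normal(0, 5, 50))                                        # synthetic response

    for order in (1, 2, 3):
        model = make_pipeline(PolynomialFeatures(degree=order), LinearRegression())
        model.fit(params, force)
        print(f"order {order}: R^2 = {model.score(params, force):.3f}")
    ```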

  16. Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition

    Energy Technology Data Exchange (ETDEWEB)

    Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)

    2009-11-30

    This paper proposes a method for cardiac arrhythmias recognition using the nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of P-wave, QRS-complexes, and T-wave. Iterated function system (IFS) uses the nonlinear interpolation in the map and uses similarity maps to construct various data sequences including the fractal patterns of supraventricular ectopic beat, bundle branch ectopic beat, and ventricular ectopic beat. Grey relational analysis (GRA) is proposed to recognize normal heartbeat and cardiac arrhythmias. The nonlinear interpolation terms produce family functions with fractal dimension (FD), the so-called nonlinear interpolation function (NIF), and make fractal patterns more distinguishing between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid methods demonstrate greater efficiency and higher accuracy in recognizing ECG signals.

  17. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    OpenAIRE

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-01-01

    This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student, argumentative essays. The paragraphs selected were initial, middle, and final paragraphs and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion ...

  18. Classifying and identifying servers for biomedical information retrieval.

    OpenAIRE

    Patrick, T. B.; Springer, G K

    1994-01-01

    Useful retrieval of biomedical information from network information sources requires methods for organized access to those information sources. This access must be organized in terms of the information content of information sources and in terms of the discovery of the network location of those information sources. We have developed an approach to providing organized access to information sources based on a scheme of hierarchical classifiers and identifiers of the servers providing access to ...

  19. Classifying Floating Potential Measurement Unit Data Products as Science Data

    Science.gov (United States)

    Coffey, Victoria; Minow, Joseph

    2015-01-01

    We are Co-Investigators for the Floating Potential Measurement Unit (FPMU) on the International Space Station (ISS) and members of the FPMU operations and data analysis team. We are providing this memo for the purpose of classifying raw and processed FPMU data products and ancillary data as NASA science data with unrestricted, public availability in order to best support science uses of the data.

  20. Image Replica Detection based on Binary Support Vector Classifier

    OpenAIRE

    Maret, Y.; Dufaux, F.; Ebrahimi, T.

    2005-01-01

    In this paper, we present a system for image replica detection. More specifically, the technique is based on the extraction of 162 features corresponding to texture, color and gray-level characteristics. These features are then weighted and statistically normalized. To improve training and performance, the dimensionality of the feature space is reduced. Lastly, a decision function is generated to classify the test image as a replica or non-replica of a given reference image. Experimental results sho...

  1. Controlled self-organisation using learning classifier systems

    OpenAIRE

    Richter, Urban Maximilian

    2009-01-01

    As the complexity of technical systems increases, breakdowns occur quite often. The mission of organic computing is to tame these challenges by providing degrees of freedom for self-organised behaviour. To achieve these goals, new methods have to be developed. The proposed observer/controller architecture constitutes one way to achieve controlled self-organisation. To improve its design, multi-agent scenarios are investigated. In particular, learning using learning classifier systems is addressed.

  2. Learning Rates for ${l}^{1}$ -Regularized Kernel Classifiers

    OpenAIRE

    Hongzhi Tong; Di-Rong Chen; Fenghong Yang

    2013-01-01

    We consider a family of classification algorithms generated from a regularization kernel scheme associated with ${l}^{1}$ -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derive...

  3. Higher operations in string topology of classifying spaces

    OpenAIRE

    Lahtinen, Anssi

    2015-01-01

    Examples of non-trivial higher string topology operations have been regrettably rare in the literature. In this paper, working in the context of string topology of classifying spaces, we provide explicit calculations of a wealth of non-trivial higher string topology operations associated to a number of different Lie groups. As an application of these calculations, we obtain an abundance of interesting homology classes in the twisted homology groups of automorphism groups of free groups, the o...

  4. Molecular Characteristics in MRI-Classified Group 1 Glioblastoma Multiforme

    OpenAIRE

    Chin-Hsing Annie Lin; Rebecca A. Ihrie; Arturo Alvarez-Buylla; Robert N. Eisenman

    2013-01-01

    Glioblastoma multiforme (GBM) is a clinically and pathologically heterogeneous brain tumor. Previous studies of transcriptional profiling have revealed biologically relevant GBM subtypes associated with specific mutations and dysregulated pathways. Here, we applied a modified proteome to uncover abnormal protein expression profile in a MRI-classified group I GBM (GBM1), which has a spatial relationship with one of the adult neural stem cell niches, subventricular zone (SVZ). Most importantly,...

  5. Classifying racist texts using a support vector machine

    OpenAIRE

    Greevy, Edel; Alan F. SMEATON

    2004-01-01

    In this poster we present an overview of the techniques we used to develop and evaluate a text categorisation system to automatically classify racist texts. Detecting racism is difficult because the presence of indicator words is insufficient to indicate racist texts, unlike some other text classification tasks. Support Vector Machines (SVM) are used to automatically categorise web pages based on whether or not they are racist. Different interpretations of what constitutes a term are taken, a...

  6. VIRTUAL MINING MODEL FOR CLASSIFYING TEXT USING UNSUPERVISED LEARNING

    OpenAIRE

    S. Koteeswaran; E. Kannan; P. Visu

    2014-01-01

    In real world data mining is emerging in various era, one of its most outstanding performance is held in various research such as Big data, multimedia mining, text mining etc. Each of the researcher proves their contribution with tremendous improvements in their proposal by means of mathematical representation. Empowering each problem with solutions are classified into mathematical and implementation models. The mathematical model relates to the straight forward rules and formulas that are re...

  7. An alternative educational indicator for classifying Secondary Schools in Portugal

    OpenAIRE

    Gonçalves, A. Manuela; Costa, Marco; De Oliveira, Mário,

    2015-01-01

    The purpose of this paper aims at carrying out a study in the area of Statistics for classifying Portuguese Secondary Schools (both mainland and islands: “Azores” and “Madeira”),taking into account the results achievedby their students in both national examinations and internal assessment. The main according consists of identifying groups of schools with different performance levels by considering the sub-national public and private education systems’ as well as their respective geographic lo...

  8. Using linguistic information to classify Portuguese text documents

    OpenAIRE

    Teresa, Gonçalves; Paulo, Quaresma

    2008-01-01

    This paper examines the role of various linguistic structures on text classification applying the study to the Portuguese language. Besides using a bag-of-words representation where we evaluate different measures and use linguistic knowledge for term selection, we do several experiments using syntactic information representing documents as strings of words and strings of syntactic parse trees. To build the classifier we use the Support Vector Machine (SVM) algorithm which is known to prod...

  9. Applying deep learning to classify pornographic images and videos

    OpenAIRE

    Moustafa, Mohamed

    2015-01-01

    It is no secret that pornographic material is now one click away from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier, including one or more of the popular computer vision feature descriptors. We propose to build a clas...

  10. A Mobile Service Delivery Platform forWeb Classifieds

    OpenAIRE

    Mahmood, Azam

    2013-01-01

    The Mobidoo Mobile Service Delivery Platform (MSDP) provides an opportunity for service providers to add online services by creating classifieds and advertising them to end users. These services can be provided either free of charge or at a cost. Users can benefit from these services by showing their interest and can obtain a particular service from the service provider via ADMIN authentication, or can simply browse the services available on the mobile web application. The main users of the appl...

  11. Application of dispersion analysis for determining classifying separation size

    OpenAIRE

    Golomeova, Mirjana; Golomeov, Blagoj; Krstev, Boris; Zendelska, Afrodita; Krstev, Aleksandar

    2009-01-01

    The paper presents the procedure of mathematically modelling the cut point of copper ore classification by a laboratory hydrocyclone. The application of dispersion analysis and planning with a Latin square makes possible a significant reduction in the number of tests. Tests were carried out with a D-100 mm hydrocyclone. The variable parameters are as follows: solids content of the pulp, underflow diameter, overflow diameter and inlet pressure. The cut point is determined from the partition curve. The obtained mathemat...
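
    A minimal sketch of how a cut point (d50) can be read off a partition curve by interpolation; the size classes and recovery figures below are invented for illustration and are not the paper's measurements:

    ```python
    # Estimating the hydrocyclone cut point d50 from a partition curve
    # (illustrative numbers only).
    import numpy as np

    size = np.array([10, 20, 40, 60, 80, 100, 150])        # particle size, micrometres
    to_underflow = np.array([8, 18, 35, 52, 68, 81, 94])    # recovery to underflow, %

    d50 = np.interp(50.0, to_underflow, size)               # size recovered at 50 %
    print(f"estimated cut point d50 = {d50:.1f} um")
    ```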

  12. Learning Classifiers from Synthetic Data Using a Multichannel Autoencoder

    OpenAIRE

    Zhang, Xi; Fu, Yanwei; Zang, Andi; Sigal, Leonid; Agam, Gady

    2015-01-01

    We propose a method for using synthetic data to help learning classifiers. Synthetic data, even if generated based on real data, normally exhibits a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and to learn jointly from synthetic and real data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by using MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed a...

  13. Classifying and Visualizing Motion Capture Sequences using Deep Neural Networks

    OpenAIRE

    Cho, Kyunghyun; Chen, Xi

    2013-01-01

    Gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently most systems only classify datasets with a couple of dozen different actions. Moreover, feature extraction from the data is often computationally complex. In this paper, we propose a novel system to recognize actions from skeleton data with simple but effective features using deep neural networks. Features are extracted for each frame based on the relative...

  14. Building Road-Sign Classifiers Using a Trainable Similarity Measure

    Czech Academy of Sciences Publication Activity Database

    Paclík, P.; Novovičová, Jana; Duin, R.P.W.

    2006-01-01

    Roč. 7, č. 3 (2006), s. 309-321. ISSN 1524-9050 R&D Projects: GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : classifier system design * road-sign classification * similarity data representation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.434, year: 2006 http://www.ewh.ieee.org/tc/its/trans.html

  15. Learning Classifier Systems: A Complete Introduction, Review, and Roadmap

    OpenAIRE

    Urbanowicz, Ryan J; Jason H Moore

    2009-01-01

    If complexity is your problem, learning classifier systems (LCSs) may offer a solution. These rule-based, multifaceted, machine learning algorithms originated and have evolved in the cradle of evolutionary biology and artificial intelligence. The LCS concept has inspired a multitude of implementations adapted to manage the different problem domains to which it has been applied (e.g., autonomous robotics, classification, knowledge discovery, and modeling). One field that is taking increasing n...

  16. Management Education: Classifying Business Curricula and Conceptualizing Transfers and Bridges

    OpenAIRE

    Davar Rezania; Mike Henry

    2010-01-01

    Traditionally, higher academic education has favoured acquisition of individualized conceptual knowledge over context-independent procedural knowledge. Applied degrees, on the other hand, favour procedural knowledge. We present a conceptual model for classifying a business curriculum. This classification can inform discussion around difficulties associated with issues such as assessment of prior learning, as well as transfers and bridges from applied degrees to baccalaureate degrees in busine...

  17. Mathematical Modeling and Analysis of Classified Marketing of Agricultural Products

    Institute of Scientific and Technical Information of China (English)

    Fengying; WANG

    2014-01-01

    Classified marketing of agricultural products was analyzed using a logistic regression model. This method can take full advantage of the information in an agricultural product database to find the factors influencing how well agricultural products sell, and to make a quantitative analysis accordingly. Using this model, it is also possible to predict sales of agricultural products and to provide a reference for mapping out individualized sales strategies for popularizing agricultural products.
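
    A hedged sketch of a logistic regression of this kind, with invented product attributes and a synthetic best-selling label standing in for the database described above (scikit-learn assumed):

    ```python
    # Logistic regression on hypothetical product attributes predicting whether
    # a product sells well (synthetic data; illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    X = rng.standard_normal((200, 4))          # e.g. price, freshness, season, region index
    y = (X[:, 0] * -1.2 + X[:, 1] * 0.8 + rng.normal(0, 0.5, 200) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    print("coefficients:", np.round(model.coef_, 2))              # factor influence
    print("P(best-selling) for one product:",
          model.predict_proba(X[:1])[0, 1].round(2))              # individual prediction
    ```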

  18. LEAF FEATURES EXTRACTION AND RECOGNITION APPROACHES TO CLASSIFY PLANT

    OpenAIRE

    Mohamad Faizal Ab Jabal; Suhardi Hamid; Salehuddin Shuib; Illiasaak Ahmad

    2013-01-01

    Plant classification based on leaf identification is becoming a popular trend. Each leaf carries substantial information that can be used to identify and classify the origin or the type of plant. In medical perspective, images have been used by doctors to diagnose diseases and this method has been proven reliable for years. Using the same method as doctors, researchers try to simulate the same principle to recognise a plant using high quality leaf images and complex mathematical formulae for ...

  19. Switching Fuzzy Classifier for Classification of EEG Spectrograms

    Czech Academy of Sciences Publication Activity Database

    Coufal, David

    Budapest: Budapest Tech, 2008, s. 143-150. ISBN 978-963-7154-82-9. [CINTI 2008. International Symposium of Hungarian Researchers on Computational Intelligence and Informatics /9./. Budapest (HU), 06.11.2008-08.11.2008] R&D Projects: GA MDS 1F84B/042/520 Institutional research plan: CEZ:AV0Z10300504 Keywords : fuzzy classifier * classification tree * EEG spectrograms Subject RIV: AQ - Safety, Health Protection, Human - Machine

  20. An Informed Framework for Training Classifiers from Social Media

    OpenAIRE

    Dong Seon Cheng; Sami Abduljalil Abdulhak

    2016-01-01

    Extracting information from social media has become a major focus of companies and researchers in recent years. Aside from the study of the social aspects, it has also been found feasible to exploit the collaborative strength of crowds to help solve classical machine learning problems like object recognition. In this work, we focus on the generally underappreciated problem of building effective datasets for training classifiers by automatically assembling data from social media. We detail som...

  1. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    OpenAIRE

    Sang-Hoon Hong; Hyun-Ok Kim; Shimon Wdowinski; Emanuelle Feliciano

    2015-01-01

    The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We ...

  2. The three-dimensional origin of the classifying algebra

    OpenAIRE

    Fuchs, Jurgen; Schweigert, Christoph; Stigner, Carl

    2009-01-01

    It is known that reflection coefficients for bulk fields of a rational conformal field theory in the presence of an elementary boundary condition can be obtained as representation matrices of irreducible representations of the classifying algebra, a semisimple commutative associative complex algebra. We show how this algebra arises naturally from the three-dimensional geometry of factorization of correlators of bulk fields on the disk. This allows us to derive explicit expressions for the str...

  3. The electronic couplings in electron transfer and excitation energy transfer.

    Science.gov (United States)

    Hsu, Chao-Ping

    2009-04-21

    The transport of charge via electrons and the transport of excitation energy via excitons are two processes of fundamental importance in diverse areas of research. Characterization of electron transfer (ET) and excitation energy transfer (EET) rates is essential for a full understanding of, for instance, biological systems (such as respiration and photosynthesis) and opto-electronic devices (which interconvert electric and light energy). In this Account, we examine one of the parameters, the electronic coupling factor, for which reliable values are critical in determining transfer rates. Although ET and EET are different processes, many strategies for calculating the couplings share common themes. We emphasize the similarities in basic assumptions between the computational methods for the ET and EET couplings, examine the differences, and summarize the properties, advantages, and limits of the different computational methods. The electronic coupling factor is an off-diagonal Hamiltonian matrix element between the initial and final diabatic states in the transport processes. ET coupling is essentially the interaction of the two molecular orbitals (MOs) where the electron occupancy is changed. Singlet excitation energy transfer (SEET), however, contains a Förster dipole-dipole coupling as its most important constituent. Triplet excitation energy transfer (TEET) involves an exchange of two electrons of different spin and energy; thus, it is like an overlap interaction of two pairs of MOs. Strategies for calculating ET and EET couplings can be classified as (1) energy-gap-based approaches, (2) direct calculation of the off-diagonal matrix elements, or (3) use of an additional operator to describe the extent of charge or excitation localization and to calculate the coupling value. Some of the difficulties in calculating the couplings were recently resolved. Methods were developed to remove the nondynamical correlation problem from the highly precise coupled cluster
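
    As a notational aside (not part of the abstract itself), the coupling element described here and its role in a golden-rule-type rate expression are commonly written as:

    ```latex
    % Hedged notational sketch: the electronic coupling as the off-diagonal
    % Hamiltonian matrix element between initial (i) and final (f) diabatic
    % states, entering a Fermi-golden-rule-type rate dependence.
    V_{if} = \langle \Psi_i \mid \hat{H} \mid \Psi_f \rangle ,
    \qquad
    k \propto \left| V_{if} \right|^{2}
    ```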

  4. IRIS RECOGNITION BASED ON LBP AND COMBINED LVQ CLASSIFIER

    Directory of Open Access Journals (Sweden)

    M. Z. Rashad

    2011-11-01

    Full Text Available Iris recognition is considered one of the best biometric methods used for human identification and verification, because of its unique features that differ from one person to another and its importance in the security field. This paper proposes an algorithm for iris recognition and classification using a system based on Local Binary Pattern and histogram properties as statistical approaches for feature extraction, and a Combined Learning Vector Quantization Classifier as a neural network approach for classification, in order to build a hybrid model that depends on both features. The localization and segmentation techniques are presented using both Canny edge detection and the Hough Circular Transform in order to isolate the iris from the whole eye image and for noise detection. Feature vectors resulting from LBP are applied to a Combined LVQ classifier with different classes to determine the minimum acceptable performance, and the result is based on majority voting among several LVQ classifiers. Different iris datasets (CASIA, MMU1, MMU2, and LEI) with different extensions and sizes are presented. Since LBP works on a grayscale level, colored iris images should be transformed into grayscale. The proposed system gives a high recognition rate of 99.87% on different iris datasets compared with other methods.
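
    A hedged sketch of the LBP-histogram feature step using scikit-image; since scikit-learn has no LVQ implementation, a simple nearest-neighbour match on the histograms stands in for the Combined LVQ classifier, and random textures stand in for segmented iris regions:

    ```python
    # LBP histogram features with a nearest-neighbour match (illustrative only;
    # NN replaces the paper's Combined LVQ classifier).
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.neighbors import KNeighborsClassifier

    def lbp_histogram(gray_uint8, P=8, R=1.0):
        codes = local_binary_pattern(gray_uint8, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    rng = np.random.default_rng(8)
    images = (rng.random((40, 64, 64)) * 255).astype(np.uint8)  # synthetic iris regions
    labels = np.repeat(np.arange(10), 4)                        # 10 subjects, 4 samples each

    X = np.stack([lbp_histogram(img) for img in images])
    clf = KNeighborsClassifier(n_neighbors=1).fit(X[::2], labels[::2])
    print("match rate:", (clf.predict(X[1::2]) == labels[1::2]).mean())
    ```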

  5. Patients on weaning trials classified with support vector machines

    International Nuclear Information System (INIS)

    The process of discontinuing mechanical ventilation is called weaning and is one of the most challenging problems in intensive care. An unnecessary delay in the discontinuation process and an early weaning trial are undesirable. This study aims to characterize the respiratory pattern through features that permit the identification of patients' conditions in weaning trials. Three groups of patients have been considered: 94 patients with successful weaning trials, who could maintain spontaneous breathing after 48 h (GSucc); 39 patients who failed the weaning trial (GFail) and 21 patients who had successful weaning trials, but required reintubation in less than 48 h (GRein). Patients are characterized by their cardiorespiratory interactions, which are described by joint symbolic dynamics (JSD) applied to the cardiac interbeat and breath durations. The most discriminating features in the classification of the different groups of patients (GSucc, GFail and GRein) are identified by support vector machines (SVMs). The SVM-based feature selection algorithm has an accuracy of 81% in classifying GSucc versus the rest of the patients, 83% in classifying GRein versus GSucc patients and 81% in classifying GRein versus the rest of the patients. Moreover, a good balance between sensitivity and specificity is achieved in all classifications

  6. A space-based radio frequency transient event classifier

    Energy Technology Data Exchange (ETDEWEB)

    Moore, K.R.; Blain, C.P.; Caffrey, M.P.; Franz, R.C.; Henneke, K.M.; Jones, R.G.

    1998-03-01

    The Department of Energy is currently investigating economical and reliable techniques for space-based nuclear weapon treaty verification. Nuclear weapon detonations produce RF transients that are signatures of illegal nuclear weapons tests. However, there are many other sources of RF signals, both natural and man-made. Direct digitization of RF signals requires rates of 300 MSamples per second and produces 10^13 samples per day of data to analyze. It is impractical to store and downlink all digitized RF data from such a satellite without a prohibitively expensive increase in the number and capacities of ground stations. Reliable and robust data processing and information extraction must be performed onboard the spacecraft in order to reduce downlinked data to a reasonable volume. The FORTE (Fast On-Orbit Recording of Transient Events) satellite records RF transients in space. These transients will be classified onboard the spacecraft with an Event Classifier, specialized hardware that performs signal preprocessing and neural network classification. The authors describe the Event Classifier requirements, scientific constraints, design and implementation.

  7. Automative Multi Classifier Framework for Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    R. Edbert Rajan

    2015-04-01

    Full Text Available Medical image processing is the technique used to create images of the human body for medical purposes. Nowadays, medical image processing plays a major role and poses challenging problems at critical stages in the medical field, and several studies have been carried out to enhance its techniques. However, owing to the shortcomings of some advanced technologies, many aspects still need further development. An existing study evaluated the efficacy of medical image analysis with level-set shape, fractal texture and intensity features to discriminate PF (posterior fossa) tumor from other tissues in brain images. To advance medical image analysis and disease diagnosis, an automatic subjective-optimality model was devised for image segmentation based on different sets of features selected from an unsupervised learning model of extracted features. After segmentation, the images are classified. In previous work, classification was performed by adapting a multiple-classifier framework based on the mutual information coefficient of the features selected for the segmentation procedures. In this study, to enhance the classification strategy, we plan to implement an enhanced multi-classifier framework for the analysis of medical images and disease diagnosis. The performance parameters used for the analysis of the proposed enhanced multi-classifier framework are multiple-class intensity, image quality and time consumption.

  8. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of the CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
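
    A hedged sketch of the classifier-guided sampling idea: a cheap classifier, trained on previously evaluated designs, screens random candidates so that only "promising" ones reach the expensive objective. A Gaussian Naive Bayes model stands in for the Bayesian network classifier, the objective is a toy function, and the evolutionary machinery of the real algorithm is omitted:

    ```python
    # Classifier-guided sampling sketch: filter candidates with a cheap model
    # before calling an expensive objective (illustrative, not the Sandia code).
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def expensive_objective(x):                 # stand-in for a costly simulation
        return np.sum((x - 0.7) ** 2)

    rng = np.random.default_rng(9)
    X_hist, y_hist = [], []

    # Seed the history with a few random evaluations.
    for _ in range(20):
        x = rng.random(5)
        X_hist.append(x)
        y_hist.append(expensive_objective(x))

    threshold = np.median(y_hist)
    clf = GaussianNB().fit(X_hist, np.array(y_hist) < threshold)   # 1 = promising

    evaluated = 0
    for _ in range(200):                        # candidate generation loop
        x = rng.random(5)
        if clf.predict_proba([x])[0, 1] > 0.5:  # skip non-promising candidates
            X_hist.append(x)
            y_hist.append(expensive_objective(x))
            evaluated += 1                      # (a real loop would also retrain)

    print("candidates actually evaluated:", evaluated, "best:", round(min(y_hist), 3))
    ```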

  9. Application of the Naive Bayesian Classifier to optimize treatment decisions

    International Nuclear Information System (INIS)

    Background and purpose: To study the accuracy, specificity and sensitivity of the Naive Bayesian Classifier (NBC) in the assessment of individual risk of cancer relapse or progression after radiotherapy (RT). Materials and methods: Data from 142 brain tumour patients irradiated from 2000 to 2005 were analyzed. Ninety-six attributes related to disease, patient and treatment were chosen. The attributes, in binary form, constituted the training set for NBC learning. The NBC calculated an individual conditional probability of a patient being assigned to either the relapse or progression group (1) or the no relapse or progression group (0). Accuracy, attribute selection and quality of the classifier were determined by comparison with actual treatment results, leave-one-out and cross-validation methods, respectively. The clinical setting test utilized data from 35 patients whose treatment results were unknown at the time of classification and were compared with the classification results after 3 months. Results: High classification accuracy (84%), specificity (0.87) and sensitivity (0.80) were achieved, both for classifier training and in progressive clinical evaluation. Conclusions: NBC is a useful tool to support the assessment of individual risk of relapse or progression in patients diagnosed with brain tumour undergoing postoperative RT.
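
    A minimal sketch of a Naive Bayes classifier on binary attributes with leave-one-out validation, mirroring the setup described above; the attribute matrix and labels are random placeholders, not the patient data:

    ```python
    # Bernoulli Naive Bayes on binary attributes with leave-one-out validation
    # and an individual risk estimate (synthetic data; illustrative only).
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(10)
    X = rng.integers(0, 2, size=(142, 96))       # binary attributes per patient
    y = rng.integers(0, 2, size=142)             # 1 = relapse/progression, 0 = none

    acc = cross_val_score(BernoulliNB(), X, y, cv=LeaveOneOut()).mean()
    risk = BernoulliNB().fit(X, y).predict_proba(X[:1])[0, 1]
    print(f"LOO accuracy: {acc:.2f}, individual risk for first patient: {risk:.2f}")
    ```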

  10. College students classified with ADHD and the foreign language requirement.

    Science.gov (United States)

    Sparks, Richard L; Javorsky, James; Philips, Lois

    2004-01-01

    The conventional assumption of most disability service providers is that students classified as having attention-deficit/hyperactivity disorder (ADHD) will experience difficulties in foreign language (FL) courses. However, the evidence in support of this assumption is anecdotal. In this empirical investigation, the demographic profiles, overall academic performance, college entrance scores, and FL classroom performance of 68 college students classified as having ADHD were examined. All students had graduated from the same university over a 5-year period. The findings showed that all 68 students had completed the university's FL requirement by passing FL courses. The students' college entrance scores were similar to the middle 50% of freshmen at this university, and their graduating grade point average was similar to the typical graduating senior at the university. The students had participated in both lower (100) and upper (200, 300, 400) level FL courses and had achieved mostly average and above-average grades (A, B, C) in these courses. One student had majored and eight students had minored in an FL. Two thirds of the students passed all of their FL courses without the use of instructional accommodations. In this study, the classification of ADHD did not appear to interfere with participants' performance in FL courses. The findings suggest that students classified as having ADHD should enroll in and fulfill the FL requirement by passing FL courses. PMID:15493238

  11. Image Classifying Registration for Gaussian & Bayesian Techniques: A Review

    Directory of Open Access Journals (Sweden)

    Rahul Godghate,

    2014-04-01

    Full Text Available We review a Bayesian technique for image classifying registration that performs image registration and pixel classification simultaneously. Medical image registration is critical for the fusion of complementary information about patient anatomy and physiology, for the longitudinal study of a human organ over time and the monitoring of disease development or treatment effect, for the statistical analysis of population variation in comparison to a so-called digital atlas, for image-guided therapy, etc. A Bayesian technique for image classifying registration is well suited to dealing with image pairs that contain two classes of pixels with different inter-image intensity relationships. We show through different experiments that the model can be applied in many different ways. For instance, if the class map is known, it can be used for template-based segmentation; if the full model is used, it can be applied to lesion detection by image comparison. Experiments have been conducted on both real and simulated data. They show that in the presence of an extra class, the classifying registration improves both the registration and the detection, especially when the deformations are small. The proposed model is defined using only two classes, but it is straightforward to extend it to an arbitrary number of classes.

  12. Exploiting Language Models to Classify Events from Twitter.

    Science.gov (United States)

    Vo, Duc-Thuan; Hai, Vo Thuan; Ock, Cheol-Young

    2015-01-01

    Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with learning language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied based on large text corpora within computational linguistic relations. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on the tweets' features, including common term words and relationships among their distinguishing term words. This makes it explicit and convenient to apply k-nearest neighbor techniques for classification. We carefully applied experiments on the Edinburgh Twitter Corpus to show that our method achieves competitive results for classifying events. PMID:26451139

  13. Exploiting Language Models to Classify Events from Twitter

    Directory of Open Access Journals (Sweden)

    Duc-Thuan Vo

    2015-01-01

    Full Text Available Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with learning language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied based on large text corpora within computational linguistic relations. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on the tweets’ features, including common term words and relationships among their distinguishing term words. This makes it explicit and convenient to apply k-nearest neighbor techniques for classification. We carefully applied experiments on the Edinburgh Twitter Corpus to show that our method achieves competitive results for classifying events.

  14. Analysis of classifiers performance for classification of potential microcalcification

    Science.gov (United States)

    M. N., Arun K.; Sheshadri, H. S.

    2013-07-01

    Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis. Mammography is a screening tool used for early detection of breast cancer. About 10-30% of cases are missed during routine checks, as it is difficult for radiologists to make accurate analyses given the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer. It has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms, and histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny, vary in size, shape, and distribution, and may be closely connected to surrounding tissue. Using traditional classifiers for the classification of individual potential MCs is a major challenge, because processing mammograms at the appropriate stage generates data sets with an unequal amount of information for the two classes (i.e., MC and not-MC). Most existing state-of-the-art classification approaches are developed under the assumption that the underlying training set is evenly distributed, and they face a severe bias problem when the training set is highly imbalanced. This paper addresses this issue by using classifiers which handle imbalanced data sets, and compares the performance of classifiers used in the classification of potential MCs.
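
    A hedged sketch of one standard way to handle such imbalance, comparing an unweighted SVM with a class-weighted one on a synthetic MC / not-MC data set; the data and the choice of class weighting as the remedy are illustrative assumptions, not the paper's method:

    ```python
    # Effect of class weighting on minority-class (MC) sensitivity with a
    # heavily imbalanced synthetic data set (illustrative only).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(11)
    n_mc, n_not = 40, 760                                    # heavily imbalanced classes
    X = np.vstack([rng.normal(1.0, 1.0, (n_mc, 10)),
                   rng.normal(0.0, 1.0, (n_not, 10))])
    y = np.array([1] * n_mc + [0] * n_not)                   # 1 = microcalcification

    for weights in (None, "balanced"):
        pred = cross_val_predict(SVC(class_weight=weights), X, y, cv=5)
        print(weights, "MC sensitivity:", round(recall_score(y, pred), 2))
    ```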

  15. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization in order to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  16. ASYMBOOST-BASED FISHER LINEAR CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Wang Xianji; Ye Xueyi; Li Bin; Li Xin; Zhuang Zhenquan

    2008-01-01

    When using AdaBoost to select discriminant features from some feature space (e.g. Gabor feature space) for face recognition, a cascade structure is usually adopted to leverage the asymmetry in the distribution of positive and negative samples. Each node in the cascade structure is a classifier trained by AdaBoost with an asymmetric learning goal of a high recognition rate but only a moderately low false positive rate. One limitation of AdaBoost arises in the context of skewed example distributions and cascade classifiers: AdaBoost minimizes the classification error, which is not guaranteed to achieve the asymmetric node learning goal. In this paper, we propose to use asymmetric AdaBoost (AsymBoost) as a mechanism to address the asymmetric node learning goal. Moreover, the two parts of selecting features and forming ensemble classifiers are decoupled, both of which occur simultaneously in AsymBoost and AdaBoost. Fisher Linear Discriminant Analysis (FLDA) is used on the selected features to learn a linear discriminant function that maximizes the separability of data among the different classes, which we think can improve the recognition performance. The proposed algorithm is demonstrated with face recognition using a Gabor based representation on the FERET database. Experimental results show that the proposed algorithm yields better recognition performance than AdaBoost itself.

  17. Early Detection of Breast Cancer using SVM Classifier Technique

    Directory of Open Access Journals (Sweden)

    Y.Ireaneus Anna Rejani

    2009-11-01

    Full Text Available This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on the solution of two problems: how to detect tumors as suspicious regions with a very weak contrast to their background, and how to extract features which categorize tumors. The tumor detection method follows the scheme of (a) mammogram enhancement, (b) segmentation of the tumor area, (c) extraction of features from the segmented tumor area, and (d) use of an SVM classifier. Enhancement can be defined as conversion of the image quality to a better and more understandable level. The mammogram enhancement procedure includes filtering, a top-hat operation and the DWT; contrast stretching is then used to increase the contrast of the image. Segmentation of mammogram images plays an important role in improving the detection and diagnosis of breast cancer, and the most common segmentation method used is thresholding. Features are extracted from the segmented breast area, and the final stage classifies the regions using the SVM classifier. The method was tested on 75 mammographic images from the mini-MIAS database, and the methodology achieved a sensitivity of 88.75%.

  18. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  19. Device for removing blackheads

    Energy Technology Data Exchange (ETDEWEB)

    Berkovich, Tamara (116 N. Wetherly Dr., Suite 115, Los Angeles, CA)

    1995-03-07

    A device for removing blackheads from pores in the skin, having an elongated handle with a spoon-shaped portion mounted on one end thereof, the spoon having multiple small holes piercing therethrough. Also covered is a method for using the device to remove blackheads.

  20. Freeman Chain Code (FCC) Representation in Signature Fraud Detection Based On Nearest Neighbour and Artificial Neural Network (ANN) Classifiers

    Directory of Open Access Journals (Sweden)

    Aini Najwa Azmi

    2014-12-01

    Full Text Available This paper presents a signature verification system that uses the Freeman Chain Code (FCC) as the directional feature and data representation. A total of 47 features, derived from six global features, were extracted from the signature images. Before feature extraction, the raw images underwent pre-processing stages, namely binarization, noise removal using a median filter, cropping, and thinning, to produce a Thinned Binary Image (TBI). The Euclidean distance is measured and matched between nearest neighbours to find the result. The MCYT-SignatureOff-75 database was used. Based on our experiment, the lowest FRR achieved is 6.67% and the lowest FAR is 12.44%, with only 1.12 seconds of computational time for the nearest neighbour classifier. The results are compared with an Artificial Neural Network (ANN) classifier.
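
    A minimal sketch of 8-directional Freeman chain coding of an ordered contour; the direction convention (0 = +x, counter-clockwise) and the toy square contour are assumptions for illustration, whereas the paper applies the coding to thinned signature strokes:

    ```python
    # 8-directional Freeman chain code of an ordered pixel contour (toy example).
    import numpy as np

    # Direction index for each unit step, one common convention:
    # 0 = +x, then counter-clockwise.
    DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def freeman_chain_code(points):
        """points: ordered (x, y) boundary pixels of a connected contour."""
        code = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
            code.append(DIRS[step])
        return code

    square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
    print(freeman_chain_code(square))   # -> [0, 0, 2, 2, 4, 4, 6, 6]
    ```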

  1. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
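
    A minimal sketch of the final cleanup step mentioned above: regressing labelled noise-component time series out of voxel-wise data with least squares. The array shapes, random data and component labels are placeholders; this is not the FIX code itself:

    ```python
    # Regressing ICA noise-component time courses out of fMRI data (sketch).
    import numpy as np

    rng = np.random.default_rng(12)
    T, V, K = 200, 500, 10                 # timepoints, voxels, ICA components
    data = rng.standard_normal((T, V))     # fMRI time series (timepoints x voxels)
    mix = rng.standard_normal((T, K))      # ICA component time courses
    noise_idx = [0, 3, 7]                  # components labelled as noise (hand or auto)

    noise = mix[:, noise_idx]
    beta, *_ = np.linalg.lstsq(noise, data, rcond=None)   # fit noise to each voxel
    cleaned = data - noise @ beta                         # remove the fitted noise

    print(cleaned.shape)
    ```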

  2. Use RAPD Analysis to Classify Tea Trees in Yunnan

    Institute of Scientific and Technical Information of China (English)

    SHAO Wan-fang; PANG Rui-hua; DUAN Hong-xing; WANG Ping-sheng; XU Mei; ZHANG Ya-ping; LI Jia-hua

    2003-01-01

    RAPD assessment of genetic variations of 45 tea trees in Yunnan was carried out. Eight primers selected from 40 random primers were used to amplify the 45 tea samples, and a total of 95 DNA bands were amplified, of which 90 (94.7%) were polymorphic. The average number of DNA bands amplified by each primer was 11.5. Based on the results of UPGMA cluster analysis of the 95 DNA bands amplified by the 8 primers, all the tested materials could be classified into 7 groups, including 5 complex groups and 2 simple groups, which was basically identical with the morphological classification. In addition, there were some speciations in the 2 simple groups.

  3. Classifying the future of universes with dark energy

    International Nuclear Information System (INIS)

    We classify the future of the universe for general cosmological models including matter and dark energy. If the equation of state of dark energy is less than -1, the age of the universe becomes finite. We compute the remaining lifetime of the universe for such models. The behaviour of the future growth of matter density perturbations is also studied. We find that the collapse of the spherical overdensity region is greatly changed if the equation of state of dark energy is less than -1
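
    A brief sketch of the standard result behind the finite-age statement, assuming a constant dark-energy equation of state w < -1 that dominates at late times (matter neglected); the prefactor in the last line is the commonly quoted estimate and is stated here only as an approximation.

```latex
% Late-time Friedmann equation with constant w < -1, matter neglected:
\[
  H^2=\Bigl(\frac{\dot a}{a}\Bigr)^{2}\propto a^{-3(1+w)}
  \;\;\Longrightarrow\;\;
  a(t)\propto\bigl(t_{\mathrm{rip}}-t\bigr)^{2/[3(1+w)]},\qquad 1+w<0,
\]
% so the scale factor diverges at a finite time t_rip, and the remaining
% lifetime of such a universe is of order
\[
  t_{\mathrm{rip}}-t_{0}\simeq\frac{2}{3\,|1+w|\,H_{0}\sqrt{1-\Omega_{m}}}.
\]
```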

  4. On-line computing in a classified environment

    International Nuclear Information System (INIS)

    Westinghouse Hanford Company (WHC) recently developed a Department of Energy (DOE) approved real-time, on-line computer system to control nuclear material. The system simultaneously processes both classified and unclassified information. Implementation of this system required application of many security techniques. The system has a secure but user-friendly interface. Many software applications protect the integrity of the database from malevolent or accidental errors. Programming practices ensure the integrity of the computer system software. The audit trail and the reports generation capability record user actions and the status of the nuclear material inventory

  5. Some factors influencing interobserver variation in classifying simple pneumoconiosis.

    OpenAIRE

    Musch, D C; Higgins, I T; Landis, J R

    1985-01-01

    Three experienced physician readers assessed the chest radiographs of 743 men from a coal mining community in West Virginia for the signs of simple pneumoconiosis, using the ILO U/C 1971 Classification of Radiographs of the Pneumoconioses. The number of films categorised by each reader as showing evidence of simple pneumoconiosis varied from 63 (8.5%) to 114 (15.3%) of the 743 films classified. The effect of film quality and obesity on interobserver agreement was assessed by use of kappa-type...

  6. Brain Computer Interface. Comparison of Neural Networks Classifiers.

    OpenAIRE

    Martínez Pérez, Jose Luis; Barrientos Cruz, Antonio

    2008-01-01

    Brain Computer Interface is an emerging technology that allows new output paths to communicate the user’s intentions without use of normal output ways, such as muscles or nerves (Wolpaw, J. R.; et al., 2002). In order to achieve this objective, BCI devices make use of a classifier which translates the inputs provided by the user’s brain signals into commands for external devices. The primary uses of this technology will benefit persons with some kind of blocking disease as for example: ALS, brainstem st...

  7. Support vector machine classifiers for large data sets.

    Energy Technology Data Exchange (ETDEWEB)

    Gertz, E. M.; Griffin, J. D.

    2006-01-31

    This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques--one a parallel direct solver, the other a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.
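
    To illustrate the problem structure being solved, here is a small dense sketch of training a soft-margin SVM by handing its dual quadratic program to an interior-point solver (CVXOPT). This is only an illustration under a linear kernel and toy data; it is not the distributed OOQP/PETSc implementation described in the report.

```python
# Sketch: solve the SVM dual QP  min 1/2 a^T Q a - 1^T a  s.t. 0 <= a <= C, y^T a = 0
# with CVXOPT's interior-point QP solver. Toy data; not the report's implementation.
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_qp(X, y, C=1.0):
    y = np.asarray(y, dtype=float)
    n = X.shape[0]
    K = X @ X.T                              # linear kernel matrix
    P = matrix(np.outer(y, y) * K, tc='d')
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1))
    b = matrix(0.0)
    solvers.options['show_progress'] = False
    alpha = np.array(solvers.qp(P, q, G, h, A, b)['x']).ravel()
    w = (alpha * y) @ X                      # primal weight vector
    sv = (alpha > 1e-6) & (alpha < C - 1e-6) # margin support vectors
    b0 = np.mean(y[sv] - X[sv] @ w) if sv.any() else 0.0
    return w, b0

# toy separable data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
w, b0 = svm_dual_qp(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b0) == y))
```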

  8. Classifying Cubic Edge-Transitive Graphs of Order 8p

    Indian Academy of Sciences (India)

    Mehdi Alaeiyan; M K Hosseinipoor

    2009-11-01

    A simple undirected graph is said to be semisymmetric if it is regular and edge-transitive but not vertex-transitive. Let p be a prime. It was shown by Folkman (J. Combin. Theory 3 (1967) 215--232) that a regular edge-transitive graph of order 2p or 2p^2 is necessarily vertex-transitive. In this paper, an extension of his result in the case of cubic graphs is given. It is proved that every cubic edge-transitive graph of order 8p is symmetric, and then all such graphs are classified.

  9. Thyroid gland removal - discharge

    Science.gov (United States)

    ... surgery to remove part or all of your thyroid gland. This operation is called thyroidectomy . You probably ... in just a few weeks. If you had thyroid cancer, you may need to have radioactive iodine ...

  10. Thyroid gland removal - discharge

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000293.htm Thyroid gland removal - discharge To use the sharing features ... surgery. This will make your scar show less. Thyroid Hormone Replacement You may need to take thyroid ...

  11. Laser Tattoo Removal

    Science.gov (United States)

    ... permanent tattoo for a variety of personal or fashion-related reasons. While there are many methods of ... aspects to trying to remove a stain from clothing. A stain that takes a split second to ...

  12. Gallbladder removal - laparoscopic

    Science.gov (United States)

    ... PA: Elsevier Saunders; 2012:chap 55. Read More Acute cholecystitis Chronic cholecystitis Gallbladder removal - open Gallstones Patient Instructions Bland diet Surgical wound care - open When you have nausea and vomiting ...

  13. Hardware removal - extremity

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/article/007644.htm Hardware removal - extremity To use the sharing features on this page, please enable JavaScript. Surgeons use hardware such as pins, plates, or screws to help ...

  14. Laparoscopic Adrenal Gland Removal

    Science.gov (United States)

    ... adrenal tumors that appear malignant. What are the Advantages of Laparoscopic Adrenal Gland Removal? In the past, ... of procedure and the patients overall condition. Common advantages are: Less postoperative pain Shorter hospital stay Quicker ...

  15. Hardware removal - extremity

    Science.gov (United States)

    Surgeons use hardware such as pins, plates, or screws to help fix a broken bone or to correct an abnormality in ... of pain or other problems related to the hardware, you may have surgery to remove the hardware. ...

  16. Graph removal lemmas

    OpenAIRE

    Conlon, David; Fox, Jacob

    2012-01-01

    The graph removal lemma states that any graph on n vertices with o(n^{v(H)}) copies of a fixed graph H may be made H-free by removing o(n^2) edges. Despite its innocent appearance, this lemma and its extensions have several important consequences in number theory, discrete geometry, graph theory and computer science. In this survey we discuss these lemmas, focusing in particular on recent improvements to their quantitative aspects.
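
    The quantitative form of the statement sketched above, written out for reference:

```latex
% Graph removal lemma: for every fixed graph H and every eps > 0 there exists
% delta > 0 such that, for any graph G on n vertices,
\[
  \#\{\text{copies of } H \text{ in } G\}\le \delta\, n^{v(H)}
  \;\Longrightarrow\;
  G \text{ can be made } H\text{-free by deleting at most } \varepsilon n^{2} \text{ edges.}
\]
```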

  17. Metal Removal in Wastewater

    OpenAIRE

    Sanchez Roldan, Laura

    2014-01-01

    The aim of this work was to study the copper removal capacity of different algae species and their mixtures in municipal wastewater. This project was implemented in the greenhouse in the laboratories of Tampere University of Applied Sciences, and the wastewater used was from the Tampere municipal wastewater treatment plant. Five algae species and three mixtures of them were tested for their copper removal potential in wastewater in one batch test run. The most efficient algae mixture...

  18. Random Sampling with Removal

    OpenAIRE

    Gärtner, Bernd; Lengler, Johannes; Szedlak, May

    2015-01-01

    Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant that we call random sampling with removal: suppose that after sampling the subset, we remove a fixed number of constraints from the sample, according to an arbitrary rule. Is it still true that the optimal solution of the reduced s...

  19. Laser hair removal pearls.

    Science.gov (United States)

    Tierney, Emily P; Goldberg, David J

    2008-03-01

    A number of lasers and light devices are now available for the treatment of unwanted hair. The goal of laser hair removal is to damage stem cells in the bulge of the follicle through the targeting of melanin, the endogenous chromophore for laser and light devices utilized to remove hair. The competing chromophores in the skin and hair, oxyhemoglobin and water, have a decreased absorption between 690 nm and 1000 nm, thus making this an ideal range for laser and light sources. Pearls of laser hair removal are presented in this review, focusing on four areas of recent development: (1) treatment of blond, white and gray hair; (2) paradoxical hypertrichosis; (3) laser hair removal in children; and (4) comparison of lasers and IPL. Laser and light-based technologies to remove hair represent one of the most exciting areas where discoveries by dermatologists have led to novel treatment approaches. It is likely that in the next decade, continued advancements in this field will bring us closer to the development of a more permanent and painless form of hair removal. PMID:18330794

  20. Multiobjective Optimization of Classifiers by Means of 3-D Convex Hull Based Evolutionary Algorithm

    OpenAIRE

    Zhao, Jiaqi; Fernandes, Vitor Basto; Jiao, Licheng; Yevseyeva, Iryna; Maulana, Asep; Li, Rui; Bäck, Thomas; Emmerich, Michael T. M.

    2014-01-01

    Finding a good classifier is a multiobjective optimization problem with different error rates and the costs to be minimized. The receiver operating characteristic is widely used in the machine learning community to analyze the performance of parametric classifiers or sets of Pareto optimal classifiers. In order to directly compare two sets of classifiers the area (or volume) under the convex hull can be used as a scalar indicator for the performance of a set of classifiers in receiver operati...

  1. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    Directory of Open Access Journals (Sweden)

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-12-01

    Full Text Available This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student argumentative essays. The paragraphs selected were initial, middle, and final paragraphs, and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion and lexical sophistication and then modeled using statistical techniques. The paragraphs were also classified by human raters based on paragraph positioning. The reported model performed well above chance, with a classification accuracy similar to human judgments of paragraph type (66% accuracy for humans versus 65% accuracy for our model). The model's accuracy increased when longer paragraphs that provided more linguistic coverage and paragraphs judged by human raters to be of higher quality were examined. The findings support the notion that paragraph types contain specific linguistic features that allow them to be distinguished from one another. The findings reported in this study should prove beneficial in classroom writing instruction and in automated writing assessment.

  2. Automatic misclassification rejection for LDA classifier using ROC curves.

    Science.gov (United States)

    Menon, Radhika; Di Caterina, Gaetano; Lakany, Heba; Petropoulakis, Lykourgos; Conway, Bernard A; Soraghan, John J

    2015-08-01

    This paper presents a technique to improve the performance of an LDA classifier by determining if the predicted classification output is a misclassification and thereby rejecting it. This is achieved by automatically computing a class specific threshold with the help of ROC curves. If the posterior probability of a prediction is below the threshold, the classification result is discarded. This method of minimizing false positives is beneficial in the control of electromyography (EMG) based upper-limb prosthetic devices. It is hypothesized that a unique EMG pattern is associated with a specific hand gesture. In reality, however, EMG signals are difficult to distinguish, particularly in the case of multiple finger motions, and hence classifiers are trained to recognize a set of individual gestures. However, it is imperative that misclassifications be avoided because they result in unwanted prosthetic arm motions which are detrimental to device controllability. This warrants the need for the proposed technique wherein a misclassified gesture prediction is rejected resulting in no motion of the prosthetic arm. The technique was tested using surface EMG data recorded from thirteen amputees performing seven hand gestures. Results show the number of misclassifications was effectively reduced, particularly in cases with low original classification accuracy. PMID:26736304
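
    A sketch of the general idea described above: derive one posterior-probability threshold per class from a class-versus-rest ROC curve and reject predictions that fall below it. The dataset, and the rule used to pick each threshold (the point closest to the top-left corner), are assumptions for illustration; the paper's exact criterion may differ.

```python
# Sketch of posterior-probability rejection for an LDA classifier using per-class
# ROC curves. Dataset and threshold-selection rule are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
proba_tr = lda.predict_proba(X_tr)

# one threshold per class, chosen from the class-vs-rest ROC curve
thresholds = {}
for c in np.unique(y_tr):
    fpr, tpr, thr = roc_curve(y_tr == c, proba_tr[:, c])
    best = np.argmin(np.hypot(fpr, 1.0 - tpr))        # point closest to (0, 1)
    thresholds[c] = thr[best]

proba_te = lda.predict_proba(X_te)
pred = proba_te.argmax(axis=1)
accepted = np.array([proba_te[i, p] >= thresholds[p] for i, p in enumerate(pred)])

print("rejected predictions:", int((~accepted).sum()), "of", len(pred))
print("accuracy on accepted:", float((pred[accepted] == y_te[accepted]).mean()))
```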

  3. Gamma mixture classifier for plaque detection in intravascular ultrasonic images.

    Science.gov (United States)

    Vegas-Sánchez-Ferrero, Gonzalo; Seabra, José; Rodriguez-Leor, Oriol; Serrano-Vida, Angel; Aja-Fernández, Santiago; Palencia, César; Martín-Fernández, Marcos; Sanches, Joao

    2014-01-01

    Carotid and coronary vascular incidents are mostly caused by vulnerable plaques. Detection and characterization of vulnerable plaques are important for early disease diagnosis and treatment. For this purpose, the echomorphology and composition have been studied. Several distributions have been used to describe ultrasonic data depending on tissues, acquisition conditions, and equipment. Among them, the Rayleigh distribution is a one-parameter model used to describe the raw envelope RF ultrasound signal for its simplicity, whereas the Nakagami distribution (a generalization of the Rayleigh distribution) is the two-parameter model which is commonly accepted. However, it fails to describe B-mode images or Cartesian interpolated or subsampled RF images because linear filtering changes the statistics of the signal. In this work, a gamma mixture model (GMM) is proposed to describe the subsampled/interpolated RF images and it is shown that the parameters and coefficients of the mixture are useful descriptors of speckle pattern for different types of plaque tissues. This new model outperforms recently proposed probabilistic and textural methods with respect to plaque description and characterization of echogenic contents. Classification results provide an overall accuracy of 86.56% for four classes and 95.16% for three classes. These results evidence the classifier usefulness for plaque characterization. Additionally, the classifier provides probability maps according to each tissue type, which can be displayed for inspecting local tissue composition, or used for automatic filtering and segmentation. PMID:24402895

  4. Decision Tree Classifiers for Star/Galaxy Separation

    CERN Document Server

    Vasconcellos, E C; Gal, R R; LaBarbera, F L; Capelato, H V; Velho, H F Campos; Trevisan, M; Ruiz, R S R

    2010-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of $884,126$ SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: $14\\le r\\le21$ ($85.2%$) and $r\\ge19$ ($82.1%$). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT and Ball et al. (2006). We find that our FT classifier is comparable or better in completeness over the full magnitude range $15\\le r\\le21$, with m...
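
    A generic illustration of a decision-tree star/galaxy separator with a "completeness" (recall) measurement. The synthetic features stand in for SDSS photometry and a plain scikit-learn tree stands in for the Functional Tree (FT) algorithm of the paper; both substitutions are assumptions.

```python
# Sketch: decision-tree star/galaxy separation with completeness (recall) per class.
# Synthetic photometric-like features; not the SDSS pipeline or the FT algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 5000
# fake features, e.g. a psf-minus-model magnitude difference and a concentration index
stars = np.column_stack([rng.normal(0.02, 0.05, n), rng.normal(2.3, 0.2, n)])
galaxies = np.column_stack([rng.normal(0.30, 0.15, n), rng.normal(3.0, 0.4, n)])
X = np.vstack([stars, galaxies])
y = np.array([0] * n + [1] * n)               # 0 = star, 1 = galaxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=50).fit(X_tr, y_tr)
pred = tree.predict(X_te)

print("galaxy completeness:", recall_score(y_te, pred, pos_label=1))
print("star completeness:  ", recall_score(y_te, pred, pos_label=0))
```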

  5. Elephants classify human ethnic groups by odor and garment color.

    Science.gov (United States)

    Bates, Lucy A; Sayialel, Katito N; Njiraini, Norah W; Moss, Cynthia J; Poole, Joyce H; Byrne, Richard W

    2007-11-20

    Animals can benefit from classifying predators or other dangers into categories, tailoring their escape strategies to the type and nature of the risk. Studies of alarm vocalizations have revealed various levels of sophistication in classification. In many taxa, reactions to danger are inflexible, but some species can learn the level of threat presented by the local population of a predator or by specific, recognizable individuals. Some species distinguish several species of predator, giving differentiated warning calls and escape reactions; here, we explore an animal's classification of subgroups within a species. We show that elephants distinguish at least two Kenyan ethnic groups and can identify them by olfactory and color cues independently. In the Amboseli ecosystem, Kenya, young Maasai men demonstrate virility by spearing elephants (Loxodonta africana), but Kamba agriculturalists pose little threat. Elephants showed greater fear when they detected the scent of garments previously worn by Maasai than by Kamba men, and they reacted aggressively to the color associated with Maasai. Elephants are therefore able to classify members of a single species into subgroups that pose different degrees of danger. PMID:17949977

  6. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Full Text Available Search engines are used for retrieving information from the web. Most of the time, the importance is laid on the top 10 results; sometimes this may shrink to the top 5, because of the time constraint and reliance on the search engines. Users believe that the top 10 or 5 of the total results are more relevant. Here comes the problem of spamdexing. It is a method to deceive the search result quality. Falsified metrics, such as inserting an enormous amount of keywords or links in a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on the RepTree (regression tree) representative. As an initial step, link-based features such as neighbors, pagerank, truncated pagerank, trustrank and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are taken for the experiments. This paper focuses on link spam alone. Finally, a representative tree is created which will more precisely classify the web spam entries. Results are given. Regression tree classification seems to perform well, as shown through the experiments.

  7. Deposition of Nanostructured Thin Film from Size-Classified Nanoparticles

    Science.gov (United States)

    Camata, Renato P.; Cunningham, Nicholas C.; Seol, Kwang Soo; Okada, Yoshiki; Takeuchi, Kazuo

    2003-01-01

    Materials comprising nanometer-sized grains (approximately 1-50 nm) exhibit properties dramatically different from those of their homogeneous and uniform counterparts. These properties vary with size, shape, and composition of nanoscale grains. Thus, nanoparticles may be used as building blocks to engineer tailor-made artificial materials with desired properties, such as non-linear optical absorption, tunable light emission, charge-storage behavior, selective catalytic activity, and countless other characteristics. This bottom-up engineering approach requires exquisite control over nanoparticle size, shape, and composition. We describe the design and characterization of an aerosol system conceived for the deposition of size-classified nanoparticles whose performance is consistent with these strict demands. A nanoparticle aerosol is generated by laser ablation and sorted according to size using a differential mobility analyzer. Nanoparticles within a chosen window of sizes (e.g., 8.0 ± 0.6 nm) are deposited electrostatically on a surface forming a film of the desired material. The system allows the assembly and engineering of thin films using size-classified nanoparticles as building blocks.

  8. Deep convolutional neural networks for classifying GPR B-scans

    Science.gov (United States)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
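
    A minimal PyTorch sketch of a CNN that maps a 2-D B-scan patch to a threat/no-threat score, in the spirit of the approach above. The architecture, input size and dropout rate are illustrative assumptions, not the network used in the paper.

```python
# Minimal CNN sketch for classifying 2-D GPR B-scan patches as threat / no-threat.
# Architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class BScanCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                      # "dropout" regularization
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                         # x: (batch, 1, 64, 64) patches
        return self.classifier(self.features(x))

model = BScanCNN()
dummy = torch.randn(4, 1, 64, 64)                 # four fake B-scan patches
print(model(dummy).shape)                         # -> torch.Size([4, 2])
```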

  9. Using Narrow Band Photometry to Classify Stars and Brown Dwarfs

    CERN Document Server

    Mainzer, A K; Sievers, J L; Young, E T; McLean, Ian S.

    2004-01-01

    We present a new system of narrow band filters in the near infrared that can be used to classify stars and brown dwarfs. This set of four filters, spanning the H band, can be used to identify molecular features unique to brown dwarfs, such as H2O and CH4. The four filters are centered at 1.495 um (H2O), 1.595 um (continuum), 1.66 um (CH4), and 1.75 um (H2O). Using two H2O filters allows us to solve for individual objects' reddenings. This can be accomplished by constructing a color-color-color cube and rotating it until the reddening vector disappears. We created a model of predicted color-color-color values for different spectral types by integrating filter bandpass data with spectra of known stars and brown dwarfs. We validated this model by making photometric measurements of seven known L and T dwarfs, ranging from L1 - T7.5. The photometric measurements agree with the model to within +/-0.1 mag, allowing us to create spectral indices for different spectral types. We can classify A through early M stars to...

  10. Integrating language models into classifiers for BCI communication: a review

    Science.gov (United States)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain–computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  11. Energy-aware embedded classifier design for real-time emotion analysis.

    Science.gov (United States)

    Padmanabhan, Manoj; Murali, Srinivasan; Rincon, Francisco; Atienza, David

    2015-08-01

    Detection and classification of human emotions from multiple bio-signals has a wide variety of applications. Though electronic devices are available in the market today that acquire multiple body signals, the classification of human emotions in real-time, adapted to the tight energy budgets of wearable embedded systems is a big challenge. In this paper we present an embedded classifier for real-time emotion classification. We propose a system that operates at different energy budgeted modes, depending on the available energy, where each mode is constrained by an operating energy bound. The classifier has an offline training phase where feature selection is performed for each operating mode, with an energy-budget aware algorithm that we propose. Across the different operating modes, the classification accuracy ranges from 95% - 75% and 89% - 70% for arousal and valence respectively. The accuracy is traded off for less power consumption, which results in an increased battery life of up to 7.7 times (from 146.1 to 1126.9 hours). PMID:26736746

  12. Classifying regional development in Iran (Application of Composite Index Approach

    Directory of Open Access Journals (Sweden)

    A. Sharifzadeh

    2012-01-01

    Full Text Available Extended abstract. 1- Introduction. The spatial economy of Iran, like that of so many other developing countries, is characterized by an uneven spatial pattern of economic activities. The problem of spatial inequality emerged when efficiency-oriented sectoral policies came into conflict with the spatial dimension of development (Atash, 1988). Due to this conflict, extremely imbalanced development in Iran was created. Moreover, knowledge of the spatially uneven distribution of economic activities in Iran is incomplete. So, there is an urgent need for more efficient and effective design, targeting and implementation of interventions to manage spatial imbalances in development. Hence, the identification of development patterns at the spatial scale and the factors generating them can help improve planning if development programs are focused on removing the constraints adversely affecting development in potentially good areas. There is a need for research that would describe and explain the problem of spatial development patterns as well as propose possible strategies which can be used to develop the country and reduce the spatial imbalances. The main objective of this research was to determine spatial economic development levels in order to identify the spatial pattern of development and explain the determinants of such imbalance in Iran, based on the methodology of a composite index of development. Then, Iran's provinces were ranked and classified according to the calculated composite index. To collect the required data, the census of 2006 and yearbooks from various years were used. 2- Theoretical bases. Theories of regional inequality as well as empirical evidence regarding actual trends at the national or international level have been discussed and debated in the economic literature for over three decades. Early debates concerning the impact of market mechanisms on regional inequality in the West (Myrdal, 1957) have become popular again in the 1990s. There is a conflict on probable outcomes

  13. Assessment of the optimum degree of Sr3Fe2MoO9 electron-doping through oxygen removal: An X-ray powder diffraction and 57Fe Moessbauer spectroscopy study

    International Nuclear Information System (INIS)

    We describe the preparation and structural characterization by X-ray powder diffraction (XRPD) and Moessbauer spectroscopy of three electron-doped perovskites Sr3Fe2MoO9-δ with Fe/Mo = 2 obtained from Sr3Fe2MoO9. The compounds were synthesized by topotactic reduction with H2/N2 (5/95) at 600, 700 and 800 °C. Above 800 °C the Fe/Mo ratio changes from Fe/Mo = 2-1 °C are only in the high-spin Fe3+ electronic state.

  14. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento

    2013-11-01

    Full Text Available In this paper the performance of a multi-class least squares support vector machine (LS-SVM) is compared with that of a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). To develop the comparison, a data set was used consisting of 5000 images of handwritten numeric digits (500 images for each number from 0-9), each image of 20 x 20 pixels. The inputs to each of the systems were vectors of 400 dimensions corresponding to each image (no feature extraction was performed). Both classifiers used a OneVsAll strategy to enable multi-classification and a random cross-validation function for the process of minimizing the cost function. The metrics of comparison were precision and training time under the same computational conditions. Both techniques evaluated showed a precision above 95%, with LS-SVM slightly more accurate. However, for the computational cost we found a marked difference: LS-SVM training requires 16.42% less time than that required by the logistic regression model under the same low computational conditions.
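
    A rough sketch of the same kind of comparison using scikit-learn's 8x8 digits data in place of the 20x20 image set, an RBF SVC standing in for the least-squares SVM, and a multinomial logistic regression in place of the one-vs-all model; all of these substitutions are assumptions for illustration only.

```python
# Sketch: compare an SVM and logistic regression on handwritten digits,
# reporting test accuracy and training time. Dataset and models are stand-ins.
import time
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVC (RBF)", SVC(kernel="rbf", C=10.0)),
                  ("Logistic regression", LogisticRegression(max_iter=1000))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    acc = clf.score(X_te, y_te)
    print(f"{name:22s} accuracy={acc:.3f} training time={elapsed:.3f}s")
```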

  15. Discrimination of Mine Seismic Events and Blasts Using the Fisher Classifier, Naive Bayesian Classifier and Logistic Regression

    Science.gov (United States)

    Dong, Longjun; Wesseloo, Johan; Potvin, Yves; Li, Xibing

    2016-01-01

    Seismic events and blasts generate seismic waveforms that have different characteristics. The challenge to confidently differentiate these two signatures is complex and requires the integration of physical and statistical techniques. In this paper, the different characteristics of blasts and seismic events were investigated by comparing probability density distributions of different parameters. Five typical parameters of blasts and events and the probability density functions of blast time, as well as probability density functions of origin time difference for neighbouring blasts were extracted as discriminant indicators. The Fisher classifier, naive Bayesian classifier and logistic regression were used to establish discriminators. Databases from three Australian and Canadian mines were established for training, calibrating and testing the discriminant models. The classification performances and discriminant precision of the three statistical techniques were discussed and compared. The proposed discriminators have explicit and simple functions which can be easily used by workers in mines or researchers. Back-test, applied results, cross-validated results and analysis of receiver operating characteristic curves in different mines have shown that the discriminator for one of the mines has a reasonably good discriminating performance.

  16. Semantic classification of diseases in discharge summaries using a context-aware rule-based classifier.

    Science.gov (United States)

    Solt, Illés; Tikk, Domonkos; Gál, Viktor; Kardkovács, Zsolt T

    2009-01-01

    OBJECTIVE Automated and disease-specific classification of textual clinical discharge summaries is of great importance in human life science, as it helps physicians to make medical studies by providing statistically relevant data for analysis. This can be further facilitated if, at the labeling of discharge summaries, semantic labels are also extracted from text, such as whether a given disease is present, absent, questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves the semantic classification task. DESIGN The authors introduce a context-aware rule-based semantic classification technique for use on clinical discharge summaries. The classification is performed in subsequent steps. First, some misleading parts are removed from the text; then the text is partitioned into positive, negative, and uncertain context segments, then a sequence of binary classifiers is applied to assign the appropriate semantic labels. MEASUREMENT For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures: F(1)-macro and F(1)-micro for measurements. RESULTS On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, and achieved a F(1)-macro = 0.80 for the textual and F(1)-macro = 0.67 for the intuitive tasks, and obtained second place at the textual and first place at the intuitive subtasks of the challenge. CONCLUSIONS The authors show in the paper that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques, if the training data are limited and some semantic labels are very sparse. PMID:19390101

  17. One piece reactor removal

    International Nuclear Information System (INIS)

    Japan Research Reactor No.3 (JRR-3) was the first reactor consisting of 'Japanese-made' components alone except for fuel and heavy water. After reaching its initial critical state in September 1962, JRR-3 had been in operation for 21 years until March 1983. It was decided that the reactor be removed en-bloc in view of the work schedule, cost and management of the reactor following the removal. In the special method developed jointly by the Japanese Atomic Energy Research Institute and Shimizu Construction Co., Ltd., the reactor main unit was cut off from the building by continuous core boring, with its major components bound in the block with biological shield material (heavy concrete), and then conveyed and stored in a large waste store building constructed near the reactor building. Major work processes described in this report include the cutting off, lifting, horizontal conveyance and lowering of the reactor main unit. The removal of the JRR-3 reactor main unit was successfully carried out safely and quickly by the en-bloc removal method, with the radiation exposure dose of the workers kept to a minimum. Thus the high performance of the en-bloc removal method was demonstrated and, in addition, valuable knowhow and other data were obtained from the work. (Nogami, K.)

  18. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    CERN Document Server

    Luqman, Muhammad Muzzamil; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well known challenge in the field of document image analysis and is at the heart of most graphic recognition systems. Our method uses a structural approach for symbol representation and a statistical classifier for symbol recognition. In our system we represent symbols by their graph based signatures: a graphic symbol is vectorized and is converted to an attributed relational graph, which is used for computing a feature vector for the symbol. This signature corresponds to the geometry and topology of the symbol. We learn a Bayesian network to encode the joint probability distribution of symbol signatures and use it in a supervised learning scenario for graphic symbol recognition. We have evaluated our method on synthetically deformed and degraded images of pre-segmented 2D architectural and electronic symbols from GREC databases and have obtained encouraging recognition rates.

  19. Whole toxicity removal for industrial and domestic effluents treated with electron beam radiation, evaluated with Vibrio fischeri, Daphnia similis and Poecilia reticulata; Reducao da toxicidade aguda de efluentes industriais e domesticos tratados por irradiacao com feixe de eletrons, avaliada com as especies Vibrio fischeri, Daphnia similis and Poecilia reticulata

    Energy Technology Data Exchange (ETDEWEB)

    Borrely, Sueli Ivone

    2001-07-01

    Several studies have been performed at IPEN in order to apply ionizing radiation to the treatment of real, complex effluents from different sources. This paper shows the results of such an application to influents and effluents from the Suzano Wastewater Treatment Plant (Suzano WTP), Sao Paulo, operated by SABESP. The purpose of the work was to evaluate the radiation technology from ecotoxicological aspects. The evaluation was carried out on a toxicity basis and included three sampling sites: complex industrial effluents; domestic sewage mixed with the industrial discharge (GM); and the final secondary effluent. The test organisms for toxicity evaluation were the marine bacterium Vibrio fischeri, the microcrustacean Daphnia similis and the guppy Poecilia reticulata. The fish tests were applied only to the secondary final effluents. The results demonstrated the original acute toxicity levels as well as the efficiency of the electron beam for their reduction. An important acute toxicity removal was achieved: from 75% up to 95% with 50 kGy (UNA), 20 kGy (GM) and 5.0 kGy for the final effluent. The toxicity removal was a consequence of several organic solvents being decomposed by radiation, and the acute toxicity reduction was about 95%. When the toxicity was evaluated with fish, the radiation efficiency reached from 40% to 60%. The hypothesis tests showed a statistically significant removal under the developed study conditions. No residual hydrogen peroxide was found after 5.0 kGy was applied to the final effluent. (author)

  20. Removal of depleted uranium from contaminated soils

    International Nuclear Information System (INIS)

    Contamination of soil and water with depleted uranium (DU) has increased public health concerns due to the chemical toxicity of DU at elevated dosages. For this reason, there is great interest in developing methods for DU removal from contaminated sources. Two DU laden soils, taken from U.S. Army sites, were characterized for particle size distribution, total uranium concentration and removable uranium. Soil A was found to be a well graded sand containing a total of 3210 mg/kg DU (3.99 x 10^4 Bq/kg, where a Becquerel (Bq) is a unit of radiation). About 83% of the DU in the fines fraction (particle diameter 4 Bq/kg)) was associated with the carbonate, iron and manganese oxide and organic matter fractions of the material. Soil B was classified as a sandy silt with total DU of 1560 mg/kg (1.94 x 10^4 Bq/kg). The DU content in the fines fraction was 5171 mg/kg (6.43 x 10^4 Bq/kg). Sequential extraction of the Soil B fines fraction indicated that 64% of the DU was present either as soluble U(VI) minerals or as insoluble U(IV). Citric acid, sodium bicarbonate and hydrogen peroxide were used in batch experiments to extract DU from the fines fraction of both soils. Citric acid and sodium bicarbonate were relatively successful for Soil A (50-60% DU removal), but not for Soil B (20-35% DU removal). Hydrogen peroxide was found to significantly increase DU extraction from both soils, attaining removals up to 60-80%

  1. Rheological evaluation of pretreated cladding removal waste

    Energy Technology Data Exchange (ETDEWEB)

    McCarthy, D.; Chan, M.K.C.; Lokken, R.O.

    1986-01-01

    Cladding removal waste (CRW) contains concentrations of transuranic (TRU) elements in the 80 to 350 nCi/g range. This waste will require pretreatment before it can be disposed of as glass or grout at Hanford. The CRW will be pretreated with a rare earth strike and solids removal by centrifugation to segregate the TRU fraction from the non-TRU fraction of the waste. The centrifuge centrate will be neutralized with sodium hydroxide. This neutralized cladding removal waste (NCRW) is expected to be suitable for grouting. The TRU solids removed by centrifugation will be vitrified. The goal of the Rheological Evaluation of Pretreated Cladding Removal Waste Program was to evaluate those rheological and transport properties critical to assuring successful handling of the NCRW and TRU solids streams and to demonstrate transfers in a semi-prototypic pumping environment. This goal was achieved by a combination of laboratory and pilot-scale evaluations. The results obtained during these evaluations were correlated with classical rheological models and scaled-up to predict the performance that is likely to occur in the full-scale system. The Program used simulated NCRW and TRU solid slurries. Rockwell Hanford Operations (Rockwell) provided 150 gallons of simulated CRW and 5 gallons of simulated TRU solid slurry. The simulated CRW was neutralized by Pacific Northwest Laboratory (PNL). The physical and rheological properties of the NCRW and TRU solid slurries were evaluated in the laboratory. The properties displayed by NCRW allowed it to be classified as a pseudoplastic or yield-pseudoplastic non-Newtonian fluid. The TRU solids slurry contained very few solids. This slurry exhibited the properties associated with a pseudoplastic non-Newtonian fluid.

  2. Rheological evaluation of pretreated cladding removal waste

    International Nuclear Information System (INIS)

    Cladding removal waste (CRW) contains concentrations of transuranic (TRU) elements in the 80 to 350 nCi/g range. This waste will require pretreatment before it can be disposed of as glass or grout at Hanford. The CRW will be pretreated with a rare earth strike and solids removal by centrifugation to segregate the TRU fraction from the non-TRU fraction of the waste. The centrifuge centrate will be neutralized with sodium hydroxide. This neutralized cladding removal waste (NCRW) is expected to be suitable for grouting. The TRU solids removed by centrifugation will be vitrified. The goal of the Rheological Evaluation of Pretreated Cladding Removal Waste Program was to evaluate those rheological and transport properties critical to assuring successful handling of the NCRW and TRU solids streams and to demonstrate transfers in a semi-prototypic pumping environment. This goal was achieved by a combination of laboratory and pilot-scale evaluations. The results obtained during these evaluations were correlated with classical rheological models and scaled-up to predict the performance that is likely to occur in the full-scale system. The Program used simulated NCRW and TRU solid slurries. Rockwell Hanford Operations (Rockwell) provided 150 gallons of simulated CRW and 5 gallons of simulated TRU solid slurry. The simulated CRW was neutralized by Pacific Northwest Laboratory (PNL). The physical and rheological properties of the NCRW and TRU solid slurries were evaluated in the laboratory. The properties displayed by NCRW allowed it to be classified as a pseudoplastic or yield-pseudoplastic non-Newtonian fluid. The TRU solids slurry contained very few solids. This slurry exhibited the properties associated with a pseudoplastic non-Newtonian fluid

  3. Higher School Marketing Strategy Formation: Classifying the Factors

    Directory of Open Access Journals (Sweden)

    N. K. Shemetova

    2012-01-01

    Full Text Available The paper deals with the main trends of higher school management strategy formation. The author specifies the educational changes in the modern information society determining the strategy options. For each professional training level the author denotes the set of strategic factors affecting the educational service consumers and, therefore, the effectiveness of the higher school marketing. The given factors are classified from the standpoints of the providers and consumers of educational services (enrollees, students, graduates and postgraduates). The research methods include statistical analysis and general methods of scientific analysis, synthesis, induction, deduction, comparison, and classification. The author is convinced that the university management should develop the necessary prerequisites for raising the graduates’ competitiveness in the labor market, and stimulate the active marketing policies of the relating subdivisions and departments. In the author’s opinion, the above classification of marketing strategy factors can be used as the system of values for educational service providers.

  4. Prediction of Pork Quality by Fuzzy Support Vector Machine Classifier

    Science.gov (United States)

    Zhang, Jianxi; Yu, Huaizhi; Wang, Jiamin

    Existing objective methods to evaluate pork quality generally do not yield satisfactory results, and their applications in the meat industry are limited. In this study, a fuzzy support vector machine (FSVM) method was developed to evaluate and predict pork quality rapidly and nondestructively. Firstly, the discrete wavelet transform (DWT) was used to eliminate the noise component in the original spectrum, and the new spectrum was reconstructed. Then, considering that the characteristic variables still exhibit correlation and contain some redundant information, principal component analysis (PCA) was carried out. Lastly, FSVM was developed to differentiate and classify pork samples into different quality grades using the features from PCA. Jackknife tests on the working datasets indicated that the prediction accuracies were higher than those of other methods.
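
    A sketch of the same kind of pipeline: wavelet denoising of each spectrum, PCA for dimensionality reduction, then an SVM grade classifier. A plain SVC stands in for the fuzzy SVM, the spectra are synthetic, and the use of PyWavelets with a universal soft threshold is an assumption, not the paper's procedure.

```python
# Sketch: DWT denoising -> PCA -> SVM grading of spectra. Synthetic data; a plain
# SVC stands in for the fuzzy SVM of the paper.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_denoise(spectrum, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients and reconstruct the spectrum."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

rng = np.random.default_rng(0)
n_samples, n_points = 120, 256
y = rng.integers(0, 3, n_samples)                             # three quality grades
X = np.array([np.sin(np.linspace(0, 4 * np.pi, n_points)) * (g + 1)
              + rng.normal(0, 0.3, n_points) for g in y])     # synthetic spectra

X_dn = np.array([wavelet_denoise(s) for s in X])
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print("cv accuracy:", cross_val_score(model, X_dn, y, cv=5).mean().round(3))
```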

  5. A new machine learning classifier for high dimensional healthcare data.

    Science.gov (United States)

    Padman, Rema; Bai, Xue; Airoldi, Edoardo M

    2007-01-01

    Data sets with many discrete variables and relatively few cases arise in health care, commerce, information security, and many other domains. Learning effective and efficient prediction models from such data sets is a challenging task. In this paper, we propose a new approach that combines Metaheuristic search and Bayesian Networks to learn a graphical Markov Blanket-based classifier from data. The Tabu Search enhanced Markov Blanket (TS/MB) procedure is based on the use of restricted neighborhoods in a general Bayesian Network constrained by the Markov condition, called Markov Blanket Neighborhoods. Computational results from two real world healthcare data sets indicate that the TS/MB procedure converges fast and is able to find a parsimonious model with substantially fewer predictor variables than in the full data set. Furthermore, it has comparable or better prediction performance when compared against several machine learning methods, and provides insight into possible causal relations among the variables. PMID:17911800

  6. Naive Bayes Classifier Algorithm Approach for Mapping Poor Families Potential

    Directory of Open Access Journals (Sweden)

    Sri Redjeki

    2015-12-01

    Full Text Available The high poverty rate recorded in Indonesia makes it a main priority for the government to find a solution that brings the poverty rate below 10%. Initial identification of potential poverty is a very important step in anticipating the size of the poverty rate. The Naive Bayes Classifier (NBC) algorithm is one of the data mining algorithms that can be used to classify poor families using 11 indicators and three classes. This study used sample data on poor families, 219 records in total. A system built in the Java programming language was compared with the results of the Weka software, with a classification accuracy of 93%. The classification results for the data on poor families were mapped by adding latitude-longitude data and a photograph of the house showing the condition of each poor family. Based on the results of mapping the classifications, using NBC can help the government of Kabupaten Bantul in examining the potential of poor people.
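
    A small sketch of the classification step described above, using a categorical Naive Bayes model on 11 coded indicators and three welfare classes. The paper's system is written in Java and uses real survey data; the synthetic data, class coding and scikit-learn model here are illustrative assumptions only.

```python
# Sketch: Naive Bayes classification of households into three welfare classes
# from 11 categorical indicators. Synthetic data; not the paper's Java system.
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_families, n_indicators = 219, 11
X = rng.integers(0, 3, size=(n_families, n_indicators))   # coded indicator levels
y = rng.integers(0, 3, size=n_families)                    # 0/1/2 = welfare class

nbc = CategoricalNB()
scores = cross_val_score(nbc, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```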

  7. Image replica detection based on support vector classifier

    Science.gov (United States)

    Maret, Y.; Dufaux, F.; Ebrahimi, T.

    2005-08-01

    In this paper, we propose a technique for image replica detection. By replica, we mean equivalent versions of a given reference image, e.g. after it has undergone operations such as compression, filtering or resizing. Applications of this technique include discovery of copyright infringement or detection of illicit content. The technique is based on the extraction of multiple features from an image, namely texture, color, and spatial distribution of colors. Similar features are then grouped together, and the similarity between two images is given by several partial distances. The decision function used to decide whether a test image is a replica of a given reference image is finally derived using a Support Vector Classifier (SVC). We show that this technique achieves good results on a large database of images. For instance, for a false negative rate of 5% the system yields a false positive rate of only 6 x 10^-5.

  8. Refining and classifying finite-time Lyapunov exponent ridges

    CERN Document Server

    Allshouse, Michael R

    2015-01-01

    While more rigorous and sophisticated methods for identifying Lagrangian based coherent structures exist, the finite-time Lyapunov exponent (FTLE) field remains a straightforward and popular method for gaining some insight into transport by complex, time-dependent two-dimensional flows. In light of its enduring appeal, and in support of good practice, we begin by investigating the effects of discretization and noise on two numerical approaches for calculating the FTLE field. A practical method to extract and refine FTLE ridges in two-dimensional flows, which builds on previous methods, is then presented. Seeking to better ascertain the role of an FTLE ridge in flow transport, we adapt an existing classification scheme and provide a thorough treatment of the challenges of classifying the types of deformation represented by an FTLE ridge. As a practical demonstration, the methods are applied to an ocean surface velocity field data set generated by a numerical model.
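
    For reference, the standard definition used when computing an FTLE field of this kind; the notation for the flow map is the conventional one and is stated here only as background.

```latex
% Let \Phi_{t_0}^{t_0+T}(x_0) be the flow map of the velocity field over [t_0, t_0+T].
% With the Cauchy-Green strain tensor
\[
  C(x_0)=\bigl(\nabla\Phi_{t_0}^{t_0+T}(x_0)\bigr)^{\!\top}\,\nabla\Phi_{t_0}^{t_0+T}(x_0),
\]
% the finite-time Lyapunov exponent is
\[
  \sigma_{t_0}^{T}(x_0)=\frac{1}{|T|}\,\ln\sqrt{\lambda_{\max}\bigl(C(x_0)\bigr)},
\]
% and FTLE ridges are the candidate transport barriers discussed above.
```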

  9. Classifying transient signals with nonlinear dynamic filter banks

    International Nuclear Information System (INIS)

    In recent years, several specific advances in the study of chaotic processes have been made which appear to have immediate applicability to signal processing. This paper describes two applications of one of these advances, nonlinear modeling, to signal detection and classification, in particular for short-lived or transient signals. The first method uses the coefficients from an adaptively fit model as a set of features for signal detection and classification. In the second method, a library of predictive nonlinear dynamic equations is used as a filter bank, and statistics on the prediction residuals are used to form feature vectors for input data segments. These feature vectors provide a mechanism for detecting and classifying model transients at signal-to-noise ratios as low as -10 dB, even when the generating dynamics of the transient signals are not present in the filter bank. The second method and some validating experiments are described in detail. copyright 1996 American Institute of Physics

  10. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确

    2004-01-01

    Due to the complicated background of objects and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extraction of the road network from high-resolution SAR images. Firstly, fuzzy C-means is used to classify the filtered SAR image in an unsupervised manner, and the road pixels are isolated from the image to simplify the extraction of the road network. Secondly, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces the extraction of the road network to a global search for optimal continuous curves that pass through some seed points. Finally, regarding the curves as individuals and coding a chromosome using an integer code of variance relative to coordinates, genetic operations are used to search for globally optimal roads. The experimental results show that the algorithm can effectively extract road networks from high-resolution SAR images.

  11. Early Detection of Breast Cancer using SVM Classifier Technique

    CERN Document Server

    Rejani, Y Ireaneus Anna

    2009-01-01

    This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on the solution of two problems. One is how to detect tumors as suspicious regions with a very weak contrast to their background, and the other is how to extract features which categorize tumors. The tumor detection method follows the scheme of (a) mammogram enhancement, (b) segmentation of the tumor area, (c) extraction of features from the segmented tumor area, and (d) the use of an SVM classifier. Enhancement can be defined as conversion of the image quality to a better and more understandable level. The mammogram enhancement procedure includes filtering, top-hat operation, and DWT. Then contrast stretching is used to increase the contrast of the image. The segmentation of mammogram images has been playing an important role in improving the detection and diagnosis of breast cancer. The most common segmentation method used is thresholding. The features are extracted from the segmented breast area. Next stage include,...

  12. A Speedy Cardiovascular Diseases Classifier Using Multiple Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Wah Ching Lee

    2015-01-01

    Full Text Available Each year, some 30 percent of global deaths are caused by cardiovascular diseases. This figure is worsening due to both the increasing elderly population and severe shortages of medical personnel. The development of a cardiovascular diseases classifier (CDC) for auto-diagnosis will help address this problem. Former CDCs did not achieve quick evaluation of cardiovascular diseases. In this letter, a new CDC to achieve speedy detection is investigated. This investigation incorporates analytic hierarchy process (AHP)-based multiple criteria decision analysis (MCDA) to develop feature vectors used by a Support Vector Machine. The MCDA facilitates the efficient assignment of appropriate weightings to potential patients, thus scaling down the number of features. Since the new CDC only adopts the most meaningful features for discrimination between healthy persons and cardiovascular disease patients, speedy detection of cardiovascular diseases has been successfully implemented.
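
    A sketch of the weighting step that AHP-based MCDA typically relies on: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The 3x3 comparison matrix is invented, and the random index value is a commonly tabulated figure; neither comes from the paper.

```python
# Sketch of the AHP weighting step used in MCDA pipelines of this kind: principal
# eigenvector of a pairwise comparison matrix plus a consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],          # invented pairwise importance judgements
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized criterion weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)          # consistency index
ri = 0.58                                # commonly tabulated random index for n = 3
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```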

  13. An Automated Acoustic System to Monitor and Classify Birds

    Directory of Open Access Journals (Sweden)

    Ho KC

    2006-01-01

    This paper presents a novel bird monitoring and recognition system for noisy environments. The project objective is to avoid bird strikes to aircraft. First, a cost-effective microphone dish concept (a microphone array with many concentric rings) is presented that can provide directional and accurate acquisition of bird sounds and can simultaneously pick up bird sounds from different directions. Second, direction-of-arrival (DOA) and beamforming algorithms have been developed for the circular array. Third, an efficient recognition algorithm is proposed which uses Gaussian mixture models (GMMs). The overall system is suitable for monitoring and recognition of a large number of birds. Fourth, a hardware prototype has been built, and initial experiments demonstrated that the array can acquire and classify birds accurately.
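    The GMM-based recognition step can be sketched with scikit-learn: one mixture model is fitted per species, and a clip is assigned to the species whose model gives the highest average log-likelihood over the clip's acoustic feature frames; the number of components, the covariance type and the feature frames themselves (e.g. MFCCs) are assumptions.

        from sklearn.mixture import GaussianMixture

        def train_species_models(frames_by_species, n_components=8):
            # Fit one GMM per species on its training feature frames (e.g. MFCC vectors).
            return {species: GaussianMixture(n_components=n_components,
                                             covariance_type="diag",
                                             random_state=0).fit(frames)
                    for species, frames in frames_by_species.items()}

        def classify_clip(models, clip_frames):
            # Pick the species whose GMM scores the clip highest (mean log-likelihood per frame).
            scores = {species: gmm.score(clip_frames) for species, gmm in models.items()}
            return max(scores, key=scores.get)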

  14. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    state-of-the-art method for cartilage segmentation using one stage nearest neighbour classifier. Our method achieved better results than the state-of-the-art method for tibial as well as femoral cartilage segmentation. The next main contribution of the thesis deals with learning features autonomously...... learning architecture that autonomously learns the features from the images is the main insight of this study. While training the convolutional neural networks for segmentation purposes, the commonly used cost function does not consider the labels of the neighbourhood pixels/voxels. We propose spatially......This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...

  15. Classifying orbits in the restricted three-body problem

    CERN Document Server

    Zotos, Euaggelos E

    2015-01-01

    The case of the planar circular restricted three-body problem is used as a test field in order to determine the character of the orbits of a small body which moves under the gravitational influence of the two heavy primary bodies. We conduct a thorough numerical analysis of the phase-space mixing by classifying initial conditions of orbits and distinguishing between three types of motion: (i) bounded, (ii) escape and (iii) collisional. The presented outcomes reveal the high complexity of this dynamical system. Furthermore, our numerical analysis shows a remarkable presence of fractal basin boundaries along all the escape regimes. Interpreting the collisional motion as leaking in the phase space, we related our results to both chaotic scattering and the theory of leaking Hamiltonian systems. We also determined the escape and collisional basins and computed the corresponding escape/collisional times. We hope our contribution will be useful for a further understanding of the escape and collisional mechanism of orbi...

  16. Intermediaries in Bredon (Co)homology and Classifying Spaces

    CERN Document Server

    Dembegioti, Fotini; Talelli, Olympia

    2011-01-01

    For certain contractible G-CW-complexes and F a family of subgroups of G, we construct a spectral sequence converging to the F-Bredon cohomology of G with E1-terms given by the F-Bredon cohomology of the stabilizer subgroups. As applications, we obtain several corollaries concerning the cohomological and geometric dimensions of the classifying space for the family F. We also introduce a hierarchically defined class of groups which contains all countable elementary amenable groups and countable linear groups of characteristic zero, and show that if a group G is in this class, then G has finite F-Bredon (co)homological dimension if and only if G has jump F-Bredon (co)homology.

  17. Handwritten Bangla Alphabet Recognition using an MLP Based Classifier

    CERN Document Server

    Basu, Subhadip; Sarkar, Ram; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak Kumar

    2012-01-01

    The work presented here involves the design of a Multi Layer Perceptron (MLP) based classifier for recognition of the handwritten Bangla alphabet using a 76 element feature set. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten characters of the Bangla alphabet includes 24 shadow features, 16 centroid features and 36 longest-run features. Recognition performances of the MLP designed to work with this feature set are experimentally observed as 86.46% and 75.05% on the samples of the training and the test sets respectively. The work has useful application in the development of a complete OCR system for handwritten Bangla text.
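    A minimal scikit-learn sketch of such a classifier is shown below; the hidden-layer size, activation and training settings are assumptions (the abstract does not report the MLP architecture), and X_train/X_test stand for the precomputed 76-element feature vectors.

        from sklearn.neural_network import MLPClassifier

        # X_train, X_test: arrays of shape (n_samples, 76) with shadow, centroid and
        # longest-run features (assumed precomputed); y_train, y_test: character labels.
        clf = MLPClassifier(hidden_layer_sizes=(100,), activation="logistic",
                            max_iter=500, random_state=0)
        clf.fit(X_train, y_train)
        print("training accuracy:", clf.score(X_train, y_train))
        print("test accuracy:", clf.score(X_test, y_test))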

  18. Building multiclass classifiers for remote homology detection and fold recognition

    Directory of Open Access Journals (Sweden)

    Karypis George

    2006-10-01

    Abstract Background Protein remote homology detection and fold recognition are central problems in computational biology. Supervised learning algorithms based on support vector machines are currently one of the most effective methods for solving these problems. These methods are primarily used to solve binary classification problems and have not been extensively used to solve the more general multiclass remote homology prediction and fold recognition problems. Results We present a comprehensive evaluation of a number of methods for building SVM-based multiclass classification schemes in the context of the SCOP protein classification. These methods include schemes that directly build an SVM-based multiclass model, schemes that employ a second-level learning approach to combine the predictions generated by a set of binary SVM-based classifiers, and schemes that build and combine binary classifiers for various levels of the SCOP hierarchy beyond those defining the target classes. Conclusion Analyzing the performance achieved by the different approaches on four different datasets, we show that most of the proposed multiclass SVM-based classification approaches are quite effective in solving the remote homology prediction and fold recognition problems, and that the schemes that use predictions from binary models constructed for ancestral categories within the SCOP hierarchy tend not only to lead to lower error rates but also to reduce the number of errors in which a superfamily is assigned to an entirely different fold and a fold is predicted as being from a different SCOP class. Our results also show that the limited size of the training data makes it hard to learn complex second-level models, and that models of moderate complexity lead to consistently better results.
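    Two of the scheme families evaluated here can be sketched generically with scikit-learn: a direct one-vs-rest multiclass SVM, and a second-level learner trained on the decision values of the binary SVMs; the linear kernel and the logistic-regression combiner are stand-ins for illustration, not the profile-based kernels or the SCOP-hierarchy schemes of the paper.

        from sklearn.svm import LinearSVC
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.linear_model import LogisticRegression

        # X_train, y_train, X_test: protein feature vectors and SCOP class labels (hypothetical).
        # Scheme 1: direct one-vs-rest multiclass SVM.
        direct = OneVsRestClassifier(LinearSVC(max_iter=5000)).fit(X_train, y_train)

        # Scheme 2: second-level learner combining the binary SVM decision values.
        binary = OneVsRestClassifier(LinearSVC(max_iter=5000)).fit(X_train, y_train)
        Z_train = binary.decision_function(X_train)           # one score per target class
        combiner = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
        y_pred = combiner.predict(binary.decision_function(X_test))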

  19. Evaluation of toxicity and removal of color in textile effluent treated with electron beam

    Energy Technology Data Exchange (ETDEWEB)

    Morais, Aline Viana de

    2015-07-01

    The textile industry is among the main economic activities in Brazil, being relevant in the number of jobs, the quantity and diversity of products, and especially the volume of water used in industrial processes and the effluent generated. These effluents are complex mixtures characterized by the presence of dyes, surfactants, metal-sequestering agents, salts and other chemicals potentially toxic to aquatic biota. Given the lack of adequate treatment for these wastes, new technologies stand out, in particular advanced oxidation processes such as ionizing radiation from electron beams. This study comprises the preparation of a standard textile effluent in a chemical laboratory and its treatment with an electron beam from an electron accelerator, in order to reduce the toxicity and the intense coloration resulting from the C.I. Blue 222 dye. The treatment reduced toxicity to the exposed organisms, with 34.55% efficiency for the microcrustacean Daphnia similis and 47.83% for the rotifer Brachionus plicatilis at a dose of 2.5 kGy. The bacterium Vibrio fischeri showed better results after treatment at a dose of 5 kGy, with 57.29% efficiency. Color reduction was greater than 90% at a dose of 2.5 kGy. The experiment also included some preliminary tests on the sensitivity of D. similis and V. fischeri to exposure to some of the products used in bleaching and dyeing, and two simulations of water reuse in new textile processing after treating the effluent with the electron beam. (author)

  20. Arsenic removal from water

    Science.gov (United States)

    Moore, Robert C.; Anderson, D. Richard

    2007-07-24

    Methods for removing arsenic from water by addition of inexpensive and commonly available magnesium oxide, magnesium hydroxide, calcium oxide, or calcium hydroxide to the water. The hydroxide has a strong chemical affinity for arsenic and rapidly adsorbs arsenic, even in the presence of carbonate in the water. Simple and commercially available mechanical methods for removal of magnesium hydroxide particles with adsorbed arsenic from drinking water can be used, including filtration, dissolved air flotation, vortex separation, or centrifugal separation. A method for continuous removal of arsenic from water is provided. Also provided is a method for concentrating arsenic in a water sample to facilitate quantification of arsenic, by means of magnesium or calcium hydroxide adsorption.

  1. Removing the remaining ridges in fingerprint segmentation

    Institute of Scientific and Technical Information of China (English)

    ZHU En; ZHANG Jian-ming; YIN Jian-ping; ZHANG Guo-min; HU Chun-feng

    2006-01-01

    Fingerprint segmentation is an important step in fingerprint recognition; it is usually aimed at identifying non-ridge regions and unrecoverable low-quality ridge regions and excluding them as background, so as to reduce the time spent on image processing and to avoid detecting false features. In both high- and low-quality ridge regions there are often remaining ridges, which are afterimages of a previously scanned finger and should be excluded from the foreground. However, existing segmentation methods generally do not take this case into consideration, and the remaining ridge regions are often falsely classified as foreground by the segmentation algorithm, producing spurious features and erroneously including unrecoverable regions in the foreground. This paper proposes a two-step fingerprint segmentation method aimed at removing the remaining ridge regions from the foreground. The non-ridge regions and unrecoverable low-quality ridge regions are removed as background in the first step, and the foreground produced by the first step is then further analyzed for possible removal of remaining ridge regions. The proposed method proved effective in avoiding the detection of false ridges and in improving minutiae detection.

  2. Electronics and electronic systems

    CERN Document Server

    Olsen, George H

    1987-01-01

    Electronics and Electronic Systems explores the significant developments in the field of electronics and electronic devices. This book is organized into three parts encompassing 11 chapters that discuss the fundamental circuit theory and the principles of analog and digital electronics. This book deals first with the passive components of electronic systems, such as resistors, capacitors, and inductors. These topics are followed by a discussion on the analysis of electronic circuits, which involves three ways, namely, the actual circuit, graphical techniques, and rule of thumb. The remaining p

  3. Using Machine Learning to classify the diffuse interstellar bands

    Science.gov (United States)

    Baron, Dalya; Poznanski, Dovi; Watson, Darach; Yao, Yushu; Cox, Nick L. J.; Prochaska, J. Xavier

    2015-07-01

    Using over a million and a half extragalactic spectra from the Sloan Digital Sky Survey we study the correlations of the diffuse interstellar bands (DIBs) in the Milky Way. We measure the correlation between DIB strength and dust extinction for 142 DIBs using 24 stacked spectra spanning a range of reddening E(B - V), and we use Machine Learning algorithms to divide the DIBs into spectroscopic families based on 250 stacked spectra. By removing the dust dependence, we study how DIBs follow their local environment. We thus obtain six groups of weak DIBs, four of which are tightly associated with C2 or CN absorption lines.
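    The grouping step can be sketched as follows: the dust dependence is removed by regressing each DIB's strength on E(B - V), and the DIBs are then clustered on the correlations of the residuals; linear regression and average-linkage hierarchical clustering are illustrative stand-ins for the Machine Learning algorithms actually used.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def remove_dust_dependence(strengths, ebv):
            # Regress each DIB's strength on E(B - V) and keep the residuals.
            strengths = np.asarray(strengths, dtype=float)    # shape (n_spectra, n_dibs)
            ebv = np.asarray(ebv, dtype=float)
            resid = np.empty_like(strengths)
            for j in range(strengths.shape[1]):
                slope, intercept = np.polyfit(ebv, strengths[:, j], 1)
                resid[:, j] = strengths[:, j] - (slope * ebv + intercept)
            return resid

        def group_dibs(resid, n_groups=6):
            # Cluster DIBs by the correlation of their dust-free residuals.
            corr = np.corrcoef(resid.T)
            dist = 1.0 - corr
            condensed = dist[np.triu_indices_from(dist, k=1)]
            Z = linkage(condensed, method="average")
            return fcluster(Z, t=n_groups, criterion="maxclust")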

  4. Contamination removal using various solvents and methodologies

    Science.gov (United States)

    Jeppsen, J. C.

    1989-01-01

    Critical and non-critical bonding surfaces must be kept free of contamination that may cause potential unbonds. For example, an aft-dome section of a redesigned solid rocket motor that had been contaminated with hydraulic oil did not appear to be sufficiently cleaned when inspected by the optically stimulated electron emission process (Con Scan) after it had been cleaned using a hand double wipe cleaning method. As a result, current and new cleaning methodologies as well as solvent capability in removing various contaminant materials were reviewed and testing was performed. Bonding studies were also done to verify that the cleaning methods used in removing contaminants provide an acceptable bonding surface. The removal of contaminants from a metal surface and the strength of subsequent bonds were tested using the Martin Marietta and double-wipe cleaning methods. Results are reported.

  5. New anaerobic process of nitrogen removal.

    Science.gov (United States)

    Kalyuzhnyi, S; Gladchenko, M; Mulder, A; Versprille, B

    2006-01-01

    This paper reports on successful laboratory testing of a new nitrogen removal process called DEAMOX (DEnitrifying AMmonium OXidation) for the treatment of strong nitrogenous wastewater such as baker's yeast effluent. The concept of this process combines the recently discovered ANAMMOX (ANaerobic AMMonium OXidation) reaction with autotrophic denitrifying conditions, using sulfide as an electron donor for the production of nitrite within an anaerobic biofilm. The achieved results, with a nitrogen loading rate higher than 1,000 mg/L/d and nitrogen removal of around 90%, look very promising because they exceed (by 9-18 times) the corresponding nitrogen removal rates of conventional activated sludge systems. The paper also describes some characteristics of the DEAMOX sludge, as well as the preliminary results of its microbiological characterization. PMID:17163025

  6. Automatic alkaloid removal system.

    Science.gov (United States)

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, purposely for removing alkaloid toxicity from Dioscorea hispida (DH) tubers. DH is a poisonous plant: scientific study has shown that its tubers contain a toxic alkaloid constituent, dioscorine, and the tubers can only be consumed after the poison is removed. In this experiment, the tubers need to be blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch the water pump ON, which creates a turbulent wave of water in the machine tank. The water stops automatically when the outlet solenoid valve is triggered. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. The controller then automatically triggers the inlet solenoid valve, and fresh water flows into the machine tank until it reaches the desired level, which is determined by an ultrasonic sensor. This process is repeated for 7 h, and a positive result was achieved that is significant according to several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity and fish survival rate or time. These parameters were near to or the same as those of the control water, and the assumption was made that the toxin is fully removed when the pH of the DH powder wash is near that of the control water. The pH of the control water is about 5.3, the water from this experimental process is about 6.0, and before running the machine the pH of the contaminated water is about 3.8, which is too acidic. This automated machine saves time in removing the toxicity from DH compared with the traditional method, while requiring less observation by the user. PMID:24783795

  7. Classifying transcription factor targets and discovering relevant biological features

    Directory of Open Access Journals (Sweden)

    DeLisi Charles

    2008-05-01

    Abstract Background An important goal in post-genomic research is discovering the network of interactions between transcription factors (TFs) and the genes they regulate. We have previously reported the development of a supervised-learning approach to TF target identification, and used it to predict targets of 104 transcription factors in yeast. We now include a new sequence conservation measure, expand our predictions to include 59 new TFs, introduce a web-server, and implement an improved ranking method to reveal the biological features contributing to regulation. The classifiers combine 8 genomic datasets covering a broad range of measurements including sequence conservation, sequence overrepresentation, gene expression, and DNA structural properties. Principal Findings (1) Application of the method yields an amplification of information about yeast regulators. The ratio of total targets to previously known targets is greater than 2 for 11 TFs, with several having larger gains: Ash1 (4), Ino2 (2.6), Yaf1 (2.4), and Yap6 (2.4). (2) Many predicted targets for TFs match well with the known biology of their regulators. As a case study we discuss the regulator Swi6, presenting evidence that it may be important in the DNA damage response, and that the previously uncharacterized gene YMR279C plays a role in DNA damage response and perhaps in cell-cycle progression. (3) A procedure based on recursive feature elimination is able to uncover, from the large initial data sets, those features that best distinguish targets for any TF, providing clues relevant to its biology. An analysis of Swi6 suggests a possible role in lipid metabolism, and more specifically in metabolism of ceramide, a bioactive lipid currently being investigated for anti-cancer properties. (4) An analysis of global network properties highlights the transcriptional network hubs; the factors which control the most genes and the genes which are bound by the largest set of regulators. Cell-cycle and
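    Point (3), the recursive-feature-elimination procedure, can be sketched with scikit-learn; the linear SVM used as the ranking estimator, the number of retained features and the variable names are assumptions for illustration.

        from sklearn.svm import LinearSVC
        from sklearn.feature_selection import RFE

        # X: genomic features (conservation, overrepresentation, expression, DNA structure);
        # y: 1 for known targets of a given TF, 0 otherwise; feature_names: column labels (hypothetical).
        selector = RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=20, step=0.1)
        selector.fit(X, y)
        top_features = [name for name, keep in zip(feature_names, selector.support_) if keep]
        print(top_features)                                  # features that best distinguish targets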

  8. Attribute measurement equipment for the verification of plutonium in classified forms for the Trilateral Initiative

    International Nuclear Information System (INIS)

    these storage containers, together with the requirement to validate the equipment with unclassified reference materials, have led to the design requirement that the attribute measurement equipment must be large and state-of-the-art. The experts have also determined that simultaneous measurement of attributes is desirable to reduce the amount of classified information that resides in the system at any time. This adds further complexity to the system. Certain ancillary equipment has been proposed to provide additional confidence in a Trilateral Initiative verification approach. In situ probes and a simulation/authentication tool have been proposed and are under development. In situ probes are simple gross radiation measurement devices that could be used to provide confidence that an item placed in a storage position has remained in storage. The authentication tool is an electronic pulse simulator that mimics the output of a radiation measurement device. With this tool, an inspector can exercise a system that has an information barrier that is independent of host-controlled reference materials. This tool has also been identified as potentially being very useful in training inspectors, in exercising electronics packages, and in applying safeguards. Two working prototypes of an attribute measurement system with an information barrier have been fabricated and demonstrated in the United States, and the Russian Federation has begun preliminary design work for a system that could be built in Russia. This paper will also describe these systems and give the status of current activities. (author)

  9. MISR Level 2 FIRSTLOOK TOA/Cloud Classifier parameters V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Cloud Classifiers, and Support Vector Machine...

  10. Gaussian and feed-forward neural network classifiers for shower recognition, generalization and parallel implementation

    International Nuclear Information System (INIS)

    The performance of Gaussian and feed-forward neural network classifiers is compared with respect to the recognition of energy deposition patterns in a calorimeter. Implementation aspects of these classifiers for a multi-processor architecture are discussed.
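    Such a comparison can be sketched with scikit-learn, using a quadratic discriminant (class-conditional Gaussian) model and a small feed-forward network as stand-ins; the input representation and the network size are assumptions.

        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        # X: flattened calorimeter energy-deposition patterns; y: particle class (hypothetical).
        classifiers = {
            "Gaussian": QuadraticDiscriminantAnalysis(),
            "feed-forward NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
        }
        for name, clf in classifiers.items():
            print(name, cross_val_score(clf, X, y, cv=5).mean())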

  11. 75 FR 51609 - Classified National Security Information Program for State, Local, Tribal, and Private Sector...

    Science.gov (United States)

    2010-08-23

    ... National Security Information Program for State, Local, Tribal, and Private Sector Entities By the... established a Classified National Security Information Program (Program) designed to safeguard and govern access to classified national security information shared by the Federal Government with State,...

  12. Automating the construction of scene classifiers for content-based video retrieval

    OpenAIRE

    Israël, Menno; Broek, van den, L.A.M.; Putten, van, B.; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classific...
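    The two-stage procedure can be sketched as follows: a first classifier labels small image patches, a frequency vector of the patch labels is built per image, and a second classifier maps that vector to a scene class; the k-NN and linear-SVM choices, the patch size and the variable names are assumptions.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import LinearSVC

        def patch_grid(img, size=32):
            # Cut an image into non-overlapping square patches, flattened to vectors.
            h, w = img.shape[:2]
            return [img[r:r + size, c:c + size].ravel()
                    for r in range(0, h - size + 1, size)
                    for c in range(0, w - size + 1, size)]

        def patch_histogram(img, patch_clf, n_patch_classes, size=32):
            # Stage 1: classify every patch; stage 2 input: frequency vector of patch labels.
            labels = patch_clf.predict(np.array(patch_grid(img, size)))
            return np.bincount(labels, minlength=n_patch_classes) / len(labels)

        # patch_clf = KNeighborsClassifier(5).fit(patch_X, patch_y)     # labeled example patches
        # H = np.array([patch_histogram(im, patch_clf, n_patch_classes) for im in images])
        # scene_clf = LinearSVC().fit(H, scene_labels)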

  13. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; Nosov, A.; Popov, A.

    2015-01-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain skin dete...

  14. Construction of Classifier Based on MPCA and QSA and Its Application on Classification of Pancreatic Diseases

    OpenAIRE

    Huiyan Jiang; Di Zhao; Tianjiao Feng; Shiyang Liao; Yenwei Chen

    2013-01-01

    A novel method is proposed to establish a classifier which can classify pancreatic images as normal or abnormal. First, the brightness feature is used to construct high-order tensors; then multilinear principal component analysis (MPCA) is used to extract the eigentensors; and finally the classifier is constructed based on a support vector machine (SVM), with the classifier parameters optimized using a quantum simulated annealing algorithm (QSA). In order to verify the effectiveness of th...

  15. A Classifier Fusion System with Verification Module for Improving Recognition Reliability

    OpenAIRE

    Zhang, Ping

    2010-01-01

    In this paper, we propose a novel classifier fusion system to combine the recognition results of an ANN classifier and a modified KNN classifier. The recognition results are verified by the recognition results of an SVM. As two entirely different classification techniques (image-based OCR and 1-D digital signal SVM classification) are applied to the system, experiments have demonstrated that the proposed classifier fusion system with an SVM verification module can significantly increase the sys...

  16. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

    Science.gov (United States)

    Anaya, Leticia H.

    2011-01-01

    In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…

  17. Classifying and explaining democracy in the Muslim world

    Directory of Open Access Journals (Sweden)

    Rohaizan Baharuddin

    2012-12-01

    The purpose of this study is to classify and explain democracies in the 47 Muslim countries between the years 1998 and 2008, using liberties and elections as independent variables. Focusing specifically on the context of the Muslim world, this study examines the performance of civil liberties and elections, the variant of democracy practised the most, and the elections, civil liberties and democratic transitions and patterns that followed. Based on quantitative data primarily collected from Freedom House, this study demonstrates the following aggregate findings: first, “not free not fair” elections, “limited” civil liberties and “Illiberal Partial Democracy” were the dominant features of elections, civil liberties and democracy practised in the Muslim world; second, a total of 413 Muslim regimes out of 470 (47 regimes x 10 years) remained the same as their democratic origin points, without any transitions to a better or worse level of democracy, throughout these 10 years; and third, a slow yet steady positive transition of both elections and civil liberties occurred in the Muslim world, with changes in the nature of elections becoming much more progressive compared to the civil liberties’ transitions.

  18. Addressing the Challenge of Defining Valid Proteomic Biomarkers and Classifiers

    LENUS (Irish Health Repository)

    Dakna, Mohammed

    2010-12-10

    Abstract Background The purpose of this manuscript is to provide, based on an extensive analysis of a proteomic data set, suggestions for proper statistical analysis for the discovery of sets of clinically relevant biomarkers. As a tractable example we define the measurable proteomic differences between apparently healthy adult males and females. We choose urine as the body fluid of interest and CE-MS, a thoroughly validated platform technology, allowing for routine analysis of a large number of samples. The second urine of the morning was collected from apparently healthy male and female volunteers (aged 21-40) in the course of the routine medical check-up before recruitment at the Hannover Medical School. Results We found that the Wilcoxon test is best suited for the definition of potential biomarkers. Adjustment for multiple testing is necessary. Sample size estimation can be performed based on a small number of observations via resampling from pilot data. Machine learning algorithms appear ideally suited to generate classifiers. Assessment of any results in an independent test-set is essential. Conclusions Valid proteomic biomarkers for diagnosis and prognosis can only be defined by applying proper statistical data mining procedures. In particular, a justification of the sample size should be part of the study design.
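    The recommended univariate step can be sketched as below, using the two-sample Wilcoxon rank-sum (Mann-Whitney U) test per peptide followed by a multiple-testing adjustment; the Benjamini-Hochberg FDR procedure and the variable names are assumptions for illustration.

        import numpy as np
        from scipy.stats import mannwhitneyu          # Wilcoxon rank-sum test for two independent groups
        from statsmodels.stats.multitest import multipletests

        # male, female: arrays of shape (n_samples, n_peptides) with CE-MS peptide intensities (hypothetical).
        pvals = np.array([mannwhitneyu(male[:, j], female[:, j]).pvalue
                          for j in range(male.shape[1])])
        reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
        candidate_biomarkers = np.where(reject)[0]     # peptides passing the adjusted threshold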

  19. A Novel Performance Metric for Building an Optimized Classifier

    Directory of Open Access Journals (Sweden)

    Mohammad Hossin

    2011-01-01

    Problem statement: Typically, the accuracy metric is often applied for optimizing heuristic or stochastic classification models. However, the use of the accuracy metric might lead the searching process to sub-optimal solutions due to its less discriminating values, and it is also not robust to changes of class distribution. Approach: To solve these detrimental effects, we propose a novel performance metric which combines the beneficial properties of the accuracy metric with the extended recall and precision metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naïve stochastic classification algorithm, the Monte Carlo Sampling (MCS) algorithm, trained with the OARP metric is able to obtain better predictive results than the one trained with the conventional accuracy metric. Additionally, the t-test analysis also shows a clear advantage of the MCS model trained with the OARP metric over the accuracy metric alone for all binary data sets. Conclusion: The experiments have proved that the OARP metric leads stochastic classifiers such as the MCS towards a better training model, which in turn will improve the predictive results of any heuristic or stochastic classification models.

  20. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images of urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unsatisfactory classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm was presented to learn spatially related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned by using the hierarchical learning structure, and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracies, especially for urban areas.
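    A toy PyTorch sketch of the multiscale idea is shown below: the same small convolutional network is applied to a pixel-centred patch at several pyramid levels and the pooled features are concatenated before classification; the layer sizes, the average-pooling pyramid and the scale factors are assumptions, not the MCNN architecture of the paper.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiScaleCNN(nn.Module):
            def __init__(self, in_bands, n_classes, scales=(1, 2, 4)):
                super().__init__()
                self.scales = scales
                self.conv = nn.Sequential(                   # shared feature extractor
                    nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.fc = nn.Linear(64 * len(scales), n_classes)

            def forward(self, x):                            # x: (batch, bands, H, W) patch per pixel
                feats = []
                for s in self.scales:
                    xs = F.avg_pool2d(x, s) if s > 1 else x  # coarser pyramid level
                    feats.append(self.conv(xs).flatten(1))
                return self.fc(torch.cat(feats, dim=1))

        # model = MultiScaleCNN(in_bands=103, n_classes=9)   # example band/class counts (assumed)
        # logits = model(torch.randn(8, 103, 16, 16))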