Sample records for classified removable electronic media

  1. Sixty Percent Conceptual Design Report: Enterprise Accountability System for Classified Removable Electronic Media

    Energy Technology Data Exchange (ETDEWEB)

    B. Gardiner; L. Graton; J. Longo; T. Marks, Jr.; B. Martinez; R. Strittmatter; C. Woods; J. Joshua


    Classified removable electronic media (CREM) are tracked in several different ways at the Laboratory. To ensure greater security for CREM, we are creating a single, Laboratory-wide system to track CREM. We are researching technology that can be used to electronically tag and detect CREM, designing a database to track the movement of CREM, and planning to test the system at several locations around the Laboratory. We focus on affixing "smart tags" to items we want to track and installing gates at pedestrian portals to detect the entry or exit of tagged items. By means of an enterprise database, the system will track the entry and exit of tagged items into and from CREM storage vaults, vault-type rooms, access corridors, or boundaries of secure areas, as well as the identity of the person carrying an item. We are also considering several tracking options that offer greater security, but at greater expense.
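
    The tag-portal-database architecture described above can be sketched as a minimal event log: a portal detector reads a smart tag and records who carried which item across which boundary. All class and field names below are illustrative assumptions, not taken from the report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class PortalEvent:
    tag_id: str        # ID of the smart tag affixed to the CREM item
    badge_id: str      # identity of the person carrying the item
    portal: str        # e.g. a vault door or secure-area boundary
    direction: str     # "entry" or "exit"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AccountabilityLog:
    """Toy stand-in for the enterprise tracking database."""

    def __init__(self) -> None:
        self._events: List[PortalEvent] = []

    def record(self, event: PortalEvent) -> None:
        self._events.append(event)

    def last_known_location(self, tag_id: str) -> str:
        """Return the last portal at which a tagged item was seen."""
        for ev in reversed(self._events):
            if ev.tag_id == tag_id:
                side = "inside" if ev.direction == "entry" else "outside"
                return f"{side} {ev.portal}"
        return "no record"

log = AccountabilityLog()
log.record(PortalEvent("TAG-001", "BADGE-42", "Vault A", "entry"))
log.record(PortalEvent("TAG-001", "BADGE-42", "Vault A", "exit"))
print(log.last_known_location("TAG-001"))
```

    A real deployment would of course add authentication, alarms on unauthorized exits, and durable storage; the sketch only shows the event-accounting core.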

  2. Electronic Commerce Removing Regulatory Impediments (United States)


    AD-A252 691, Electronic Commerce: Removing Regulatory Impediments. Daniel J. Drake; John A. Ciucci. Logistics Management Institute, 6400 Goldsboro Road, Bethesda, Maryland 20817-5886, July 1992. Executive Summary: Electronic Commerce techniques, such as electronic mail and electronic data interchange (EDI), enable Government agencies to conduct business without the

  3. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)


    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new feature-reduction method combining feature selection with feature extraction was proposed. The feature-selection stage applied eight information-theory-based algorithms and reduced the feature space to 41 dimensions. Kernel entropy component analysis was then introduced into the e-nose system as a feature-extraction step, further reducing the feature space to 12 dimensions. Classification of the Chinese liquors was performed with a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The multi-linear classifier achieved a classification rate of 97.22%, higher than both LDA and BP-ANN. Finally, classification of the liquors by raw material and geographical origin using the proposed multi-linear classifier gave rates of 98.75% and 100%, respectively.
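
    The two-stage pipeline above (relevance-based feature selection, then a bank of linear models) can be illustrated with a toy sketch. Everything here is an assumption for illustration: the data are synthetic, a simple correlation score stands in for the paper's information-theoretic criteria, and plain least squares stands in for the multi-linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 60 samples, 20 sensor features, 3 liquor classes.
X = rng.normal(size=(60, 20))
y = rng.integers(0, 3, size=60)
X[:, 0] += y          # make feature 0 informative
X[:, 1] -= 2.0 * y    # make feature 1 informative

# Feature selection: score each feature by |correlation| with the label
# (a crude stand-in for the information-theoretic algorithms in the paper).
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(scores)[::-1][:5]   # keep the 5 most relevant features
Xs = X[:, top]

# Multi-linear classifier: one least-squares regressor per class, trained
# on one-vs-rest targets; predict by argmax of the linear responses.
A = np.hstack([Xs, np.ones((Xs.shape[0], 1))])   # add a bias column
T = np.eye(3)[y]                                 # one-hot targets
W, *_ = np.linalg.lstsq(A, T, rcond=None)
pred = np.argmax(A @ W, axis=1)
print(f"training accuracy: {np.mean(pred == y):.2f}")
```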

  4. Removal of Vesicle Structures from Transmission Electron Microscope Images

    DEFF Research Database (Denmark)

    Jensen, Katrine Hommelhoff; Sigworth, Fred; Brandt, Sami Sebastian


    In this paper, we address the problem of imaging membrane proteins for single-particle cryo-electron microscopy reconstruction of the isolated protein structure. More precisely, we propose a method for learning and removing the interfering vesicle signals from the micrograph, prior to reconstruct...

  5. Ensemble Classifier Strategy Based on Transient Feature Fusion in Electronic Nose (United States)

    Bagheri, Mohammad Ali; Montazer, Gholam Ali


    In this paper, we test the performance of several ensembles of classifiers and each base learner has been trained on different types of extracted features. Experimental results show the potential benefits introduced by the usage of simple ensemble classification systems for the integration of different types of transient features.

  6. Terra MODIS Band 27 Electronic Crosstalk Effect and Its Removal (United States)

    Sun, Junqiang; Xiong, Xiaoxiong; Madhavan, Sriharsha; Wenny, Brian


    The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the NASA Earth Observing System (EOS). The first MODIS instrument was launched in December 1999 on board the Terra spacecraft. MODIS has 36 bands covering a wavelength range from 0.4 micron to 14.4 micron. Band 27 (6.72 micron) is a water vapor band designed to be insensitive to Earth surface features. In recent Earth View (EV) images of Terra band 27, surface feature contamination is clearly seen and striping has become very pronounced. In this paper, it is shown that band 27 is impacted by electronic crosstalk from bands 28-30. An algorithm using a linear approximation is developed to correct the crosstalk effect. The crosstalk coefficients are derived from Terra MODIS lunar observations; they show that the crosstalk is strongly detector dependent and that the crosstalk pattern has changed dramatically since launch. The crosstalk contributions to the instrument response of band 27 were positive early in the mission but became negative and much larger in magnitude at later stages for most detectors of the band. The algorithm is applied to both Black Body (BB) calibration and MODIS L1B products. With the crosstalk effect removed, the calibration coefficients of Terra MODIS band 27 derived from the BB show smaller detector differences. With the algorithm applied to MODIS L1B products, Earth surface features are largely removed and striping is substantially reduced in the images of the band. The approach developed here for removing the electronic crosstalk effect can be applied to other MODIS bands if similar crosstalk behaviors occur.
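
    The linear correction described above amounts to subtracting a weighted sum of the sending bands' signals from the band-27 signal. A minimal sketch, with invented coefficient values (the real ones are derived per detector from lunar observations):

```python
import numpy as np

def correct_band27(dn27, dn_senders, coeffs):
    """Subtract linear crosstalk from band 27.

    dn27: band-27 digital numbers; dn_senders: dict band -> array of the
    same shape; coeffs: dict band -> crosstalk coefficient for this detector.
    """
    correction = sum(coeffs[b] * dn_senders[b] for b in coeffs)
    return dn27 - correction

dn27 = np.array([100.0, 102.0, 98.0])
senders = {28: np.array([50.0, 52.0, 49.0]),
           29: np.array([40.0, 41.0, 39.0]),
           30: np.array([30.0, 30.0, 31.0])}
c = {28: 0.02, 29: -0.01, 30: 0.005}   # illustrative per-detector coefficients
print(correct_band27(dn27, senders, c))
```

    Note that a coefficient can be negative, matching the report's observation that crosstalk contributions changed sign over the mission.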

  7. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie


    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics… and integration possible, the field of molecular biology seems to be overwhelmingly homogeneous, and in need of heterogeneity and conflict to add drive and momentum to the work being carried out. The paper is based on observations of daily life in a molecular microbiology laboratory at the Technical University…

  8. Classified Linear Regression Based Landsat Image Cloud Removal Method (分类线性回归的Landsat影像去云方法)

    Institute of Scientific and Technical Information of China (English)

    吴炜; 骆剑承; 沈占锋; 王卫红


    An approach for cloud removal based on linear regression after image classification is proposed in this article. First, the clouds in the remote sensing image to be processed and in its reference image are detected, and two cloud masks are built. Then, ISODATA classification is applied to the masked reference image. Next, the masked part of the contaminated image is assigned to the existing clusters of the reference image using the minimum-distance method. Finally, the digital numbers of the cloudy areas of the contaminated image are replaced with values predicted from the reference image, using linear relationships determined per cluster between the reference image and the corresponding clear areas of the contaminated image, matched by pixel location. The algorithm is programmed to automatically detect and remove cloud areas in Landsat images. The accuracy of cloud detection and of the predicted values under cloud cover is evaluated. Results show that the proposed method is effective, with prediction accuracy considerably higher than traditional methods.

  9. Evaluation of sustainable electron donors for nitrate removal in different water media. (United States)

    Fowdar, Harsha S; Hatt, Belinda E; Breen, Peter; Cook, Perran L M; Deletic, Ana


    An external electron donor is usually included in wastewater and groundwater treatment systems to enhance nitrate removal through denitrification. The choice of electron donor is critical for both satisfactory denitrification rates and sustainable long-term performance. Electron donors that are waste products are preferred to pure organic chemicals. Different electron donors have been used to treat different water types, and little is known as to whether any electron donors are suitable for multiple applications. Seven different carbon-rich waste products, including liquid and solid electron donors, were studied in comparison to pure acetate. Batch-scale tests were used to measure their ability to reduce nitrate concentrations in a pure nutrient solution, light greywater, secondary-treated wastewater and tertiary-treated wastewater. The tested electron donors removed oxidised nitrogen (NOx) at varying rates, ranging from 48 mg N/L/d (acetate) to 0.3 mg N/L/d (hardwood). The extent of transient nitrite accumulation also varied across the electron donors. The different water types influenced NOx removal rates, to an extent dependent on the type of electron donor. Overall, the highest rates were recorded in light greywater, followed by the pure nutrient solution and the two partially treated wastewaters. Cotton wool and rice hulls were found to be promising electron donors, with good NOx removal rates, lower leachable nutrients, and the least variation in performance across water types.

  10. Effect of cathode electron acceptors on simultaneous anaerobic sulfide and nitrate removal in microbial fuel cell. (United States)

    Cai, Jing; Zheng, Ping; Mahmood, Qaisar


    The current investigation reports the effect of cathode electron acceptors on simultaneous sulfide and nitrate removal in two-chamber microbial fuel cells (MFCs). Potassium permanganate and potassium ferricyanide, two common cathode electron acceptors, were evaluated for substrate removal and electricity generation. The abiotic MFCs produced electricity through spontaneous electrochemical oxidation of sulfide. In comparison with the abiotic MFC, the biotic MFC showed better ability for simultaneous nitrate and sulfide removal along with electricity generation. At an external resistance of 1,000 Ω, both MFCs showed good capacities for substrate removal, with nitrogen and sulfate as the main end products. The steady voltage with potassium permanganate electrodes was nearly twice that with potassium ferricyanide. Cyclic voltammetry curves confirmed that potassium permanganate had higher catalytic activity than potassium ferricyanide. Potassium permanganate may therefore be a suitable cathode electron acceptor for enhanced electricity generation during simultaneous treatment of sulfide and nitrate in MFCs.

  11. 76 FR 27606 - Technical Corrections To Remove Obsolete References to Non-Automated Carriers From Electronic... (United States)


    ... amends the U.S. Customs and Border Protection (CBP) regulations concerning the mandatory electronic... in the first sentence the words ``Customs Form 1302 or a Customs-approved electronic equivalent'' and... SECURITY Customs and Border Protection 19 CFR Part 4 Technical Corrections To Remove Obsolete References...

  12. Characterization of phosphorus removal bacteria in (AO)2 SBR system by using different electron acceptors

    Institute of Scientific and Technical Information of China (English)

    JIANG Yi-feng; WANG Lin; YU Ying; WANG Bao-zhen; LIU Shuo; SHEN Zheng


    Characteristics of phosphorus removal bacteria were investigated using three different electron acceptors, along with the positive role of nitrite in the phosphorus removal process. An (AO)2 SBR (anaerobic-aerobic-anoxic-aerobic sequencing batch reactor) was employed to enrich denitrifying phosphorus removal bacteria for simultaneous phosphorus and nitrogen removal via anoxic phosphorus uptake. Ammonium oxidation was controlled at the first phase of the nitrification process. Nitrite-inhibition batch tests illustrated that nitrite did not inhibit the phosphorus uptake process but served as an alternative electron acceptor to nitrate and oxygen when its concentration stayed below the inhibition level of 40 mg NO2-N/L. This implies that in addition to the two well-accepted groups of phosphorus removal bacteria (P1, which can use only oxygen as electron acceptor, and P2, which can use both oxygen and nitrate), a new group, P3, able to use oxygen, nitrate and nitrite as electron acceptors for phosphorus uptake, was identified in the test system. To characterize the (AO)2 SBR sludge further, the relative populations of the different bacteria in this system, and in an A/O SBR sludge (the seed sludge), were estimated by phosphorus-uptake batch tests with oxygen, nitrate or nitrite as electron acceptor. The results demonstrated that the phosphorus removal capability of the (AO)2 SBR sludge degraded slightly after the A/O sludge was cultivated in (AO)2 mode over a long period. However, the relative populations of the three types of bacteria showed that the denitrifying phosphorus removal bacteria (P2 and P3) were significantly enriched, implying that aeration energy and COD consumption could, in theory, be reduced.
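
    The three batch tests described above can separate the groups with simple bookkeeping: P1 responds only to oxygen, P2 to oxygen and nitrate, P3 to all three acceptors. If each group contributes additively to the measured uptake, the group activities follow from a triangular linear system. The uptake rates below are invented for illustration, not taken from the study.

```python
import numpy as np

uptake = {"O2": 10.0, "NO3": 6.0, "NO2": 2.5}   # mg P/L/h, hypothetical

# O2 test measures P1+P2+P3, NO3 test measures P2+P3, NO2 test measures P3.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
b = np.array([uptake["O2"], uptake["NO3"], uptake["NO2"]])
p1, p2, p3 = np.linalg.solve(A, b)
print(f"P1={p1:.1f}, P2={p2:.1f}, P3={p3:.1f}")  # P1=4.0, P2=3.5, P3=2.5
```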

  13. Influence of oxygen and water vapor on removal of sulfur compounds by electron attachment

    Energy Technology Data Exchange (ETDEWEB)

    Tamon, Hajime; Sano, Noriaki; Okazaki, Morio [Kyoto Univ. (Japan). Dept. of Chemical Engineering


    When an electron collides with a gas molecule, a negative ion is produced with a probability that depends on the electron energy, the structure of the gas molecule, and its electron affinity. This reaction is called electron attachment. In previous articles, a novel gas purification method based on the high selectivity of electron attachment has been proposed. In the proposed gas-purification principle, the gas impurities are ionized by colliding with electrons produced in a corona discharge between a wire cathode and a cylindrical anode. The negative ions formed by electron attachment drift to the anode. On the basis of removing the negative ions at the anode, two types of reactors, a deposition-type reactor and a sweep-out-type reactor, have been proposed. The authors have conducted the removal of seven sulfur compounds [SF{sub 6}, H{sub 2}S, CH{sub 3}SH, (CH{sub 3}){sub 2}S, CS{sub 2}, COS and SO{sub 2}] from nitrogen by the deposition-type reactor, and of dilute iodine and oxygen by the sweep-out-type reactor. Since oxygen and water vapor coexist in the actual process, it is necessary to examine their influence on the removal efficiency for practical application. In this article, the authors experimentally study the influence of coexisting oxygen and water vapor on the removal of six sulfur compounds [H{sub 2}S, CH{sub 3}SH, (CH{sub 3}){sub 2}S, CS{sub 2}, COS, and SO{sub 2}] using the deposition-type reactor. They also discuss the removal mechanism of the sulfur compounds in air by electron attachment.

  14. Single and multiple electron removal and fragmentation in collisions of protons with water molecules (United States)

    Gulyás, L.; Egri, S.; Ghavaminia, H.; Igarashi, A.


    Single and multiple electron removal processes (capture and ionization) in proton-H2O collisions have been investigated by applying the continuum-distorted-wave eikonal-initial-state model within the framework of the independent-electron approach. Probabilities and cross sections for electron capture are derived from the same quantities evaluated for ionization, using the continuity of transition quantities across the ionization threshold. Dissociation and fragmentation cross sections for the H2O^q+ (q = 1-3) ions have been evaluated by considering branching ratios that include the effect of multiple electron removal transitions. The results are compared with experimental and other theoretical data in the range of impact energies from 30 keV to 5 MeV. Generally, the evaluated cross sections and fragmentation yields show good agreement with experiments at impact energies above 100-150 keV.
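
    The branching-ratio bookkeeping described above turns charge-state cross sections into fragment yields by weighting each H2O^q+ channel with its fragmentation fractions. All numbers below are invented for illustration, not the paper's values.

```python
# sigma[q]: net q-electron-removal cross section (arbitrary units)
sigma = {1: 10.0, 2: 1.0, 3: 0.1}

# branching[q][ion]: fraction of H2O^q+ ions yielding each fragment ion
branching = {
    1: {"H2O+": 0.7, "OH+": 0.2, "H+": 0.06, "O+": 0.04},
    2: {"OH+": 0.1, "H+": 0.6, "O+": 0.3},
    3: {"H+": 0.7, "O+": 0.3},
}

# Fragment yield = sum over charge states of (branching fraction x cross section).
yields = {}
for q, frac in branching.items():
    for ion, f in frac.items():
        yields[ion] = yields.get(ion, 0.0) + f * sigma[q]

for ion in ("H2O+", "OH+", "H+", "O+"):
    print(f"{ion}: {yields[ion]:.2f}")
```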

  15. Experimental study of electron beam induced removal of H/sub 2/S from geothermal fluids

    Energy Technology Data Exchange (ETDEWEB)

    Helfritch, D.J.; Singhvi, R.; Evans, R.D.; Reynolds, W.E.


    The treatment of geothermal steam by electron beam irradiation is a potential alternative method of H/sub 2/S removal which can be applied upstream or downstream and has no chemical requirements. The experimental work reported here examines the effectiveness of electron beam treatment of geothermal fluids. These fluids are produced by combining the constituents in a heated cell, which contains an electron beam transparent window. Irradiation of the contents and subsequent chemical analysis allows an evaluation of effectiveness. These results are used for a commercial feasibility assessment.

  16. Evaluation of the use of electronic health data to classify four-year mortality risk for older adults undergoing screening colonoscopies. (United States)

    Synnestvedt, Marie B; Weiner, Mark G


    Current cancer screening recommendations often apply coarse age cutoffs for screening requirements without regard to predicted life expectancy. Using these cutoffs, healthier older patients may be under-screened, and sicker younger patients may be screened too often. Mortality risk classification using EHR data could be used to tailor screening reminders to physicians in ways that better align screening recommendations with patients who are more likely to live long enough to benefit from early detection. We have evaluated the performance of an existing prognostic index for 4-year mortality using data readily available in the electronic health record (EHR), and investigated the effect of the index in retrospective cohorts of adults age 65 and older undergoing screening colonoscopy. Risk scores in this adaptation of a four-year prognostic index were found to be associated with actual death rates and consistent with mortality rates from a national sample. Our results demonstrate that data extracted from electronic health records can be used to classify mortality risk. With improvements, including extension to a 5-year mortality model with inclusion of additional variables and extension of variable definitions, informatics methods to implement mortality models may prove to be clinically useful in tailoring screening guidelines.
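
    The point-based flavor of such a prognostic index can be sketched as follows. The variables, point values, and risk bands below are invented for illustration; the actual index adapted in the study defines its own items and weights.

```python
def mortality_risk_points(patient: dict) -> int:
    """Sum hypothetical points for risk factors found in EHR data."""
    points = 0
    points += 2 if patient.get("age", 0) >= 75 else 0
    points += 2 if patient.get("diabetes") else 0
    points += 2 if patient.get("heart_failure") else 0
    points += 1 if patient.get("smoker") else 0
    points += 2 if patient.get("difficulty_walking") else 0
    return points

def risk_band(points: int) -> str:
    """Map a score to a screening-relevant band (hypothetical cut points)."""
    if points <= 2:
        return "low"
    if points <= 5:
        return "intermediate"
    return "high"

p = {"age": 78, "diabetes": True, "smoker": False}
score = mortality_risk_points(p)
print(score, risk_band(score))  # 4 intermediate
```

    A screening-reminder system could then suppress or tailor colonoscopy reminders for patients whose band suggests limited benefit from early detection.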

  17. Pseudomonas pellicle in disinfectant testing: electron microscopy, pellicle removal, and effect on test results. (United States)

    Cole, E C; Rutala, W A; Carson, J L; Alfano, E M


    Pseudomonas aeruginosa ATCC 15442 is a required organism in the Association of Official Analytical Chemists use-dilution method for disinfectant efficacy testing. When grown in a liquid medium, P. aeruginosa produces a dense mat or pellicle at the broth/air interface. The purpose of this investigation was to examine the pellicle by scanning electron microscopy, to evaluate three pellicle removal methods, and to determine the effect of pellicle fragments on disinfectant efficacy test results. The efficacies of three methods of pellicle removal (decanting, vacuum suction, and filtration) were assessed by quantifying cell numbers on penicylinders. The Association of Official Analytical Chemists use-dilution method was used to determine whether pellicle fragments in the tubes used to inoculate penicylinders affected test results. Scanning electron micrographs showed the pellicle to be a dense mass of intact, interlacing cells at least 10 microns thick. No significant differences among pellicle removal methods were observed, and the presence of pellicle fragments usually increased the number of positive tubes in the use-dilution method significantly. PMID: 2497711

  18. Positive role of nitrite as electron acceptor on anoxic denitrifying phosphorus removal process

    Institute of Scientific and Technical Information of China (English)

    HUANG RongXin; LI Dong; LI XiangKun; BAO LinLin; JIANG AnXi; ZHANG Jie


    The literature indicates that nitrite, as an electron acceptor, can be inhibitory or toxic in the denitrifying phosphorus removal process. Batch tests were used to investigate this inhibitory effect under anoxic conditions. The inoculated activated sludge was taken from a continuous double-sludge denitrifying phosphorus and nitrogen removal system, and nitrite was added at the anoxic stage, either as a single injection or as sequencing batch injections. The results indicated that nitrite concentrations above 30 mg/L severely inhibited anoxic phosphate uptake, with the threshold inhibitory concentration depending on the characteristics of the activated sludge and the operating conditions. Below that threshold, nitrite was not detrimental to anoxic phosphorus uptake and acted as a good electron acceptor for the phosphate-accumulating organisms, with positive effects observed throughout denitrifying biological phosphorus removal. The use of nitrite as an electron acceptor may thus provide a new, feasible route for denitrifying phosphorus removal.

  19. Experimental facility for investigation of gaseous pollutants removal process stimulated by electron beam and microwave energy

    Energy Technology Data Exchange (ETDEWEB)

    Zimek, Z.; Chmielewski, A.G.; Bulka, S.; Roman, K.; Licki, J. [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)


    A laboratory unit for the investigation of removal of toxic gases from flue gases, based on an ILU 6 accelerator, has been built at the Institute of Nuclear Chemistry and Technology. The installation was provided with independent pulsed and continuous wave (c.w.) microwave generators to create electrical discharge, and another pulsed microwave generator for plasma diagnostics. This makes it possible to investigate a combined removal process based on the simultaneous use of the electron beam and streams of microwave energy in one reaction vessel. Two heating furnaces, each a water-tube boiler with 100 kW thermal power, were applied for the production of combustion gas with flow rates of 5-400 Nm{sup 3}/h. Proper composition of the flue gas was obtained by introducing components such as SO{sub 2}, NO and NH{sub 3} into the gas stream. The installation consists of: an inlet system (two boilers - house heating furnace, boiler pressure regulator, SO{sub 2}, NO and NH{sub 3} dosage system, analytical equipment); a reaction vessel where the electron beam from the ILU 6 accelerator and microwave streams from the pulse and c.w. generators can be introduced simultaneously or separately, and where a plasma-diagnostic pulsed microwave stream can be applied; and an outlet system (retention chamber, filtration unit, fan, off-take duct of gas, analytical equipment). The experiments have demonstrated that it is possible to investigate the removal process in the presence of NH{sub 3} by separate or simultaneous application of the electron beam and of microwave energy streams under stable experimental conditions. (author). 15 refs, 26 figs, 5 tabs.

  20. Removal of brominated flame retardant from electrical and electronic waste plastic by solvothermal technique

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Cong-Cong; Zhang, Fu-Shen [Research Center For Eco-Environmental Sciences, Chinese Academy of Sciences, 18 Shuangqing Road, Beijing 100085 (China)]


    Highlights: • A process for removal of brominated flame retardants (BFRs) from plastic was established. • The plastic became bromine-free with its structure maintained after the treatment. • BFRs transferred into the alcohol solvent were easily debrominated by metallic copper. - Abstract: Brominated flame retardants (BFRs) in electrical and electronic (E and E) waste plastic are toxic, bioaccumulative and recalcitrant. In the present study, tetrabromobisphenol A (TBBPA) contained in this type of plastic was subjected to solvothermal treatment so as to obtain bromine-free plastic. Methanol, ethanol and isopropanol were examined as solvents, and methanol was found to be optimal for TBBPA removal. The optimum temperature, time and liquid-to-solid ratio for solvothermal removal of TBBPA were 90 °C, 2 h and 15:1, respectively. After treatment with the various alcohol solvents, TBBPA was transferred into the solvent, and the bromine in the extract was debrominated, catalyzed by metallic copper. Bisphenol A and cuprous bromide were the main products after debromination. The morphology and FTIR properties of the plastic were generally unchanged after the solvothermal treatment, indicating that the structure of the plastic was maintained through the process. This work provides a clean and applicable process for disposal of BFRs-containing plastic.

  1. The Effect of Fragaria vesca Extract on Smear Layer Removal: A Scanning Electron Microscopic Evaluation (United States)

    Davoudi, Amin; Razavi, Sayed Alireza; Mosaddeghmehrjardi, Mohammad Hossein; Tabrizizadeh, Mehdi


    Introduction: Successful endodontic treatment depends on elimination of the microorganisms through chemomechanical debridement. The aim of this in vitro study was to evaluate the effectiveness of Fragaria vesca (wild strawberry) extract (FVE) on the removal of smear layer (SL). Methods and Materials: In this analytical-observational study, 40 extracted mandibular and maxillary human teeth were selected. After canal preparation with standard step-back technique, the teeth were randomly divided into 4 groups according to the irrigation solution: saline (negative control), 5.25% NaOCl+EDTA (positive control), FVE and FVE+EDTA. The teeth were split longitudinally so that scanning electron microscopy (SEM) photomicrographs could be taken to evaluate the amount of remnant SL in coronal, middle and apical thirds. The data were analyzed statistically by the Kruskal-Wallis and Mann Whitney U tests and the level of significance was set at 0.05. Results: Significant differences were found among the groups (P<0.001). The use of NaOCl+EDTA was the most effective regimen for removing the SL followed by FVE+EDTA. FVE alone was significantly more effective than saline (P<0.001). Conclusion: FVE with and without EDTA could effectively remove the smear layer; however, compared to NaOCl group it was less effective. PMID:26526069

  2. Removal of nonylphenol from industrial sludge by using an electron beam (United States)

    Choi, Jang-Seung; Park, Jun-Hyun; Kim, Yuri; Kim, JinKyu; Jung, SeungTae; Han, Bumsoo; Alkhuraiji, Turki S.


    Endocrine disrupting chemicals (EDCs) and potential EDCs are mostly man-made, found in various materials such as pesticides, additives or contaminants in food, and personal care products. EDCs have been suspected to be associated with altered reproductive function in males and females, increased incidence of breast cancer, abnormal growth patterns and neuro-developmental delays in children, and changes in immune function. A number of processes have been investigated for their potential to remove endocrine disrupters: ferric chloride coagulation, powdered activated carbon, magnetic ion exchange combined with microfiltration or ultrafiltration, as well as nanofiltration and reverse osmosis. They show good removal of EDCs in aqueous solution, but poor efficiency when the EDCs are in sludge. High-energy ionizing radiation can remove EDCs with a very high degree of reliability in a clean and efficient manner. Ionizing radiation interacts with EDCs both directly and indirectly. In direct interaction, the structure of the EDC is destroyed or changed. In indirect interaction, radiolysis products of water form highly reactive intermediates which then react with the target molecules, culminating in structural changes. To confirm radiation reduction of EDCs in industrial sludge, a pilot-scale electron beam experiment at doses up to 50 kGy was conducted with samples from the textile dyeing industries. The results showed over 90% reduction of nonylphenol (NP) at absorbed doses of around 10 kGy.

  3. Single- and Multiple-Electron Removal Processes in Proton-Water Vapor Collisions (United States)

    Murakami, Mitsuko; Kirchner, Tom; Horbatsch, Marko; Lüdde, Hans Jürgen


    Charge-state correlated cross sections for single- and multiple-electron removal processes due to capture and ionization in proton-H2O collisions are calculated by using the non-perturbative basis generator method adapted for ion-molecule collisions [1]. Orbital-specific cross sections for vacancy production are evaluated using this method to predict the yields of charged fragments (H2O^+, OH^+, H^+, O^+) according to branching ratios known to be valid at high impact energies. At intermediate and low energies, we obtain fragmentation results on the basis of predicted multi-electron removal cross sections, and explain most of the available experimental data [2]. The cross sections for charge transfer and for ionization are also compared with recent multi-center classical-trajectory Monte Carlo calculations [3] for impact energies from 20 keV to several MeV. [1] H.J. Lüdde et al., Phys. Rev. A 80, 060702(R) (2009). [2] M. Murakami et al., to be submitted to Phys. Rev. A (2012). [3] C. Illescas et al., Phys. Rev. A 83, 052704 (2011).

  4. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.


    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  5. Scanning Electron Microscopic Evaluation of Smear Layer Removal Using Isolated or Interweaving EDTA with Sodium Hypochlorite (United States)

    da Silva Beraldo, Ângelo José; Silva, Rogério Vieira; da Gama Antunes, Alberto Nogueira; Silveira, Frank Ferreira; Nunes, Eduardo


    Introduction: The aim of this study was to verify the effect of alternating 2.5% sodium hypochlorite (NaOCl) and 17% ethylenediaminetetraacetic acid (EDTA) on the smear layer removal from root canal surfaces. Methods and Materials: A total of 15 single-rooted human teeth, instrumented with ProTaper files, were randomly distributed in 3 groups. In group 1 (n=7) the canals were irrigated with 1 mL of 2.5% NaOCl between files and final irrigation was done with 1 mL of 2.5% NaOCl, followed by 1 mL of 17% EDTA, for a period of 15 sec, with new irrigation of 1 mL of 2.5% NaOCl at each change of files. In group 3 (control group) (n=1), saline solution was used. All samples were cleaved into two sections, metalized and analyzed under scanning electron microscopy (SEM). The presence or absence of smear layer in the cervical, middle and apical thirds was evaluated, with scores varying from 1 to 3. The data were submitted to the nonparametric Mann-Whitney U test. The level of significance was set at 0.05. Results: It was observed that there was a greater discrepancy between groups with respect to the apical third. In the other areas there was a greater similarity between the scores attributed to the groups. There was a statistically significant difference between the groups only in the apical third, where group 1 presented the higher median (P<0.05). Conclusion: The alternating use of EDTA during instrumentation with NaOCl was the most effective irrigation method to remove the apical smear layer. Both forms of irrigation were effective on removal of the smear layer in the coronal and middle thirds of the canals. PMID:28179925
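The between-group score comparison described in this record can be sketched with SciPy's Mann-Whitney U test. The score lists below are hypothetical stand-ins for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical smear-layer scores for the apical third of two irrigation
# groups (scores 1-3, higher = better removal); NOT the study's data.
group1_apical = [3, 3, 2, 3, 2, 3, 3]  # alternating EDTA/NaOCl (n=7)
group2_apical = [2, 1, 2, 2, 1, 2, 2]  # hypothetical comparison group (n=7)

# Nonparametric two-sided comparison at the 0.05 significance level,
# as in the study.
stat, p_value = mannwhitneyu(group1_apical, group2_apical,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between groups is significant at the 0.05 level")
```

With ties present, SciPy falls back to the tie-corrected normal approximation; for ordinal score data like this, that is the usual choice.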

  6. Removal of brominated flame retardant from electrical and electronic waste plastic by solvothermal technique. (United States)

    Zhang, Cong-Cong; Zhang, Fu-Shen


    Brominated flame retardants (BFRs) in electrical and electronic (E&E) waste plastic are toxic, bioaccumulative and recalcitrant. In the present study, tetrabromobisphenol A (TBBPA) contained in this type of plastic was tentatively subjected to solvothermal treatment so as to obtain bromine-free plastic. Methanol, ethanol and isopropanol were examined as solvents for solvothermal treatment and it was found that methanol was the optimal solvent for TBBPA removal. The optimum temperature, time and liquid-to-solid ratio for solvothermal treatment to remove TBBPA were 90 °C, 2 h and 15:1, respectively. After the treatment with various alcohol solvents, it was found that TBBPA was finally transferred into the solvents and debrominated in the extract, catalyzed by metallic copper. Bisphenol A and cuprous bromide were the main products after debromination. The morphology and FTIR properties of the plastic were generally unchanged after the solvothermal treatment, indicating that the structure of the plastic was maintained after the process. This work provides a clean and applicable process for BFRs-containing plastic disposal.

  7. Evaluation of sustained release polylactate electron donors for removal of hexavalent chromium from contaminated groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, E.L.; Joyner, D. C.; Faybishenko, B.; Conrad, M. E.; Rios-Velazquez, C.; Mork, B.; Willet, A.; Koenigsberg, S.; Herman, D.; Firestone, M. K.; Hazen, T. C.; Malave, Josue; Martinez, Ramon


    To evaluate the efficacy of bioimmobilization of Cr(VI) in groundwater at the Department of Energy Hanford site, we conducted a series of microcosm experiments using a range of commercial electron donors with varying degrees of lactate polymerization (polylactate). These experiments were conducted using Hanford Formation sediments (coarse sand and gravel) immersed in Hanford groundwater, which were amended with Cr(VI) and several types of lactate-based electron donors (Hydrogen Release Compound, HRC; primer-HRC, pHRC; extended release HRC) and the polylactate-cysteine form (Metal Remediation Compound, MRC). The results showed that polylactate compounds stimulated an increase in bacterial biomass and activity to a greater extent than sodium lactate when applied at equivalent carbon concentrations. At the same time, concentrations of headspace hydrogen and methane increased and correlated with changes in the microbial community structure. Enrichment of Pseudomonas spp. occurred with all lactate additions, and enrichment of sulfate-reducing Desulfosporosinus spp. occurred with almost complete sulfate reduction. The results of these experiments demonstrate that amendment with the pHRC and MRC forms results in effective removal of Cr(VI) from solution, most likely by both direct (enzymatic) and indirect (microbially generated reductant) mechanisms.

  8. Quantum-mechanical calculation of multiple electron removal and fragmentation cross sections in He+-H2O collisions (United States)

    Murakami, Mitsuko; Kirchner, Tom; Horbatsch, Marko; Lüdde, Hans Jürgen


    Electron removal and fragmentation cross sections are calculated for He+(1s)-H2O collisions at impact energies from 20 keV/amu to several MeV/amu by using the nonperturbative basis generator method for ion-molecule collisions. Previous work for proton impact is extended to deal with the dressed projectile in the present case. The effects from the active projectile electron are taken into account by applying the same single-particle Hamiltonian to all electrons and by using the inclusive-probability formalism in the final-state analysis. Fragment-ion yields are evaluated from the single-, double-, and triple-electron removal cross sections, and the results are compared with the available experimental data. Very reasonable agreement is obtained for fragmentation caused by direct ionization, while some discrepancies remain in the capture and loss data.

  9. Enhancing the Electron Transfer Capacity and Subsequent Color Removal in Bioreactors by Applying Thermophilic Anaerobic Treatment and Redox Mediators

    NARCIS (Netherlands)

    Santos, dos A.B.; Traverse, J.; Cervantes, F.J.; Lier, van J.B.


    The effect of temperature, hydraulic retention time (HRT) and the redox mediator anthraquinone-2,6-disulfonate (AQDS), on electron transfer and subsequent color removal from textile wastewater was assessed in mesophilic and thermophilic anaerobic bioreactors. The results clearly show that compared w

  10. Scanning electron microscopic study of the surface of feline gastric epithelium: a simple method of removing the coating material. (United States)

    Al-Tikriti, M; Henry, R W; Al-Bagdadi, F K; Hoskins, J; Titkemeyer, C


    Scanning electron microscopic examination of the gastric surface epithelial cells is often hindered by the presence of a coating material. Several methods for removal of coating material on feline gastric mucosa were utilized. The cleansed tissues were evaluated using the scanning electron microscope to assess damage caused by the various cleansing methods to the surface epithelial cells. Washing the stretched stomach several times, including rubbing the mucosal surface with gloved fingers, yielded the best results, with no apparent damage to the surface epithelial cells. Flushing unstretched stomachs with saline only did not adequately remove coating material. Flushing unstretched stomachs with saline while stroking the surface with a cotton-tipped applicator stick removed debris but damaged the surface epithelium.

  11. Enhanced biological phosphorus removal. Carbon sources, nitrate as electron acceptor, and characterization of the sludge community

    Energy Technology Data Exchange (ETDEWEB)

    Christensson, M.


    Enhanced biological phosphorus removal (EBPR) was studied in laboratory scale experiments as well as in a full scale EBPR process. The studies were focused on carbon source transformations, the use of nitrate as an electron acceptor and characterisation of the microflora. A continuous anaerobic/aerobic laboratory system was operated on synthetic wastewater with acetate as sole carbon source. An efficient EBPR was obtained and mass balances over the anaerobic reactor showed a production of 1.45 g poly-β-hydroxyalkanoic acids (PHA), measured as chemical oxygen demand (COD), per g of acetic acid (as COD) taken up. Furthermore, phosphate was released in the anaerobic reactor in a ratio of 0.33 g phosphorus (P) per g PHA (COD) formed and 0.64 g of glycogen (COD) was consumed per g of acetic acid (COD) taken up. Microscopic investigations revealed a high amount of polyphosphate accumulating organisms (PAO) in the sludge. Isolation and characterisation of bacteria indicated Acinetobacter spp. to be abundant in the sludge, while sequencing of clones obtained in a 16S rDNA clone library showed a large part of the bacteria to be related to the high mole % G+C Gram-positive bacteria and only a minor fraction to be related to the gamma-subclass of proteobacteria to which Acinetobacter belongs. Operation of a similar anaerobic/aerobic laboratory system with ethanol as sole carbon source showed that a high EBPR can be achieved with this compound as carbon source. However, a prolonged detention time in the anaerobic reactor was required. PHA were produced in the anaerobic reactor in an amount of 1.24 g COD per g of soluble DOC taken up, phosphate was released in an amount of 0.4-0.6 g P per g PHA (COD) produced and 0.46 g glycogen (COD) was consumed per g of soluble COD taken up. Studies of the EBPR in the UCT process at the sewage treatment plant in Helsingborg, Sweden, showed the amount of volatile fatty acids (VFA) available to the PAO in the anaerobic stage to be

  12. Palladium and gold removal and recovery from precious metal solutions and electronic scrap leachates by Desulfovibrio desulfuricans. (United States)

    Creamer, Neil J; Baxter-Plant, Victoria S; Henderson, John; Potter, M; Macaskie, Lynne E


    Biomass of Desulfovibrio desulfuricans was used to recover Au(III) as Au(0) from test solutions and from waste electronic scrap leachate. Au(0) was precipitated extracellularly by a different mechanism from the biodeposition of Pd(0). The presence of Cu(2+) (approximately 2000 mg/l) in the leachate inhibited the hydrogenase-mediated removal of Pd(II) but pre-palladisation of the cells in the absence of added Cu(2+) facilitated removal of Pd(II) from the leachate and more than 95% of the Pd(II) was removed autocatalytically from a test solution supplemented with Cu(II) and Pd(II). Metal recovery was demonstrated in a gas-lift electrobioreactor with electrochemically generated hydrogen, followed by precipitation of recovered metal under gravity. A 3-stage bioseparation process for the recovery of Au(III), Pd(II) and Cu(II) is proposed.

  13. Diagnosis of cervical cancer cell taken from scanning electron and atomic force microscope images of the same patients using discrete wavelet entropy energy and Jensen Shannon, Hellinger, Triangle Measure classifier (United States)

    Aytac Korkmaz, Sevcan


    The aim of this article is to provide early detection of cervical cancer by using both Atomic Force Microscope (AFM) and Scanning Electron Microscope (SEM) images of the same patient. When the studies in the literature are examined, it is seen that the AFM and SEM images of the same patient are not used together for early diagnosis of cervical cancer. AFM and SEM images can be limited when using only one of them for the early detection of cervical cancer. Therefore, multi-modality solutions, which give more accurate results than single-modality solutions, have been realized in this paper. An optimum feature space has been obtained by applying Discrete Wavelet Entropy Energy (DWEE) to the 3 × 180 AFM and SEM images. Then, the optimum features of these images are classified with the Jensen Shannon, Hellinger, and Triangle Measure (JHT) Classifier for early diagnosis of cervical cancer. The Jensen Shannon, Hellinger, and triangle distance measures have been validated via the relationships between them. Afterwards, the diagnostic accuracy for normal, benign, and malign cervical cancer cells was found by combining the mean success rates of the Jensen Shannon, Hellinger, and Triangle Measure classifiers, which are connected with each other. Averages of diagnostic accuracy for AFM and SEM images, obtained by averaging the results of these 3 classifiers, are found as 98.29% and 97.10%, respectively. It has been observed that AFM images have higher performance than SEM images for early diagnosis of cervical cancer. Also, in the analysis made for the AFM images, the surface roughness of the malign AFM images was observed to be larger than that of the normal and benign AFM images, while the particle volume was found to be smaller.

  14. Increased electric sail thrust through removal of trapped shielding electrons by orbit chaotisation due to spacecraft body

    Directory of Open Access Journals (Sweden)

    P. Janhunen


    Full Text Available An electric solar wind sail is a recently introduced propellantless space propulsion method whose technical development has also started. The electric sail consists of a set of long, thin, centrifugally stretched and conducting tethers which are charged positively and kept in a high positive potential of order 20 kV by an onboard electron gun. The positively charged tethers deflect solar wind protons, thus tapping momentum from the solar wind stream and producing thrust. The amount of obtained propulsive thrust depends on how many electrons are trapped by the potential structures of the tethers, because the trapped electrons tend to shield the charged tether and reduce its effect on the solar wind. Here we present physical arguments and test particle calculations indicating that in a realistic three-dimensional electric sail spacecraft there exists a natural mechanism which tends to remove the trapped electrons by chaotising their orbits and causing them to eventually collide with the conducting tethers. We present calculations which indicate that if this mechanism were able to remove trapped electrons nearly completely, the electric sail performance could be about five times higher than previously estimated, about 500 nN/m, corresponding to 1 N thrust for a baseline construction with 2000 km total tether length.

  15. Dynamic system classifier (United States)

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A.


    Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.
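A minimal sketch of the oscillator model underlying the DSC, assuming simple semi-implicit Euler integration and made-up ω(t), γ(t) time lines; the actual DSC infers these coefficients within a Bayesian framework, which is not reproduced here.

```python
import numpy as np

def simulate_oscillator(omega, gamma, x0=1.0, v0=0.0, dt=1e-3, n_steps=5000):
    """Integrate x'' = -omega(t)^2 x - gamma(t) x' with semi-implicit Euler.

    omega, gamma: callables giving the time-dependent frequency and damping
    factor -- the abstract system characterization used by the DSC.
    """
    x, v = x0, v0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        v += (-omega(t) ** 2 * x - gamma(t) * v) * dt  # update velocity first
        x += v * dt                                    # then position
        xs[i] = x
    return xs

# Hypothetical time lines: slowly rising frequency, constant weak damping.
signal = simulate_oscillator(omega=lambda t: 2 * np.pi * (1 + 0.1 * t),
                             gamma=lambda t: 0.3)
print(signal[:3])
```

Classifier training would then fit such ω and γ time lines to observed signals rather than prescribe them, as in this sketch.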

  16. Diagnosis of cervical cancer cell taken from scanning electron and atomic force microscope images of the same patients using discrete wavelet entropy energy and Jensen Shannon, Hellinger, Triangle Measure classifier. (United States)

    Aytac Korkmaz, Sevcan


    The aim of this article is to provide early detection of cervical cancer by using both Atomic Force Microscope (AFM) and Scanning Electron Microscope (SEM) images of the same patient. When the studies in the literature are examined, it is seen that the AFM and SEM images of the same patient are not used together for early diagnosis of cervical cancer. AFM and SEM images can be limited when using only one of them for the early detection of cervical cancer. Therefore, multi-modality solutions, which give more accurate results than single-modality solutions, have been realized in this paper. An optimum feature space has been obtained by applying Discrete Wavelet Entropy Energy (DWEE) to the 3×180 AFM and SEM images. Then, the optimum features of these images are classified with the Jensen Shannon, Hellinger, and Triangle Measure (JHT) Classifier for early diagnosis of cervical cancer. The Jensen Shannon, Hellinger, and triangle distance measures have been validated via the relationships between them. Afterwards, the diagnostic accuracy for normal, benign, and malign cervical cancer cells was found by combining the mean success rates of the Jensen Shannon, Hellinger, and Triangle Measure classifiers, which are connected with each other. Averages of diagnostic accuracy for AFM and SEM images, obtained by averaging the results of these 3 classifiers, are found as 98.29% and 97.10%, respectively. It has been observed that AFM images have higher performance than SEM images for early diagnosis of cervical cancer. Also, in the analysis made for the AFM images, the surface roughness of the malign AFM images was observed to be larger than that of the normal and benign AFM images, while the particle volume was found to be smaller.
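A simplified stand-in for wavelet entropy-energy features can be sketched with a hand-rolled Haar transform; the per-subband entropy and energy definitions below are common choices assumed for illustration, not necessarily the paper's exact DWEE formulation.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (even-length input)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def entropy_energy_features(signal, levels=3):
    """Per-subband (Shannon entropy, energy) pairs -- a simplified stand-in
    for Discrete Wavelet Entropy Energy (DWEE) features."""
    features = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energy = np.sum(detail ** 2)
        p = detail ** 2 / energy if energy > 0 else np.zeros_like(detail)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        features.extend([entropy, energy])
    return np.array(features)

# Hypothetical 1-D profile, e.g. one row extracted from an AFM/SEM image.
rng = np.random.default_rng(0)
row = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
features = entropy_energy_features(row)
print(features)
```

Such per-subband feature vectors would then be fed to a distance-based classifier like the paper's JHT scheme.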

  17. Dynamic system classifier

    CERN Document Server

    Pumpe, Daniel; Müller, Ewald; Enßlin, Torsten A


    Stochastic differential equations describe well many physical, biological and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiment...

  18. Impact of the amount of working fluid in loop heat pipe to remove waste heat from electronic component

    Directory of Open Access Journals (Sweden)

    Smitka Martin


    Full Text Available One of the options for removing waste heat from electronic components is the loop heat pipe. The loop heat pipe (LHP) is a two-phase device with high effective thermal conductivity that utilizes phase change to transport heat. It was invented in Russia in the early 1980s. The main parts of the LHP are an evaporator, a condenser, a compensation chamber, and vapor and liquid lines. Only the evaporator and part of the compensation chamber are equipped with a wick structure. A working fluid, such as distilled water, acetone, ammonia, or methanol, is contained inside the loop heat pipe. The amount of filling is important for the operation and performance of the LHP. This work deals with the design of a loop heat pipe and the impact of the filling ratio of the working fluid on the removal of waste heat from an insulated gate bipolar transistor (IGBT).

  19. Ionic Polymer-Based Removable and Charge-Dissipative Coatings for Space Electronic Applications Project (United States)

    National Aeronautics and Space Administration — Protection of critical electronic systems in spacecraft and satellites is imperative for NASA's future missions to high-energy, outer-planet environments. The...

  20. Removal of Oxygen from Electronic Materials by Vapor-Phase Processes (United States)

    Palosz, Witold


    Thermochemical analyses of equilibrium partial pressures over oxides with and without the presence of the respective element condensed phase, and hydrogen, chalcogens, hydrogen chalcogenides, and graphite are presented. Theoretical calculations are supplemented with experimental results on the rate of decomposition and/or sublimation/vaporization of the oxides under dynamic vacuum, and on the rate of reaction with hydrogen, graphite, and chalcogens. Procedures of removal of a number of oxides under different conditions are discussed.

  1. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte


    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...
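The univariate classification scheme can be illustrated with a quantile-threshold rule on each market's own return distribution; the 5% tail cutoff and the synthetic returns below are assumptions for illustration only.

```python
import numpy as np

def classify_extreme(returns, tail=0.05):
    """Univariate scheme: flag returns in the lower or upper `tail` quantile
    of the market's own distribution as extreme (threshold is assumed)."""
    r = np.asarray(returns, dtype=float)
    lo, hi = np.quantile(r, [tail, 1 - tail])
    return (r < lo) | (r > hi)

# Synthetic daily returns standing in for one market's series.
rng = np.random.default_rng(1)
daily_returns = rng.normal(0, 0.01, size=1000)
flags = classify_extreme(daily_returns)
print(flags.mean())  # fraction flagged, close to 2 * tail
```

A multivariate scheme would instead condition on the joint distribution across markets, so a return can be extreme jointly even if neither marginal crosses its own threshold.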

  2. Energy-Efficient Neuromorphic Classifiers. (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano


    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is extremely low, comparable to that of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  3. Effect of high electron donor supply on dissimilatory nitrate reduction pathways in a bioreactor for nitrate removal

    DEFF Research Database (Denmark)

    Behrendt, Anna; Tarre, Sheldon; Beliavski, Michael;


    The possible shift of a bioreactor for NO3- removal from predominantly denitrification (DEN) to dissimilatory nitrate reduction to ammonium (DNRA) by elevated electron donor supply was investigated. By increasing the C/NO3- ratio in one of two initially identical reactors, the production of high sulfide concentrations was induced. The response of the dissimilatory NO3- reduction processes to the increased availability of organic carbon and sulfide was monitored in a batch incubation system. The expected shift from a DEN- towards a DNRA-dominated bioreactor was not observed, also not under conditions where DNRA would be thermodynamically favorable.

  4. Electronic health record use to classify patients with newly diagnosed versus preexisting type 2 diabetes: infrastructure for comparative effectiveness research and population health management. (United States)

    Kudyakov, Rustam; Bowen, James; Ewen, Edward; West, Suzanne L; Daoud, Yahya; Fleming, Neil; Masica, Andrew


    Use of electronic health record (EHR) content for comparative effectiveness research (CER) and population health management requires significant data configuration. A retrospective cohort study was conducted using patients with diabetes followed longitudinally (N=36,353) in the EHR deployed at outpatient practice networks of 2 health care systems. A data extraction and classification algorithm targeting identification of patients with a new diagnosis of type 2 diabetes mellitus (T2DM) was applied, with the main criterion being a minimum 30-day window between the first visit documented in the EHR and the entry of T2DM on the EHR problem list. Chart reviews (N=144) validated the performance of refining this EHR classification algorithm with external administrative data. Extraction using EHR data alone designated 3205 patients as newly diagnosed with T2DM, with a classification accuracy of 70.1%. Use of external administrative data on that preselected population improved the classification accuracy of cases identified as new T2DM diagnoses (positive predictive value was 91.9% with that step). Laboratory and medication data did not help case classification. The final cohort using this 2-stage classification process comprised 1972 patients with a new diagnosis of T2DM. Use of data from current EHR systems for CER and disease management mandates substantial tailoring. The quality of EHR clinical data generated in daily care differs from that required for population health research. As evidenced by this process for classification of newly diagnosed T2DM cases, validation of EHR data with external sources can be a valuable step.
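The first-stage classification criterion (a minimum 30-day window between the first documented visit and the T2DM problem-list entry) can be sketched as a simple date rule; the function and field names below are hypothetical, not from the study's implementation.

```python
from datetime import date, timedelta

def is_new_t2dm(first_visit: date, t2dm_problem_entry: date,
                min_window_days: int = 30) -> bool:
    """First-stage EHR rule from the study: classify a patient as newly
    diagnosed T2DM only if the problem-list entry of T2DM appears at least
    `min_window_days` after the first visit documented in the EHR."""
    return (t2dm_problem_entry - first_visit) >= timedelta(days=min_window_days)

# Hypothetical patients.
print(is_new_t2dm(date(2020, 1, 1), date(2020, 3, 1)))   # entry 60 days later -> True
print(is_new_t2dm(date(2020, 1, 1), date(2020, 1, 15)))  # entry 14 days later -> False
```

The study's second stage then refines these candidates against external administrative data, which is where the positive predictive value rose to 91.9%.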

  5. Efficient electron-induced removal of oxalate ions and formation of copper nanoparticles from copper(II) oxalate precursor layers

    Directory of Open Access Journals (Sweden)

    Kai Rückriem


    Full Text Available Copper(II) oxalate grown on carboxy-terminated self-assembled monolayers (SAMs) using a step-by-step approach was used as precursor for the electron-induced synthesis of surface-supported copper nanoparticles. The precursor material was deposited by dipping the surfaces alternately in ethanolic solutions of copper(II) acetate and oxalic acid with intermediate thorough rinsing steps. The deposition of copper(II) oxalate and the efficient electron-induced removal of the oxalate ions were monitored by reflection absorption infrared spectroscopy (RAIRS). Helium ion microscopy (HIM) reveals the formation of spherical nanoparticles with well-defined size and X-ray photoelectron spectroscopy (XPS) confirms their metallic nature. Continued irradiation after depletion of oxalate does not lead to further particle growth, giving evidence that nanoparticle formation is primarily controlled by the available amount of precursor.

  6. Classifier in Age classification

    Directory of Open Access Journals (Sweden)

    B. Santhi


    Full Text Available The face is an important feature of human beings. We can derive various properties of a human by analyzing the face. The objective of the study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely child and old. This study addresses the limitations in the existing classifiers, as it uses the Grey Level Co-occurrence Matrix (GLCM) for feature extraction and a Support Vector Machine (SVM) for classification. This improves the accuracy of the classification, as it outperforms the existing methods.
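The GLCM feature-extraction step can be sketched as follows; the pixel offset, grey-level count, and the contrast/homogeneity features are common choices assumed for illustration (the study pairs such texture features with an SVM, which is omitted here).

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for one pixel offset (dx, dy)."""
    img = np.asarray(image)
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize counts to joint probabilities

def texture_features(p):
    """Contrast and homogeneity: two classic GLCM features that could
    feed a classifier such as an SVM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# Hypothetical 4x4 face patch quantized to 8 grey levels.
patch = np.array([[0, 1, 1, 2],
                  [0, 1, 2, 3],
                  [1, 2, 3, 3],
                  [2, 3, 3, 4]])
c, h = texture_features(glcm(patch))
print(round(c, 3), round(h, 3))  # prints "0.75 0.625"
```

In practice several offsets and angles are accumulated and the resulting feature vectors, one per face image, are used to train the child/old SVM.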

  7. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle


    Full Text Available IGC (Intelligent Garbage Classifier is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  8. Classifying Linear Canonical Relations


    Lorand, Jonathan


    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  9. Generalized classifier neural network. (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu


    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network: input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network features gradient descent based optimization of the smoothing parameter and a diverge effect term added to the calculation. The diverge effect term is an improvement on the summation layer calculation to supply additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performance proves the effectiveness of the proposed neural network.

  10. A comparative scanning electron microscopy evaluation of smear layer removal with apple vinegar and sodium hypochlorite associated with EDTA

    Directory of Open Access Journals (Sweden)

    George Táccio de Miranda Candeiro


    Full Text Available OBJECTIVE: The purpose of this study was to evaluate by scanning electron microscopy (SEM) the removal of smear layer from the middle and apical root thirds after use of different irrigating solutions. MATERIAL AND METHODS: Forty roots of permanent human teeth had their canals instrumented and were randomly assigned to 4 groups (n=10), according to the irrigating solution: apple vinegar (group A), apple vinegar finished with 17% ethylenediaminetetraacetic acid (EDTA) (group B), 1% sodium hypochlorite (NaOCl) finished with 17% EDTA (group C) and saline (group D - control). After chemomechanical preparation, the roots were cleaved longitudinally and their middle and apical thirds were examined by SEM at ×1,000 magnification. Two calibrated examiners (kappa=0.92) analyzed the SEM micrographs qualitatively, attributing scores that indicated the efficacy of the solutions in removing the smear layer from the surface of the dentin tubules (1 - poor, 2 - good and 3 - excellent). Data from the control and experimental groups were analyzed by the Kruskal-Wallis and Dunn's tests, while the Wilcoxon test was used to compare the middle and apical thirds of the canals within the same group (α=0.05). RESULTS: The middle third presented less smear layer than the apical third, regardless of the irrigant. There was a statistically significant difference (p=0.0402) among the groups in the middle third. In the apical third, the apple vinegar/EDTA group showed the greatest removal of smear layer (p=0.0373). CONCLUSION: Apple vinegar, associated or not with EDTA, was effective in removing smear layer when used as an endodontic irrigant.

  11. Educating Health Professionals about the Electronic Health Record (EHR): Removing the Barriers to Adoption

    Directory of Open Access Journals (Sweden)

    Paule Bellwood


    Full Text Available In the healthcare industry there has been a significant rise in the use of electronic health records (EHRs) in health care settings (e.g. hospital, clinic, physician office and home). Three main barriers to the adoption of these technologies have arisen: (1) a shortage of health professional faculty who are familiar with EHRs and related technologies, (2) a shortage of health informatics specialists who can implement these technologies, and (3) poor access to differing types of EHR software. In this paper we outline a novel solution to these barriers: the development of a web portal that provides faculty and health professional students with access to multiple differing types of EHRs over the WWW. The authors describe how the EHR is currently being used in educational curricula and how the portal has overcome many of these barriers, and briefly describe the strengths and limitations of the approach.

  12. Effect of high electron donor supply on dissimilatory nitrate reduction pathways in a bioreactor for nitrate removal. (United States)

    Behrendt, Anna; Tarre, Sheldon; Beliavski, Michael; Green, Michal; Klatt, Judith; de Beer, Dirk; Stief, Peter


    The possible shift of a bioreactor for NO3(-) removal from predominantly denitrification (DEN) to dissimilatory nitrate reduction to ammonium (DNRA) under elevated electron donor supply was investigated. By increasing the C/NO3(-) ratio in one of two initially identical reactors, the production of high sulfide concentrations was induced. The response of the dissimilatory NO3(-) reduction processes to the increased availability of organic carbon and sulfide was monitored in a batch incubation system. The expected shift from a DEN- towards a DNRA-dominated bioreactor was not observed, even under conditions where DNRA would be thermodynamically favorable. Remarkably, the microbial community exposed to a high C/NO3(-) ratio and sulfide concentration did not use the most energy-yielding process.

  13. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory


    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and to Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification, and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers, and it shows that the approach is interesting from both a theoretical and a practical perspective.
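The weighted median that stack filters generalize is easy to compute directly; a minimal sketch (the toy signal and weights are illustrative) is:

```python
def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight reaches
    half of the total weight. With all weights equal this reduces to the
    ordinary sample median that stack filters generalize."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

signal = [3, 1, 9, 4, 2]
weights = [1, 1, 3, 1, 1]  # centre sample weighted more heavily
print(weighted_median(signal, weights))  # → 4
```

A stack filter applies a positive Boolean function on threshold decompositions of the signal; the weighted median is the simplest member of that family.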

  14. Classifying TDSS Stellar Variables (United States)

    Amaro, Rachael Christina; Green, Paul J.; TDSS Collaboration


    The Time Domain Spectroscopic Survey (TDSS), a subprogram of SDSS-IV eBOSS, obtains classification/discovery spectra of point-source photometric variables selected from PanSTARRS and SDSS multi-color light curves regardless of object color or lightcurve shape. Tens of thousands of TDSS spectra are already available and have been spectroscopically classified both via pipeline and by visual inspection. About half of these spectra are quasars, half are stars. Our goal is to classify the stars with their correct variability types. We do this by acquiring public multi-epoch light curves for brighter stars (rprogram for analyzing astronomical time-series data, to constrain variable type both for broad statistics relevant to future surveys like the Transiting Exoplanet Survey Satellite (TESS) and the Large Synoptic Survey Telescope (LSST), and to find the inevitable exotic oddballs that warrant further follow-up. Specifically, the Lomb-Scargle Periodogram and the Box-Least Squares Method are being implemented and tested against their known variable classifications and parameters in the Catalina Surveys Periodic Variable Star Catalog. Variable star classifications include RR Lyr, close eclipsing binaries, CVs, pulsating white dwarfs, and other exotic systems. The key difference between our catalog and others is that along with the light curves, we will be using TDSS spectra to help in the classification of variable type, as spectra are rich with information allowing estimation of physical parameters like temperature, metallicity, gravity, etc. This work was supported by the SDSS Research Experience for Undergraduates program, which is funded by a grant from Sloan Foundation to the Astrophysical Research Consortium.
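The Lomb-Scargle periodogram mentioned above is a standard tool for period searches in unevenly sampled light curves; a minimal pure-Python sketch of the classical formula (the toy time series and frequency grid are illustrative, not survey data) is:

```python
import math

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data.
    Returns one power value per trial frequency."""
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]  # mean-subtracted data
    powers = []
    for f in freqs:
        w = 2 * math.pi * f
        # time offset tau that makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        yc_c = sum(yi * ci for yi, ci in zip(yc, c))
        yc_s = sum(yi * si for yi, si in zip(yc, s))
        powers.append(0.5 * (yc_c ** 2 / sum(ci ** 2 for ci in c)
                             + yc_s ** 2 / sum(si ** 2 for si in s)))
    return powers

# irregularly sampled sinusoid with true frequency 0.4 cycles per unit time
t = [0.0, 0.7, 1.3, 2.1, 3.0, 3.8, 4.4, 5.2, 6.1, 6.9, 7.5, 8.3, 9.0, 9.8]
y = [math.sin(2 * math.pi * 0.4 * ti) for ti in t]
freqs = [0.05 * k for k in range(1, 21)]
best = max(zip(lomb_scargle(t, y, freqs), freqs))[1]  # frequency of peak power
print(best)
```

Production pipelines typically use a library implementation (e.g. astropy's LombScargle) rather than this direct O(N·Nfreq) loop, but the statistic computed is the same.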

  15. Application of ultrasound and air stripping for the removal of aromatic hydrocarbons from spent sulfidic caustic for use in autotrophic denitrification as an electron donor. (United States)

    Lee, Jae-Ho; Park, Jeung-Jin; Choi, Gi-Choong; Byun, Im-Gyu; Park, Tae-Joo; Lee, Tae-Ho


    Spent sulfidic caustic (SSC) produced by the petroleum industry can be reused as an electron donor for sulfur-based autotrophic denitrification in a biological nitrogen removal process, because it contains a large amount of dissolved sulfur. However, SSC must first be refined, because it also contains some aromatic hydrocarbons, typically benzene, toluene, ethylbenzene, xylene (BTEX) and phenol, which are recalcitrant organic compounds. In this study, laboratory-scale ultrasound irradiation and air stripping treatments were applied to remove these aromatic hydrocarbons. In the ultrasound system, both BTEX and phenol were removed exponentially during 60 min of reaction time, reaching a maximum removal efficiency of about 80%. In the air stripping system, about 95% BTEX removal was achieved within 30 min, but without any significant phenol removal, indicating that air stripping was the more efficient method for BTEX. Since air stripping did not remove phenol, an additional process for degrading phenol was required; we therefore applied a combined ultrasound and air stripping process. In these experiments, the removal efficiencies of BTEX and phenol were improved compared to ultrasound or air stripping alone. The combined ultrasound and air stripping treatment is thus appropriate for refining SSC.

  16. Emergent behaviors of classifier systems

    Energy Technology Data Exchange (ETDEWEB)

    Forrest, S.; Miller, J.H.


    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems arise, to study how such behaviors emerge over time, and to make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  17. Subsurface Biogeochemical Heterogeneity (Field-scale removal of U(VI) from groundwater in an alluvial aquifer by electron donor amendment)

    Energy Technology Data Exchange (ETDEWEB)

    Long, Philip E.; Lovley, Derek R.; N' Guessan, A. L.; Nevin, Kelly; Resch, C. T.; Arntzen, Evan; Druhan, Jenny; Peacock, Aaron; Baldwin, Brett; Dayvault, Dick; Holmes, Dawn; Williams, Ken; Hubbard, Susan; Yabusaki, Steve; Fang, Yilin; White, D. C.; Komlos, John; Jaffe, Peter


    Determine if biostimulation of alluvial aquifers by electron donor amendment can effectively remove U(VI) from groundwater at the field scale. Uranium contamination in groundwater is a significant problem at several DOE sites. In this project, the possibility of accelerating bioreduction of U(VI) to U(IV) as a means of decreasing U(VI) concentrations in groundwater is directly addressed by conducting a series of field-scale experiments. Scientific goals include demonstrating the quantitative linkage between microbial activity and U loss from groundwater and relating the dominant terminal electron accepting processes to the rate of U loss. The project is currently focused on understanding the mechanisms for unexpected long-term ({approx}2 years) removal of U after stopping electron donor amendment. Results obtained in the project successfully position DOE and others to apply biostimulation broadly to U contamination in alluvial aquifers.

  18. Removal of CO from CO-contaminated hydrogen gas by carbon-supported rhodium porphyrins using water-soluble electron acceptors (United States)

    Yamazaki, Shin-ichi; Siroma, Zyun; Asahi, Masafumi; Ioroi, Tsutomu


    Carbon-supported Rh porphyrins catalyze the oxidation of carbon monoxide by water-soluble electron acceptors. The rate of this reaction is plotted as a function of the redox potential of the electron acceptor: the rate increases with the redox potential until it reaches a plateau. This profile can be explained in terms of the electrocatalytic CO oxidation activity of the Rh porphyrin. The removal of CO from CO(2%)/H2 by a solution containing a carbon-supported Rh porphyrin and an electron acceptor is examined. The complete conversion of CO to CO2 is achieved with only a small amount of Rh porphyrin; Rh porphyrin on carbon black gives higher conversion than that dissolved in solution. This reaction can be used not only to remove CO from the anode gas of stationary polymer electrolyte fuel cells but also to regenerate a reductant in indirect CO fuel cell systems.

  19. Feature Selection and Effective Classifiers. (United States)

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri


    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, results of upper classifiers perform better than lower…

  20. Sampling Based Average Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Jian Hou


    Although many fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little has been done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. We find that, by proper sampling of soft labels and classifiers, average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
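The paper's sampling scheme is not detailed in the abstract; for reference, the plain average-fusion baseline it improves on can be sketched as follows (the soft labels below are made-up values):

```python
def average_fusion(soft_labels):
    """Average fusion: element-wise mean of the classifiers' soft labels
    (per-class scores); the predicted class is the argmax of the mean."""
    n = len(soft_labels)
    n_classes = len(soft_labels[0])
    fused = [sum(sl[c] for sl in soft_labels) / n for c in range(n_classes)]
    return fused.index(max(fused))

# three classifiers' soft labels for one sample, two classes
soft = [[0.6, 0.4], [0.7, 0.3], [0.4, 0.6]]
print(average_fusion(soft))  # → 0
```

Sampling-based average fusion would average over sampled subsets of these soft labels and classifiers instead of the full set; the decision rule stays the same.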

  1. Classified

    CERN Multimedia

    Computer Security Team


    In the last issue of the Bulletin, we discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection; confidentiality is another. However, confidentiality and data protection are often perceived as irrelevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, it also had a negative impact on CERN’s reputation --- i...

  2. Optimally Training a Cascade Classifier

    CERN Document Server

    Shen, Chunhua; Hengel, Anton van den


    Cascade classifiers are widely used in real-time object detection. Different from conventional classifiers that are designed for a low overall classification error rate, a classifier in each node of the cascade is required to achieve an extremely high detection rate and a moderate false positive rate. Although there are a few reported methods addressing this requirement in the context of object detection, there is no principled feature selection method that explicitly takes into account this asymmetric node learning objective. We provide such an algorithm here. We show that a special case of the biased minimax probability machine has the same formulation as the linear asymmetric classifier (LAC) of \cite{wu2005linear}. We then design a new boosting algorithm that directly optimizes the cost function of LAC. The resulting totally-corrective boosting algorithm is implemented by the column generation technique in convex optimization. Experimental results on object detection verify the effectiveness of the proposed bo...

  3. Combining different types of classifiers


    Gatnar, Eugeniusz


    Model fusion has proved to be a very successful strategy for obtaining accurate models in classification and regression. The key issue, however, is the diversity of the component classifiers because classification error of an ensemble depends on the correlation between its members. The majority of existing ensemble methods combine the same type of models, e.g. trees. In order to promote the diversity of the ensemble members, we propose to aggregate classifiers of different t...

  4. Effect of different final irrigating solutions on smear layer removal in apical third of root canal: A scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Sayesh Vemuri


    Full Text Available Aim: The aim of this in vitro study was to compare the smear layer removal efficacy of different irrigating solutions at the apical third of the root canal. Materials and Methods: Forty human single-rooted mandibular premolar teeth were taken and decoronated to standardize the canal length to 14 mm. They were prepared with the ProTaper rotary system to an apical preparation of file size F3. Prepared teeth were randomly divided into four groups (n = 10): saline (Group 1; negative control), ethylenediaminetetraacetic acid (Group 2), BioPure MTAD (Group 3), and QMix 2 in 1 (Group 4). After final irrigation with the tested irrigants, the teeth were split into two halves longitudinally and observed under a scanning electron microscope (SEM) for the removal of smear layer. The SEM images were then analyzed for the amount of smear layer present using a three-score system. Statistical Analysis: Data were analyzed using the Kruskal-Wallis test and Mann-Whitney U-test. Results: Intergroup comparison showed a statistically significant difference in the smear layer removal efficacy of the irrigants tested. QMix 2 in 1 was most effective in removal of smear layer when compared to the other tested irrigants. Conclusion: QMix 2 in 1 was the most effective final irrigating solution for smear layer removal.

  5. Optimal weighted nearest neighbour classifiers

    CERN Document Server

    Samworth, Richard J


    We derive an asymptotic expansion for the excess risk (regret) of a weighted nearest-neighbour classifier. This allows us to find the asymptotically optimal vector of non-negative weights, which has a rather simple form. We show that the ratio of the regret of this classifier to that of an unweighted $k$-nearest neighbour classifier depends asymptotically only on the dimension $d$ of the feature vectors, and not on the underlying population densities. The improvement is greatest when $d=4$, but thereafter decreases as $d \\rightarrow \\infty$. The popular bagged nearest neighbour classifier can also be regarded as a weighted nearest neighbour classifier, and we show that its corresponding weights are somewhat suboptimal when $d$ is small (in particular, worse than those of the unweighted $k$-nearest neighbour classifier when $d=1$), but are close to optimal when $d$ is large. Finally, we argue that improvements in the rate of convergence are possible under stronger smoothness assumptions, provided we allow nega...
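As a sketch of the weighted nearest-neighbour idea (the toy data and weight vector are illustrative; the paper's asymptotically optimal weights are not reproduced here):

```python
def weighted_knn(train, x, weights):
    """Weighted nearest-neighbour vote: the i-th nearest neighbour
    contributes weight w_i to its class. Plain k-NN is the special case
    w = (1/k, ..., 1/k, 0, ...); bagged NN corresponds to another
    particular weight profile."""
    ranked = sorted(train,
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    votes = {}
    for (_, label), w in zip(ranked, weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

train = [((0.0,), "A"), ((0.2,), "A"), ((1.0,), "B"), ((1.1,), "B"), ((0.9,), "B")]
print(weighted_knn(train, (0.15,), [0.5, 0.3, 0.2]))  # two nearest are class A
```

The paper's contribution is choosing the weight vector to minimize asymptotic regret as a function of the dimension d; this sketch only shows the decision rule that the weights plug into.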

  6. Effects of brushing in a classifying machine on the cuticles of Fuji and Gala apples

    Directory of Open Access Journals (Sweden)

    Renar João Bender


    Full Text Available The cuticle, a layer that covers the fruit epidermis, has a protective function against environmental stresses such as wind, temperature, chemicals and drought, not only when the fruit is attached to the plant, but also after harvest. Some postharvest procedures may influence the external layers of the fruit, such as the cuticle. The objective of this work was to evaluate the effects of brushing in a classifying machine on the cuticles of apples under scanning electron microscopy (SEM). Two experiments were conducted to test brushing on the cultivars Fuji and Gala using heavy and smooth brushes. The experiments consisted of three replicates of three apples each, with three samples taken from the equatorial area of the fruit to be analyzed under SEM. The brushes of the classifying machine altered the cuticular layer, dragging it, modifying the structure, removing crystalloids of the cuticular wax layer, and forming cracks. There were no differences between the effects of the two types of brushes tested on the cuticles of the apples. The classifying machine used commercially is capable of producing effects similar to those encountered in the brushing experiments conducted on the prototype in the laboratory, partially removing the protective wax of the apple’s cuticle.

  7. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal


    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. To help readers clearly grasp hybridization, the book primarily focuses on introducing the different levels of hybridization and illuminating the problems faced when dealing with such projects. The data and knowledge incorporated in hybridization are treated first, followed by the still-growing area of classifier systems known as combined classifiers. The book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  8. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus


    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  9. Comparative evaluation of 15% ethylenediamine tetra-acetic acid plus cetavlon and 5% chlorine dioxide in removal of smear layer: A scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Sandeep Singh


    Full Text Available Aims: The purpose of this study was to compare the efficacy of smear layer removal by 5% chlorine dioxide and 15% ethylenediamine tetra-acetic acid plus Cetavlon (EDTAC) from human root canal dentin. Materials and Methods: Fifty single-rooted human mandibular anterior teeth were divided into two groups of 20 teeth each and a control group of 10 teeth. The root canals were prepared to F3 ProTaper and initially irrigated with 2% sodium hypochlorite, followed by 1 min irrigation with 15% EDTAC or 5% chlorine dioxide, respectively. The control group was irrigated with saline. The teeth were longitudinally split and observed under a scanning electron microscope (SEM) (×2000). Statistical Analysis Used: The statistical analysis was done using a General Linear Mixed Model. Results: At the coronal third, no statistically significant difference was found between 15% EDTAC and 5% chlorine dioxide in removing smear layer. In the middle and apical third regions, 15% EDTAC showed better smear layer removal ability than 5% chlorine dioxide. Conclusion: Final irrigation with 15% EDTAC is superior to 5% chlorine dioxide in removing smear layer in the middle and apical thirds of radicular dentin.

  10. Comparative evaluation of 15% ethylenediamine tetra-acetic acid plus cetavlon and 5% chlorine dioxide in removal of smear layer: A scanning electron microscope study (United States)

    Singh, Sandeep; Arora, Vimal; Majithia, Inderpal; Dhiman, Rakesh Kumar; Kumar, Dinesh; Ather, Amber


    Aims: The purpose of this study was to compare the efficacy of smear layer removal by 5% chlorine dioxide and 15% Ethylenediamine Tetra-Acetic Acid plus Cetavlon (EDTAC) from the human root canal dentin. Materials and Methods: Fifty single-rooted human mandibular anterior teeth were divided into two groups of 20 teeth each and a control group of 10 teeth. The root canals were prepared till F3 ProTaper and initially irrigated with 2% sodium hypochlorite, followed by 1 min irrigation with 15% EDTAC or 5% chlorine dioxide, respectively. The control group was irrigated with saline. The teeth were longitudinally split and observed under a scanning electron microscope (SEM) (×2000). Statistical Analysis Used: The statistical analysis was done using a General Linear Mixed Model. Results: At the coronal third, no statistically significant difference was found between 15% EDTAC and 5% chlorine dioxide in removing smear layer. In the middle and apical third regions, 15% EDTAC showed better smear layer removal ability than 5% chlorine dioxide. Conclusion: Final irrigation with 15% EDTAC is superior to 5% chlorine dioxide in removing smear layer in the middle and apical thirds of radicular dentin. PMID:23853455

  11. Comparative scanning electron microscopy evaluation of Canal Brushing technique, sonic activation, and master apical file for the removal of triple antibiotic paste from root canal (in vitro study)

    Directory of Open Access Journals (Sweden)

    Deepa Ashoksingh Thakur


    Full Text Available Aims: To compare and evaluate the effectiveness of the Canal Brushing technique, sonic activation, and master apical file (MAF) for the removal of triple antibiotic paste (TAP) from the root canal using scanning electron microscopy (SEM). Materials and Methods: Twenty-two single-rooted teeth were instrumented with ProTaper up to size F2 and dressed with TAP. TAP was removed with the Canal Brush technique (Group I, n: 6), sonic activation (EndoActivator) (Group II, n: 6), or MAF (Group III, n: 6). Four teeth served as positive (n: 2) and negative (n: 2) controls. The roots were split in the buccolingual direction and prepared for SEM examination (×1000) at the coronal, middle, and apical thirds. Three examiners evaluated wall cleanliness. Statistical Analysis: Statistical analysis was performed by the Kruskal–Wallis test and Wilcoxon rank sum test. Results: The difference in cleanliness between the three groups was statistically significant in the cervical region only; in pairwise comparison in the cervical region, Canal Brush and sonic activation showed more removal of TAP than MAF. Conclusions: Canal Brush and the sonic activation system showed better results than MAF in the cervical and middle thirds of the canal. In the apical third, none of the techniques showed a better result, and none achieved complete removal of TAP from the canal.

  12. Multi-input distributed classifiers for synthetic genetic circuits.

    Directory of Open Access Journals (Sweden)

    Oleg Kanakov

    Full Text Available For practical construction of complex synthetic genetic networks able to perform elaborate functions it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using soft learning strategy, characterized by probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design contributes to the development of robust and predictable synthetic biosensors, which have the potential to affect applications in a lot of fields, including that of medicine and industry.

  13. Multi-input distributed classifiers for synthetic genetic circuits. (United States)

    Kanakov, Oleg; Kotelnikov, Roman; Alsaedi, Ahmed; Tsimring, Lev; Huerta, Ramón; Zaikin, Alexey; Ivanchenko, Mikhail


    For practical construction of complex synthetic genetic networks able to perform elaborate functions it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using soft learning strategy, characterized by probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design contributes to the development of robust and predictable synthetic biosensors, which have the potential to affect applications in a lot of fields, including that of medicine and industry.
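The hard learning strategy described above (discard any cell that misclassifies at least one training example, and let the survivors form the classifier) can be sketched with hypothetical threshold "cells"; the cells, thresholds and training data below are invented for illustration:

```python
def hard_prune(cells, training_set):
    """Hard learning for a distributed classifier: keep only the cells that
    answer every training example correctly; the pruned population's
    combined output is then the classifier."""
    return [cell for cell in cells
            if all(cell(x) == y for x, y in training_set)]

# toy 'cells': single-input threshold classifiers (hypothetical stand-ins
# for the paper's genetic circuits)
cells = [lambda x, t=t: int(x > t) for t in (0.2, 0.5, 0.8)]
training = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
survivors = hard_prune(cells, training)
print(len(survivors))  # → 1  (only the t=0.5 cell fits all examples)
```

Soft learning, used for inseparable problems, would instead discard a failing cell only with some probability per training iteration rather than deterministically.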

  14. Classifying Cereal Data (Earlier Methods) (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  15. Maximum margin Bayesian network classifiers. (United States)

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian


    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  16. Energy Efficient Removal of Volatile Organic Compounds (VOCs) and Organic Hazardous Air Pollutants (o-HAPs) from Industrial Waste Streams by Direct Electron Oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, A. L.


    This research program investigated and quantified the capability of direct electron beam destruction of volatile organic compounds and organic hazardous air pollutants in model industrial waste streams, and calculated the energy savings that would be realized by widespread adoption of the technology over traditional pollution control methods. Specifically, this research determined the electron beam dose required to remove 19 of the most important non-halogenated air pollutants from waste streams and constructed a technical and economic model for the implementation of the technology in key industries including petroleum refining, organic & solvent chemical production, food & beverage production, and forest & paper products manufacturing. Energy savings of 75 - 90% and greenhouse gas reductions of 66 - 95% were calculated for the target market segments.

  17. 76 FR 34761 - Classified National Security Information (United States)


    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security......

  18. Classifying self-gravitating radiations

    CERN Document Server

    Kim, Hyeong-Chan


    We study static systems of self-gravitating radiations confined in a sphere, using numerical and analytic calculations, and classify and analyze the solutions systematically. Due to the scaling symmetry, any solution can be represented as a segment of a solution curve on a plane of two-dimensional scale-invariant variables. We find that a system can be conveniently parametrized by three parameters representing the solution curve, the scaling, and the system size, instead of by the parameters defined at the outer boundary. The solution curves are classified into three types: regular solutions, and conically singular solutions with and without an object that resembles an event horizon up to causal disconnectedness. For the last type, the behavior of a self-gravitating system is simple enough to allow analytic calculations.


    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa


    The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process based on the chi-squared test to verify the existence of dependence between variables in the data model, in order to identify the reasons that prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike in other existing work, as well as adjustments for the case of limit values between two adjacent classes. Furthermore, variable weights, computed with the mutual information function, are used in the calculation of the a posteriori probabilities. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in error rate was observed: the naïve Bayesian network's error rate dropped from twenty-five percent to five percent relative to the initial classification results, while in the hierarchical network the error rate not only dropped by fifteen percent but reached zero.
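    The chi-squared screening step mentioned in the abstract can be sketched as follows: compute the chi-squared statistic of a feature/class contingency table and keep only features whose statistic indicates dependence. A self-contained sketch on made-up counts (a real pipeline would compare the statistic against a chi-squared quantile for the table's degrees of freedom):

```python
# Chi-squared test of independence on a contingency table, used here to
# flag feature/class dependence before building the Bayesian network.
# Pure-Python sketch; the counts below are invented for illustration.

def chi_squared(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Feature strongly associated with the class -> large statistic.
dependent = chi_squared([[40, 10], [10, 40]])
# Feature independent of the class -> statistic near zero.
independent = chi_squared([[25, 25], [25, 25]])
```

    Features whose statistic falls below the chosen quantile are treated as independent of the class and dropped from the model.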

  20. Smear layer removal efficacy of combination of herbal extracts in two different ratios either alone or supplemented with sonic agitation: An in vitro scanning electron microscope study (United States)

    Chhabra, Naveen; Gyanani, Hitesh; Kamatagi, Laxmikant


    Objectives: The study aimed to evaluate the effectiveness of a combination of two natural extracts in varying ratios for removal of the smear layer, either alone or supplemented with sonic agitation. Materials and Methods: Fifty extracted single-rooted teeth were collected, disinfected, and decoronated below the cementoenamel junction to obtain a standardized root length of 10 mm. Root canals were instrumented using rotary files to a working length 1 mm short of the apex. Specimens were divided into six groups according to the irrigation protocol as follows: Group A, distilled water; Group B, 17% ethylenediaminetetraacetic acid; Group C, herbal extracts in a 1:1 ratio; Group D, herbal extracts in a 1:1 ratio supplemented with sonic agitation; Group E, herbal extracts in a 2:1 ratio; Group F, herbal extracts in a 2:1 ratio supplemented with sonic agitation. Specimens were longitudinally sectioned and evaluated under a scanning electron microscope for smear layer removal efficacy. The scores obtained were statistically analyzed using one-way analysis of variance and a post-hoc test. Results: Among all groups, Group B showed the best results, followed by Group F; the remaining groups showed inferior outcomes. Conclusion: The herbal extracts in a 2:1 ratio were slightly better than the 1:1 ratio, and smear layer removal efficacy was further improved when accompanied by sonic agitation. PMID:26430300

  1. Effectiveness of four different final irrigation activation techniques on smear layer removal in curved root canals : a scanning electron microscopy study.

    Directory of Open Access Journals (Sweden)

    Puneet Ahuja


    The aim of this study was to assess the efficacy of apical negative pressure (ANP), manual dynamic agitation (MDA), passive ultrasonic irrigation (PUI), and needle irrigation (NI) as final irrigation activation techniques for smear layer removal in curved root canals. Mesiobuccal root canals of 80 freshly extracted maxillary first molars with curvatures ranging between 25° and 35° were used. A glide path with #08-15 K files was established before cleaning and shaping with Mtwo rotary instruments (VDW, Munich, Germany) up to size 35/0.04 taper. During instrumentation, 1 ml of 2.5% NaOCl was used at each change of file. Samples were divided into 4 equal groups (n=20) according to the final irrigation activation technique: group 1, apical negative pressure (EndoVac); group 2, manual dynamic agitation; group 3, passive ultrasonic irrigation; and group 4, needle irrigation. Root canals were split longitudinally and subjected to scanning electron microscopy. The presence of smear layer at the coronal, middle, and apical levels was evaluated by superimposing a 300-μm square grid over the photomicrographs, using a four-score scale at ×1,000 magnification. Amongst all the groups tested, ANP showed the best overall smear layer removal efficacy (p < 0.05); removal of the smear layer was least effective with the NI technique. The ANP (EndoVac) system can therefore be used as the final irrigation activation technique for effective smear layer removal in curved root canals.

  2. The Effect of EDTA and Citric Acid on Smear Layer Removal of Mesial Canals of First Mandibular Molars, A Scanning Electron Microscopic Study

    Directory of Open Access Journals (Sweden)

    A Khademi


    Background: The purpose of this in vitro study was to determine the effect of EDTA and citric acid on smear layer removal in different regions of root canals. Methods: In this study, the mesial roots of 48 freshly extracted mature human mandibular first molars, with curved mesial roots of about 15-45 degrees and lengths of 20-23 mm, were used. Instrumentation was done using the crown-down technique with hand and rotary filing; NaOCl was used as the irrigant during instrumentation. The teeth were divided into three groups: the mesial canals were irrigated with 17% EDTA in group I, 7% citric acid in group II, and 5.25% NaOCl in group III (the control group). The mesial roots were then split into two parts and studied under scanning electron microscopy. Results: The degrees of cleanliness with 17% EDTA and 7% citric acid were 96.55% and 95%, respectively. Although both solutions seem appropriate, their difference was statistically significant (P<0.05), and EDTA proved better than citric acid, especially in the middle and apical thirds of the canals. Smear layer removal in the apical area was less than in the other areas and was maximum in the middle third; however, removal in the apical area was acceptable in both groups. Conclusion: It seems that both 17% EDTA and 7% citric acid offer the desired results and can remove the smear layer from narrow and curved canals, especially in the apical region. Keywords: EDTA, citric acid, smear layer, irrigation

  3. Scanning electron microscopy analysis of the growth of dental plaque on the surfaces of removable orthodontic aligners after the use of different cleaning methods

    Directory of Open Access Journals (Sweden)

    Levrini L


    Luca Levrini, Francesca Novara, Silvia Margherini, Camilla Tenconi, Mario Raspanti; Department of Surgical and Morphological Sciences, Dental Hygiene School, Research Centre Cranio Facial Disease and Medicine, University of Insubria, Varese, Italy. Background: Advances in orthodontics are leading to the use of minimally invasive technologies, such as transparent removable aligners, which are able to meet high demands in terms of performance and esthetics. However, the most correct method of cleaning these appliances, in order to minimize the effects of microbial colonization, remains to be determined. Purpose: The aim of the present study was to identify the most effective method of cleaning removable orthodontic aligners, analyzing the growth of dental plaque as observed under scanning electron microscopy. Methods: Twelve subjects were selected for the study. All were free from caries and periodontal disease and were candidates for orthodontic therapy with invisible orthodontic aligners. The trial had a duration of 6 weeks, divided into three 2-week stages, during which three sets of aligners were used. In each stage, the subjects were asked to use a different method of cleaning their aligners: (1) running water (the control condition); (2) effervescent tablets containing sodium carbonate and sulfate crystals, followed by brushing with a toothbrush; and (3) brushing alone (with a toothbrush and toothpaste). At the end of each 2-week stage, the surfaces of the aligners were analyzed under scanning electron microscopy. Results: The best results were obtained with brushing combined with the use of sodium carbonate and sulfate crystals; brushing alone gave slightly inferior results. Conclusion: On the basis of previous literature results relating to devices in resin, studies evaluating the reliability of ultrasonic baths for domestic use should be encouraged. At present, pending the availability of experimental evidence, it can be suggested that dental

  4. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam;


    The interest group concept is defined in many different ways in the existing literature, and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample...

  5. Measurement of magnetic fields produced by a "magnetic deflector" for the removal of electron contamination in radiotherapy. (United States)

    Damrongkijudom, N; Oborn, B; Butson, M; Rosenfeld, A


    Electron contamination generated from interactions of x-rays with components in a medical linear accelerator's head can increase damage to skin and subcutaneous tissue during radiotherapy through increased dose deposition. Skin and subcutaneous dose from high energy x-rays can be reduced using magnetic fields to sweep the electron contamination away from the radiation treatment field. This work is aimed at investigating the magnetic fields generated by an improved magnetic deflector which utilizes Nd2Fe14B magnets. Magnetic field strengths generated by the deflector have been simulated using Vizimag 3.0 magnetic modelling software. The improved deflector has a more uniform magnetic field strength than its predecessor and is optimised to easily fit on a clinical linear accelerator. Experimental measurements of the magnetic field strengths produced have also been performed for comparison. Results show a relatively good match to Vizimag modelling in the central regions of the deflector. Reductions of skin and subcutaneous dose of up to 34% of original values were seen for a 20 × 20 cm² field at 6 MV x-ray energy.

  6. A comparative evaluation of different irrigation activation systems on smear layer removal from root canal: An in-vitro scanning electron microscope study

    Directory of Open Access Journals (Sweden)

    Nishi Singh


    Aim: The aim of the following study was to compare different irrigation activation systems, the F-File, CanalBrush (CB), and EndoActivator (EA), in removing the smear layer from root canals. Materials and Methods: The root canals of eighty single-rooted, decoronated premolar teeth were instrumented using the crown-down technique and then equally divided into four groups on the basis of the irrigation activation method used: irrigation without activation (control group), and irrigation activated with the F-File, CB, and EA in Groups I, II, and III, respectively. Samples were then longitudinally sectioned and examined under a scanning electron microscope by three qualified observers using scores from 1 to 4. Data were analyzed using the Statistical Package for Social Sciences (SPSS), version 15.0 (SPSS Inc., Chicago, IL) at a significance level of P ≤ 0.05. Results: The minimum mean score was observed in Group II at the coronal and apical locations; Group III had the minimum score in the middle third. Differences in scores between groups were statistically significant for all three locations as well as for the overall assessment (P < 0.001). Conclusion: The CB removes the smear layer from the root canal more efficiently than the F-File and EA in the coronal and apical regions.

  7. The effect of the temperature changes of EDTA and MTAD on the removal of the smear layer: a scanning electron microscopy study. (United States)

    Çiçek, Ersan; Keskin, Özgür


    The purpose of this study was to determine the effectiveness of EDTA and MTAD at different temperatures as final irrigants to remove the smear layer after the use of 5.25% NaOCl. Seventy-eight human mandibular premolars with single straight canals were prepared by a crown-down technique using rotary 0.06 taper nickel-titanium files. Final irrigation was performed with EDTA and MTAD at different temperatures. The removal of the smear layer at the coronal, middle, and apical levels of each canal was examined under a scanning electron microscope. No difference was found between EDTA and MTAD at 4°C, 25°C, and 37°C, regardless of the canal level (coronal, middle, and apical) (P = 0.286). In the EDTA-25, EDTA-37, MTAD-25, and MTAD-37 groups, the differences among the coronal, middle, and apical levels were not statistically significant (P > 0.05). Our findings showed that EDTA and MTAD at 25°C and 37°C are more effective than EDTA and MTAD at 4°C, even at the apical level.

  8. Effectiveness of different irrigation techniques on smear layer removal in apical thirds of mesial root canals of permanent mandibular first molar: A scanning electron microscopic study

    Directory of Open Access Journals (Sweden)

    Pranav Khaord


    Aim: The aim of this study was to compare smear layer removal after final irrigant activation with sonic irrigation (SI), manual dynamic agitation (MDA), passive ultrasonic irrigation (PUI), and conventional syringe irrigation (CI). Materials and Methods: Forty mesial canals of mandibular first molars (mesial roots) were cleaned and shaped using the ProTaper system to size F1, with 3% sodium hypochlorite and 17% ethylenediaminetetraacetic acid. The specimens were divided into 4 equal groups (n = 10) according to the final irrigation activation technique: group 1, PUI; group 2, MDA; group 3, SI; and group 4, the control group (conventional syringe irrigation). Samples were split longitudinally and examined under a scanning electron microscope for the presence of smear layer. Results: The control group had the highest smear scores, with the statistically significant highest mean score at P < 0.05, followed by ultrasonic, MDA, and finally sonic activation, with no significant differences among the latter three. Conclusions: Final irrigant activation with SI and MDA resulted in better removal of the smear layer than CI.

  9. Injector for CESAR (2 MeV electron storage ring): 2-beam, 2 MV van de Graaff generator; tank removed.

    CERN Multimedia


    The van de Graaff generator in its tank. For voltage-holding, the tank was filled with pressurized extra-dry nitrogen. 2 beams emanated from 2 separate electron-guns. The left beam, for injection into the CESAR ring, was pulsed at 50 Hz, with currents of up to 1 A for 400 ns. The right beam was sent to a spectrometer line. Its pulselength was also 400 ns, but the pulse current was 12 microA, at a rate variable from 50 kHz to 1 MHz. This allowed stabilization of the top-terminal voltage to an unprecedented stability of ±100 V, i.e. 6×10⁻⁵. Although built for a nominal voltage of 2 MV, the operational voltage was limited to 1.75 MV in order to minimize voltage break-down events. CESAR was terminated at the end of 1967 and dismantled in 1968. R. Nettleton (left) and H. Burridge (right) are preparing the van de Graaff for shipment to the University of Swansea.

  10. Improving hole injection and carrier distribution in InGaN light-emitting diodes by removing the electron blocking layer and including a unique last quantum barrier

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Liwen, E-mail:; Chen, Haitao; Wu, Shudong [College of Physics Science and Technology & Institute of Optoelectronic Technology, Yangzhou University, Yangzhou 225002 (China)


    The effects of removing the AlGaN electron blocking layer (EBL), and using a last quantum barrier (LQB) with a unique design in conventional blue InGaN light-emitting diodes (LEDs), were investigated through simulations. Compared with the conventional LED design that contained a GaN LQB and an AlGaN EBL, the LED that contained an AlGaN LQB with a graded-composition and no EBL exhibited enhanced optical performance and less efficiency droop. This effect was caused by an enhanced electron confinement and hole injection efficiency. Furthermore, when the AlGaN LQB was replaced with a triangular graded-composition, the performance improved further and the efficiency droop was lowered. The simulation results indicated that the enhanced hole injection efficiency and uniform distribution of carriers observed in the quantum wells were caused by the smoothing and thinning of the potential barrier for the holes. This allowed a greater number of holes to tunnel into the quantum wells from the p-type regions in the proposed LED structure.

  11. 75 FR 707 - Classified National Security Information (United States)


    ... National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order ``Classified National Security Information'' Order of December 29, 2009--Original Classification Authority #0... 13526 of December 29, 2009 Classified National Security Information This order prescribes a...

  12. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind


    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows designing novel classifier models that are hardware-efficiently implementable. The performances of novel classifiers using substitutes of the MFPC's geometric mean aggregator are benchmarked against the MFPC in the scope of an image processing application, to reveal the potential for obtaining higher classification rates.

  13. 15 CFR 4.8 - Classified Information. (United States)


    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  14. Hair Removal (United States)

    ... recommend an electrologist with the proper credentials. Laser Hair Removal: How It Works: A laser is directed through ...

  15. Effect of diode laser and ultrasonics with and without ethylenediaminetetraacetic acid on smear layer removal from the root canals: A scanning electron microscope study (United States)

    Amin, Khalid; Masoodi, Ajaz; Nabi, Shahnaz; Ahmad, Parvaiz; Farooq, Riyaz; Purra, Aamir Rashid; Ahangar, Fayaz Ahmad


    Aim: To evaluate the effect of diode laser and ultrasonics, with and without ethylenediaminetetraacetic acid (EDTA), on smear layer removal from root canals. Materials and Methods: A total of 120 mandibular premolars were decoronated to a working length of 12 mm and prepared with ProTaper rotary files up to size F3. Group A canals were irrigated with 1 ml of 3% sodium hypochlorite (NaOCl) followed by 3 ml of 3% NaOCl. Group B canals were irrigated with 1 ml of 17% EDTA followed by 3 ml of 3% NaOCl. Group C canals were lased with a diode laser. Group D canals were initially irrigated with 0.8 ml of 17% EDTA; the remaining 0.2 ml was used to fill the root canals, and the diode laser was then applied. Group E canals were irrigated with 1 ml of distilled water with passive ultrasonic activation, followed by 3 ml of 3% NaOCl. Group F canals were irrigated with 1 ml of EDTA with passive ultrasonic activation, followed by 3 ml of 3% NaOCl. Scanning electron microscope examination of the canals was done for remaining smear layer at the coronal, middle, and apical third levels. Results: Ultrasonics with EDTA had the lowest smear layer scores. Conclusion: The diode laser alone performed significantly better than ultrasonics alone. PMID:27656060

  16. 22 CFR 125.3 - Exports of classified technical data and classified defense articles. (United States)


    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Exports of classified technical data and... IN ARMS REGULATIONS LICENSES FOR THE EXPORT OF TECHNICAL DATA AND CLASSIFIED DEFENSE ARTICLES § 125.3 Exports of classified technical data and classified defense articles. (a) A request for authority...

  17. Pavement Crack Classifiers: A Comparative Study

    Directory of Open Access Journals (Sweden)

    S. Siddharth


    Non-Destructive Testing (NDT) is an analysis technique used to inspect metal sheets and components without harming the product. NDT causes no change to the part after inspection; the technique saves money and time in product evaluation, research, and troubleshooting. In this study the objective is to perform NDT using soft computing techniques. Digital images are taken, and the Gray Level Co-occurrence Matrix (GLCM) extracts features from these images. The extracted features are then fed into classifiers, which sort the images into those with and without cracks. Three major classifiers are used for the classification: neural networks, Support Vector Machines (SVM), and linear classifiers. The performances of these classifiers are assessed and the best classifier for the given data is chosen.
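    The GLCM-plus-classifier pipeline described above can be illustrated with a small sketch: build a co-occurrence matrix over horizontal neighbor pairs, derive a single contrast feature, and threshold it. The images, gray-level range, and threshold below are toy assumptions, not the study's data or trained classifiers:

```python
# Gray Level Co-occurrence Matrix (GLCM) features for crack detection,
# sketched in pure Python on tiny synthetic "images" (gray levels 0..3).
# Feature: contrast = sum_ij P[i][j] * (i - j)^2 over horizontal pairs.

def glcm_contrast(img, levels=4):
    counts = [[0] * levels for _ in range(levels)]
    n = 0
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontal neighbor pairs
            counts[a][b] += 1
            n += 1
    return sum(counts[i][j] * (i - j) ** 2
               for i in range(levels) for j in range(levels)) / n

# A dark crack line through a bright background yields high contrast.
cracked = [[3, 3, 0, 3], [3, 0, 3, 3], [0, 3, 3, 3]]
smooth = [[3, 3, 3, 3], [3, 3, 3, 3], [3, 3, 3, 3]]

# Minimal linear "classifier": threshold on the contrast feature.
def classify(img, threshold=1.0):
    return "crack" if glcm_contrast(img) > threshold else "no crack"
```

    A real pipeline would extract several GLCM statistics (contrast, energy, homogeneity, correlation) at multiple offsets and feed the feature vector to a trained SVM or neural network rather than a fixed threshold.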

  18. Comparing different classifiers for automatic age estimation. (United States)

    Lanitis, Andreas; Draganova, Chrisina; Christodoulou, Chris


    We describe a quantitative evaluation of the performance of different classifiers in the task of automatic age estimation. In this context, we generate a statistical model of facial appearance, which is subsequently used as the basis for obtaining a compact parametric description of face images. The aim of our work is to design classifiers that accept the model-based representation of unseen images and produce an estimate of the age of the person in the corresponding face image. For this application, we have tested different classifiers: a classifier based on the use of quadratic functions for modeling the relationship between face model parameters and age, a shortest distance classifier, and artificial neural network based classifiers. We also describe variations to the basic method where we use age-specific and/or appearance specific age estimation methods. In this context, we use age estimation classifiers for each age group and/or classifiers for different clusters of subjects within our training set. In those cases, part of the classification procedure is devoted to choosing the most appropriate classifier for the subject/age range in question, so that more accurate age estimates can be obtained. We also present comparative results concerning the performance of humans and computers in the task of age estimation. Our results indicate that machines can estimate the age of a person almost as reliably as humans.

  19. Tick Removal (United States)

    ... ticks Tickborne diseases abroad Borrelia miyamotoi Borrelia mayonii Tick Removal If ... a tick quite effectively. How to remove a tick: Use fine-tipped tweezers to grasp the tick ...

  20. A review of learning vector quantization classifiers

    CERN Document Server

    Nova, David


    In this work we present a review of the state of the art of Learning Vector Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.

  1. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond


    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  2. DFRFT: A Classified Review of Recent Methods with Its Application

    Directory of Open Access Journals (Sweden)

    Ashutosh Kumar Singh


    In the literature, various algorithms are available for computing the discrete fractional Fourier transform (DFRFT). In this paper, all the existing methods are reviewed, classified into four categories, and subsequently compared to find the best alternative from the viewpoint of minimal computational error, computational complexity, transform features, and additional features such as security. Subsequently, the correlation theorem of the FRFT is utilized to significantly remove the Doppler shift caused by the motion of the receiver in the DSB-SC AM signal. Finally, the role of the DFRFT in the area of steganography is investigated.

  3. Logarithmic learning for generalized classifier neural network. (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu


    The generalized classifier neural network has been introduced as an efficient classifier among its peers. However, unless the initial smoothing parameter value is close to the optimal one, it suffers from convergence problems and requires quite a long time to converge. In this work, a logarithmic learning approach is proposed to overcome this problem. The proposed method uses a logarithmic cost function instead of the squared error; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method was tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network was compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost function and its derivative take continuous values, which makes it possible to exploit the fast convergence of the logarithmic cost. Owing to this fast convergence, training time is reduced by up to 99.2%, and classification performance may also be improved by up to 60%. According to the test results, the proposed method not only addresses the training-time problem of the generalized classifier neural network but may also improve classification accuracy.
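    The core substitution, a logarithmic cost in place of the squared error inside a gradient-descent update, can be sketched on a toy one-parameter problem. The model, data, and learning rate are illustrative assumptions, not the paper's generalized classifier neural network:

```python
# Gradient descent on a single parameter w with a swappable cost on the
# error e = w - target: squared cost e^2 versus logarithmic cost
# log(1 + e^2). Only the cost gradient changes between the two runs.

def train(cost_grad, w=5.0, target=1.0, lr=0.4, tol=1e-6, max_iter=10000):
    for step in range(max_iter):
        e = w - target
        if abs(e) < tol:
            return w, step
        w -= lr * cost_grad(e)
    return w, max_iter

squared_grad = lambda e: 2 * e                 # d/de of e^2
log_grad = lambda e: 2 * e / (1 + e ** 2)      # d/de of log(1 + e^2)

w_sq, steps_sq = train(squared_grad)
w_log, steps_log = train(log_grad)
```

    Both runs converge to the target; the sketch only shows the structural change of swapping the cost gradient, not the paper's reported speedups, which depend on the full network and data sets.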

  4. A Sequential Algorithm for Training Text Classifiers

    CERN Document Server

    Lewis, D D; Lewis, David D.; Gale, William A.


    The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
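    Uncertainty sampling as described can be sketched with a toy one-dimensional learner: at each round, request a label for the pooled example closest to the current decision boundary instead of labeling at random. The threshold model, pool, and oracle below are hypothetical stand-ins for the statistical text classifiers in the paper:

```python
# Uncertainty sampling: repeatedly fit a classifier on the labeled set,
# then label the pooled example the classifier is least certain about.

def fit_threshold(labeled):
    """Midpoint between class means; a stand-in for a real learner."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def uncertainty_sampling(pool, oracle, seed, rounds):
    labeled = list(seed)
    pool = list(pool)
    for _ in range(rounds):
        t = fit_threshold(labeled)
        x = min(pool, key=lambda v: abs(v - t))  # most uncertain point
        pool.remove(x)
        labeled.append((x, oracle(x)))           # ask annotator for label
    return fit_threshold(labeled)

oracle = lambda x: 1 if x >= 0.35 else 0     # hidden true boundary
pool = [i / 20 for i in range(21)]           # candidates 0.0 .. 1.0
seed = [(0.0, 0), (1.0, 1)]                  # two initially labeled points
t = uncertainty_sampling(pool, oracle, seed, rounds=6)
```

    After a handful of queries, the labeled points cluster near the true boundary, which is why the method needs far fewer annotations than random sampling to reach the same accuracy.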

  5. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha


    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and that sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage alone).

  6. An Efficient and Effective Immune Based Classifier

    Directory of Open Access Journals (Sweden)

    Shahram Golzari


    Problem statement: The Artificial Immune Recognition System (AIRS) is the most popular and effective immune-inspired classifier. Resource competition is one stage of AIRS and is carried out on the basis of the number of allocated resources. AIRS uses a linear method to allocate resources, and this linear resource allocation increases the training time of the classifier. Approach: In this study, a new nonlinear resource allocation method is proposed to make AIRS more efficient. The new algorithm, AIRS with the proposed nonlinear method, was tested on benchmark datasets from the UCI machine learning repository. Results: Based on the experimental results, the proposed nonlinear resource allocation method decreases the training time and the number of memory cells without reducing the accuracy of AIRS. Conclusion: The proposed classifier is efficient and effective.
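    The contrast between linear and nonlinear resource allocation can be sketched as follows, with resources distributed in proportion to a power of each cell's stimulation. The formulas are illustrative assumptions, not the paper's exact allocation rule; a higher power concentrates resources on strongly stimulated cells, so weakly stimulated cells are pruned sooner:

```python
# Resource allocation in an AIRS-style competition step: each memory-cell
# candidate receives a share of a fixed resource budget in proportion to
# a power of its stimulation level. power=1 is the linear scheme;
# power>1 is a simple nonlinear alternative (an assumed illustration).

def allocate(stimulations, total=100.0, power=1.0):
    weights = [s ** power for s in stimulations]
    z = sum(weights)
    return [total * w / z for w in weights]

stim = [0.9, 0.5, 0.1]                       # toy stimulation levels
linear = allocate(stim, power=1.0)
nonlinear = allocate(stim, power=3.0)
```

    Under the nonlinear scheme, the weakest cell's share collapses toward zero, so it is removed from the competition in fewer iterations, which is one plausible way a nonlinear rule can shorten training.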

  7. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; safayani, Meharn


    The decision boundaries of the Bayes classifier are optimal because they lead to the maximum probability of correct decision. This means that if we knew the prior probabilities and the class-conditional densities, we could design a classifier which gives the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as window width. Moreover, these methods suffer from the curse of dimensionality of the feature space and the small sample size problem, which severely restrict their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and classifier parameters based on local information. The proposed method can classify the data with co...
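    The Parzen-window dependence on window width that the abstract highlights can be seen in a minimal one-dimensional sketch with a Gaussian kernel. The sample points and width below are invented for illustration.

```python
# A brief sketch of the Parzen-window density estimate the abstract refers
# to, in one dimension with a Gaussian kernel: p(x) is the average of kernels
# centred on the training points, and the window width h is the parameter
# that the resulting decision regions depend on.

import math

def parzen_density(x, samples, h):
    """Kernel density estimate p(x) = (1 / n*h) * sum_i K((x - x_i) / h)."""
    kernel = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(kernel((x - xi) / h) for xi in samples) / (len(samples) * h)

samples = [0.0, 0.2, 0.4, 2.0]  # made-up training points
# A wider window h smooths the estimate; a classifier built on such
# estimates shifts its decision regions with this choice.
print(round(parzen_density(0.2, samples, 0.5), 3))
```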

  8. Classifying Genomic Sequences by Sequence Feature Analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Liu; Dian Jiao; Xiao Sun


    Traditional sequence analysis depends on sequence alignment. In this study, we analyzed various functional regions of the human genome based on sequence features, including word frequency, dinucleotide relative abundance, and base-base correlation. We analyzed human chromosome 22 and classified the upstream, exon, intron, downstream, and intergenic regions by principal component analysis and discriminant analysis of these features. The results show that we could classify the functional regions of the genome based on sequence features and discriminant analysis.
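    Of the features listed, dinucleotide relative abundance has a standard definition, rho(XY) = f(XY) / (f(X) * f(Y)), where f denotes observed frequency. A minimal sketch, using a made-up sequence:

```python
# Minimal sketch of one feature named in the abstract: dinucleotide relative
# abundance, rho(XY) = f(XY) / (f(X) * f(Y)). Values near 1 mean the
# dinucleotide occurs about as often as base composition alone predicts.

def relative_abundance(seq, x, y):
    n = len(seq)
    f_x = seq.count(x) / n
    f_y = seq.count(y) / n
    dinucs = [seq[i:i + 2] for i in range(n - 1)]  # overlapping dinucleotides
    f_xy = dinucs.count(x + y) / len(dinucs)
    return f_xy / (f_x * f_y)

seq = "ACGTACGTCGCG"  # toy sequence, not real genomic data
print(round(relative_abundance(seq, "C", "G"), 3))
```

    Vectors of such abundances over all 16 dinucleotides are the kind of input the principal component and discriminant analyses above would operate on.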

  9. Searching and Classifying non-textual information


    Arentz, Will Archer


    This dissertation contains a set of contributions that deal with search or classification of non-textual information. Each contribution can be considered a solution to a specific problem, in an attempt to map out a common ground. The problems cover a wide range of research fields, including search in music, classifying digitally sampled music, visualization and navigation in search results, and classifying images and Internet sites. On classification of digitally sampled music, a method for ex...

  10. Classifying the Quantum Phases of Matter (United States)


    Classifying the Quantum Phases of Matter. California Institute of Technology, final technical report, January 2015; dates covered January 2012 - August 2014; contract number FA8750-12-2. References include arXiv:1305.2176 and J. Haah, Lattice quantum codes and exotic topological phases of matter, arXiv:1305.6973.


    Institute of Scientific and Technical Information of China (English)

    Bhekisipho TWALA


    Credit risk prediction models seek to predict quality factors such as whether an individual will default on a loan (bad applicant) or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems, including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensembles of individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predicted behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy could be improved by using pairs of classifier ensembles. Benchmarking results on five credit datasets and comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy.
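    The voting scheme the abstract describes reduces to a majority vote over base-classifier predictions. A minimal sketch with stand-in labels (no real models are trained here):

```python
# Minimal sketch of the ensemble idea from the abstract: several base
# classifiers vote, and the most popular class wins. The votes below are
# stand-in labels, not predictions from real credit-scoring models.

from collections import Counter

def majority_vote(predictions):
    """predictions: one predicted label per base classifier."""
    return Counter(predictions).most_common(1)[0][0]

# Five hypothetical classifiers scoring one loan applicant:
votes = ["good", "bad", "good", "good", "bad"]
print(majority_vote(votes))  # "good" wins 3-2
```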

  12. A multi-class large margin classifier

    Institute of Scientific and Technical Information of China (English)

    Liang TANG; Qi XUAN; Rong XIONG; Tie-jun WU; Jian CHU


    Currently there are two approaches for a multi-class support vector classifier (SVC). One is to construct and combine several binary classifiers while the other is to directly consider all classes of data in one optimization formulation. For a K-class problem (K>2), the first approach has to construct at least K classifiers, and the second approach has to solve a much larger optimization problem proportional to K by the algorithms developed so far. In this paper, following the second approach, we present a novel multi-class large margin classifier (MLMC). This new machine can solve K-class problems in one optimization formulation without increasing the size of the quadratic programming (QP) problem proportional to K. This property allows us to construct just one classifier with as few variables in the QP problem as possible to classify multi-class data, and we can gain the advantage of speed from it especially when K is large. Our experiments indicate that MLMC almost works as well as (sometimes better than) many other multi-class SVCs for some benchmark data classification problems, and obtains a reasonable performance in face recognition application on the AR face database.

  13. Classifying prosthetic use via accelerometry in persons with transtibial amputations

    Directory of Open Access Journals (Sweden)

    Morgan T. Redfield, MSEE


    Full Text Available Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.

  14. Classifying prosthetic use via accelerometry in persons with transtibial amputations. (United States)

    Redfield, Morgan T; Cagle, John C; Hafner, Brian J; Sanders, Joan E


    Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.
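    The binary decision tree described above can be caricatured as a chain of threshold tests over summary features of an accelerometer epoch. The features and cut-offs below are invented for illustration; the study's actual tree is not reproduced here.

```python
# Hypothetical sketch of a binary decision tree in the spirit of the study.
# Feature names and thresholds are invented; the paper's actual tree and
# cut-offs are not given in the abstract.

def classify_epoch(mean_accel_g, accel_variance):
    if mean_accel_g < 0.1:          # negligible signal: prosthesis not worn
        return "not worn"
    if accel_variance > 0.05:       # large fluctuations: leg in motion
        return "movement"
    if mean_accel_g > 0.9:          # gravity along the pylon axis: upright
        return "standing"
    return "sitting"

print(classify_epoch(1.0, 0.20))  # movement
print(classify_epoch(1.0, 0.01))  # standing
print(classify_epoch(0.6, 0.01))  # sitting
```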

  15. What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

    CERN Document Server

    Hu, Bao-Gang


    In this study, both Bayesian classifiers and mutual information classifiers are examined for binary classifications with or without a reject option. The general decision rules in terms of distinctions on error types and reject types are derived for Bayesian classifiers. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of "non-consistency" for interpreting cost terms. If no data is given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the theoretical differences, including the extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-info...
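    The reject option under discussion amounts to abstaining when the winning posterior is not confident enough. A simplified sketch, with an arbitrarily chosen threshold:

```python
# Illustrative decision rule with a reject option (a simplification of the
# setting the paper analyses): predict the class with the largest posterior,
# but abstain when that posterior falls below a threshold.

def decide(posteriors, reject_threshold=0.8):
    """posteriors: dict mapping class label -> P(class | x)."""
    best = max(posteriors, key=posteriors.get)
    if posteriors[best] < reject_threshold:
        return "reject"
    return best

print(decide({"positive": 0.95, "negative": 0.05}))  # confident: positive
print(decide({"positive": 0.60, "negative": 0.40}))  # ambiguous: reject
```

    The paper's point is that choosing such thresholds via cost terms is under-determined, whereas a mutual-information criterion fixes the trade-off from the data itself.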

  16. A Numerical Technique for Removing Residual Gate-Source Capacitances When Extracting Parasitic Inductance for GaN High Electron Mobility Transistors (HEMTs) (United States)


    Benjamin Huebschman and Pankaj... As gallium nitride (GaN) high electron mobility transistors (HEMTs) begin to realize their performance potential and to transition from experimental devices to...

  17. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan


    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  18. Evolving Classifiers: Methods for Incremental Learning

    CERN Document Server

    Hulley, Greg


    The ability of a classifier to take on new information and classes by evolving without being fully retrained is known as incremental learning. Incremental learning has been successfully applied to many classification problems where the data is changing and is not all available at once. This paper compares Learn++, one of the most recent incremental learning algorithms, with the newly proposed method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has shown good incremental learning capabilities on benchmark datasets, on which the new ILUGA method has been tested. ILUGA has also shown good incremental learning ability using only a few classifiers and does not suffer from catastrophic forgetting. The results obtained for ILUGA on the Optical Character Recognition (OCR) and Wine datasets are good, with overall accuracies of 93% and 94% respectively, showing a 4% improvement over Learn++.MT for the difficult multi-class OCR dataset.

  19. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose


    Full Text Available One of the widely used methods for classification that is a decision-making process is artificial immune systems. Artificial immune systems based on natural immunity system can be successfully applied for classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibody with immune operators. The proposed new approach has many contributions according to other methods in the literature such as effectiveness, less memory cell, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and FPGA. Some benchmark data and remote image data are used for experimental results. The comparative results with supervised/unsupervised based artificial immune system, negative selection classifier, and resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method.

  20. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon


    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...
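    A drastically simplified, non-variational sketch of the combination idea: assume base classifiers are independent given the true class and weight each vote by a per-classifier confusion matrix. All numbers below are invented; the paper itself uses full variational Bayesian inference rather than this naive product rule.

```python
# Simplified illustration of Bayesian classifier combination: each base
# classifier's vote is weighted by its (assumed known) confusion matrix,
# treating classifiers as independent given the true class.

def combine(votes, confusions, prior):
    """votes[k]: label output by classifier k.
    confusions[k][true][observed]: P(classifier k says observed | true class).
    prior[c]: P(true class = c). Returns the normalized posterior."""
    posterior = {}
    for c in prior:
        p = prior[c]
        for k, v in enumerate(votes):
            p *= confusions[k][c][v]
        posterior[c] = p
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

# Two hypothetical annotators judging a supernova candidate:
conf_reliable = {"SN": {"SN": 0.9, "no": 0.1}, "no": {"SN": 0.2, "no": 0.8}}
conf_noisy    = {"SN": {"SN": 0.6, "no": 0.4}, "no": {"SN": 0.5, "no": 0.5}}
post = combine(["SN", "no"], [conf_reliable, conf_noisy],
               {"SN": 0.5, "no": 0.5})
print(round(post["SN"], 3))
```

    The reliable annotator's "SN" vote outweighs the noisy annotator's disagreement, which is the behaviour the inferred per-decision-maker models provide in the full method.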

  1. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang


    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts specific to one or more domains is necessary. We have developed a customizable text classifier for users to mine such collections automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the standard with which to choose the texts, based on demand and abundance of materials. The performance of the classifier varies with the user's choice.

  2. A survey of decision tree classifier methodology (United States)

    Safavian, S. R.; Landgrebe, David


    Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-state classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.
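    The survey's central point, that a DTC decomposes a complex decision into a collection of simple tests, can be shown with a toy tree over hypothetical radar-signal features (the feature names and thresholds are invented for illustration):

```python
# Toy illustration of the DTC idea the survey describes: a complex decision
# is broken into a chain of simple threshold tests. Features, thresholds,
# and class labels are hypothetical.

tree = {
    "test": ("bandwidth_mhz", 5.0),          # (feature, threshold) at this node
    "low":  {"leaf": "narrowband"},
    "high": {"test": ("pulse_rate_khz", 1.0),
             "low":  {"leaf": "radar_search"},
             "high": {"leaf": "radar_track"}},
}

def classify(node, sample):
    """Walk from the root to a leaf, taking the branch each test selects."""
    while "leaf" not in node:
        feature, threshold = node["test"]
        node = node["low"] if sample[feature] <= threshold else node["high"]
    return node["leaf"]

print(classify(tree, {"bandwidth_mhz": 8.0, "pulse_rate_khz": 2.5}))
```

    Each internal node is interpretable on its own, which is the ease-of-interpretation advantage the survey emphasizes.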

  3. Tattoo removal. (United States)

    Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe


    Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons, though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as can the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The first evidence of tattoo removal attempts was found in Egyptian mummies dated to approximately 4000 BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. In addition to their selective absorption by the pigment, the extremely short pulse duration of Q-switched lasers has made them the gold standard for tattoo removal.

  4. Visual Classifier Training for Text Document Retrieval. (United States)

    Heimerl, F; Koch, S; Bosch, H; Ertl, T


    Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.

  5. Classifying Finitely Generated Indecomposable RA Loops

    CERN Document Server

    Cornelissen, Mariana


    In 1995, E. Jespers, G. Leal and C. Polcino Milies classified all finite ring alternative loops (RA loops for short) which are not direct products of proper subloops. In this paper we extend this result to finitely generated RA loops and provide an explicit description of all such loops.

  6. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.


    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual features...

  7. Neural Classifier Construction using Regularization, Pruning

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan;


    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction...

  8. Large margin classifier-based ensemble tracking (United States)

    Wang, Yuru; Liu, Qiaoyuan; Yin, Minghao; Wang, ShengSheng


    In recent years, many studies have treated visual tracking as a two-class classification problem. The key problem is to construct a classifier with sufficient accuracy in distinguishing the target from its background and sufficient generalization ability in handling new frames. However, variable tracking conditions challenge the existing methods. The difficulty mainly comes from the confused boundary between the foreground and background. This paper handles this difficulty by generalizing the classifier's learning step. By introducing the distribution of the samples, the classifier learns more essential characteristics for discriminating the two classes. Specifically, the samples are represented in a multiscale visual model. For features with different scales, several large margin distribution machines (LDMs) with adaptive kernels are combined in a Bayesian way as a strong classifier, where, in order to improve accuracy and generalization ability, not only the margin distance but also the sample distribution is optimized in the learning step. Comprehensive experiments are performed on several challenging video sequences; through parameter analysis and field comparison, the proposed LDM-combined ensemble tracker is demonstrated to perform with sufficient accuracy and generalization ability in handling various typical tracking difficulties.

  9. Design and evaluation of neural classifiers

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Pedersen, Morten With; Hansen, Lars Kai;


    In this paper we propose a method for the design of feedforward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme we derive a modified form of the entropy error measure and an algebraic estimate of the test error. In conjunction...

  10. Adaptively robust filtering with classified adaptive factors

    Institute of Scientific and Technical Information of China (English)

    CUI Xianqiang; YANG Yuanxi


    The key problems in applying the adaptively robust filtering to navigation are to establish an equivalent weight matrix for the measurements and a suitable adaptive factor for balancing the contributions of the measurements and the predicted state information to the state parameter estimates. In this paper, an adaptively robust filtering with classified adaptive factors was proposed, based on the principles of the adaptively robust filtering and bi-factor robust estimation for correlated observations. According to the constant velocity model of Kalman filtering, the state parameter vector was divided into two groups, namely position and velocity. The estimator of the adaptively robust filtering with classified adaptive factors was derived, and the calculation expressions of the classified adaptive factors were presented. Test results show that the adaptively robust filtering with classified adaptive factors is not only robust in controlling the measurement outliers and the kinematic state disturbing but also reasonable in balancing the contributions of the predicted position and velocity, respectively, and its filtering accuracy is superior to the adaptively robust filter with single adaptive factor based on the discrepancy of the predicted position or the predicted velocity.

  11. Face detection by aggregated Bayesian network classifiers

    NARCIS (Netherlands)

    Pham, T.V.; Worring, M.; Smeulders, A.W.M.


    A face detection system is presented. A new classification method using forest-structured Bayesian networks is used. The method is used in an aggregated classifier to discriminate face from non-face patterns. The process of generating non-face patterns is integrated with the construction of the aggr

  12. MScanner: a classifier for retrieving Medline citations

    Directory of Open Access Journals (Sweden)

    Altman Russ B


    Full Text Available Abstract Background Keyword searching through PubMed and other systems is the standard means of retrieving information from Medline. However, ad-hoc retrieval systems do not meet all of the needs of databases that curate information from literature, or of text miners developing a corpus on a topic that has many terms indicative of relevance. Several databases have developed supervised learning methods that operate on a filtered subset of Medline, to classify Medline records so that fewer articles have to be manually reviewed for relevance. A few studies have considered generalisation of Medline classification to operate on the entire Medline database in a non-domain-specific manner, but existing applications lack speed, available implementations, or a means to measure performance in new domains. Results MScanner is an implementation of a Bayesian classifier that provides a simple web interface for submitting a corpus of relevant training examples in the form of PubMed IDs and returning results ranked by decreasing probability of relevance. For maximum speed it uses the Medical Subject Headings (MeSH) and journal of publication as a concise document representation, and takes roughly 90 seconds to return results against the 16 million records in Medline. The web interface provides interactive exploration of the results, and cross-validated performance evaluation on the relevant input against a random subset of Medline. We describe the classifier implementation, cross validate it on three domain-specific topics, and compare its performance to that of an expert PubMed query for a complex topic. In cross validation on the three sample topics against 100,000 random articles, the classifier achieved excellent separation of relevant and irrelevant article score distributions, ROC areas between 0.97 and 0.99, and averaged precision between 0.69 and 0.92.
Conclusion MScanner is an effective non-domain-specific classifier that operates on the entire Medline
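    The ranking step described above is essentially naive-Bayes scoring over concise document features. The sketch below uses invented MeSH-term probabilities and is not MScanner's actual implementation.

```python
# Illustrative sketch of ranking documents by probability of relevance:
# sum per-feature log likelihood ratios log P(f | relevant) / P(f | irrelevant),
# naive-Bayes style. Feature probabilities below are invented.

import math

def score(doc_features, p_rel, p_irr):
    """Higher scores mean the feature profile better matches the relevant set."""
    return sum(math.log(p_rel[f] / p_irr[f]) for f in doc_features)

# Hypothetical MeSH-term probabilities estimated from training corpora:
p_rel = {"Pharmacogenetics": 0.30, "Humans": 0.90}
p_irr = {"Pharmacogenetics": 0.01, "Humans": 0.80}

doc = {"Pharmacogenetics", "Humans"}
print(round(score(doc, p_rel, p_irr), 3))
```

    Ranking all of Medline then reduces to computing this sum per record and sorting, which is what makes a concise feature representation so important for speed.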

  13. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service. (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard


    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  14. Object localization based on smoothing preprocessing and cascade classifier (United States)

    Zhang, Xingfu; Liu, Lei; Zhao, Feng


    An improved algorithm for image localization is proposed in this paper. Firstly, the image is smoothed and partial noise is removed. Then a cascade classifier is used to train a template. Finally, the template is used to detect related images. The advantages of the algorithm are that it is robust to noise, insensitive to changes in image proportion, and fast to compute. In this paper, a real truck-bottom picture is chosen as the experimental object. Images of normal components and faulty components are all included in the image sample. Experimental results show that the accuracy rate is more than 90 percent when the grade is more than 40, so the algorithm proposed in this paper can be applied to practical image localization projects.
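    The cascade idea, cheap tests first with early rejection of non-matching windows, can be sketched generically. The stage functions and thresholds below are invented and unrelated to the paper's trained classifier.

```python
# Minimal sketch of a cascade classifier for localization: each stage is a
# cheap test, and a candidate window survives only if it passes every stage.
# Stage score functions and thresholds here are invented for illustration.

def cascade_detect(window, stages):
    """stages: list of (score_fn, threshold); reject at the first failure."""
    # all() short-circuits, so expensive later stages run only on survivors.
    return all(score(window) >= threshold for score, threshold in stages)

stages = [
    (lambda w: w["edge_density"], 0.2),    # stage 1: cheap edge test
    (lambda w: w["template_match"], 0.7),  # stage 2: costlier template score
]

print(cascade_detect({"edge_density": 0.5, "template_match": 0.9}, stages))  # True
print(cascade_detect({"edge_density": 0.1, "template_match": 0.9}, stages))  # False
```

    Smoothing the image first, as the paper does, reduces spurious responses in the cheap early stages, so fewer noise windows reach the expensive ones.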

  15. Automated morphological analysis approach for classifying colorectal microscopic images (United States)

    Marghani, Khaled A.; Dlay, Satnam S.; Sharif, Bayan S.; Sims, Andrew J.


    Automated medical image diagnosis using quantitative measurements is extremely helpful for cancer prognosis to reach a high degree of accuracy and thus make reliable decisions. In this paper, six morphological features based on texture analysis were studied in order to categorize normal and cancerous colon mucosa. They were derived after a series of pre-processing steps to generate a set of different shape measurements. Based on shape and size, six features known as Euler Number, Equivalent Diameter, Solidity, Extent, Elongation, and Shape Factor AR were extracted. Mathematical morphology is used firstly to remove background noise from segmented images and then to obtain different morphological measures to describe the shape, size, and texture of colon glands. The proposed automated system is tested by classifying 102 microscopic samples of colorectal tissues, which consist of 44 normal colon mucosa and 58 cancerous samples. The results were first statistically evaluated using the one-way ANOVA method in order to examine the significance of each extracted feature. Significant features were then selected in order to classify the dataset into two categories. Finally, using two discrimination methods, a linear method and k-means clustering, important classification factors were estimated. In brief, this study demonstrates that abnormalities in low-power tissue morphology can be distinguished using quantitative image analysis. This investigation shows the potential of an automated vision system in histopathology. Furthermore, it has the advantage of being objective and, more importantly, of being a valuable diagnostic decision support tool.
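    Two of the measures named above have standard formulas that can be sketched directly; the paper's exact definitions may differ from these common ones.

```python
# Sketch of two shape measures in their standard forms (the paper's exact
# formulas may differ): extent = area / bounding-box area, and a
# circularity-style shape factor 4*pi*area / perimeter**2.

import math

def shape_factor(area, perimeter):
    """1.0 for a perfect circle; smaller for more irregular boundaries."""
    return 4 * math.pi * area / perimeter ** 2

def extent(area, bbox_w, bbox_h):
    """Fraction of the bounding box the region fills."""
    return area / (bbox_w * bbox_h)

# A 10 x 10 square region:
print(round(shape_factor(100, 40), 3))   # pi/4 ~ 0.785 for a square
print(extent(100, 10, 10))               # fills its bounding box: 1.0
```

    Irregular, infiltrative gland boundaries in cancerous tissue depress such circularity-style measures relative to normal glands, which is what makes these features discriminative.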

  16. Hair removal

    DEFF Research Database (Denmark)

    Haedersdal, Merete; Haak, Christina S


    Hair removal with optical devices has become a popular mainstream treatment that today is considered the most efficient method for the reduction of unwanted hair. Photothermal destruction of hair follicles constitutes the fundamental concept of hair removal with red and near-infrared wavelengths ... suitable for targeting follicular and hair shaft melanin: normal mode ruby laser (694 nm), normal mode alexandrite laser (755 nm), pulsed diode lasers (800, 810 nm), long-pulse Nd:YAG laser (1,064 nm), and intense pulsed light (IPL) sources (590-1,200 nm). The ideal patient has thick dark terminal hair ..., white skin, and a normal hormonal status. Currently, no method of lifelong permanent hair eradication is available, and it is important that patients have realistic expectations. Substantial evidence has been found for short-term hair removal efficacy of up to 6 months after treatment with the available...

  17. Improving 2D Boosted Classifiers Using Depth LDA Classifier for Robust Face Detection

    Directory of Open Access Journals (Sweden)

    Mahmood Rahat


    Full Text Available Face detection plays an important role in human-robot interaction, and many of the services provided by robots depend on it. This paper presents a novel face detection algorithm which uses depth data to reduce the false positive alarms of a boosted classifier operating on 2D data. The proposed method uses two levels of cascade classifiers: the classifiers of the first level deal with 2D data, while the classifiers of the second level use depth data captured by a stereo camera. The first level employs a conventional cascade of boosted classifiers, which eliminates many of the non-face sub-windows. The remaining sub-windows are used as input to the second level. After calculating the corresponding depth model of each sub-window, a heuristic classifier along with a Linear Discriminant Analysis (LDA) classifier is applied to the depth data to reject the remaining non-face sub-windows. Experimental results of the proposed method, using a Bumblebee-2 stereo vision system on a mobile platform for real-time detection of human faces in natural cluttered environments, reveal a significant reduction in the false positive alarms of the 2D face detector.
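    The two-level control flow described above is the familiar cascade pattern: cheap 2D stages reject most windows early, and only survivors pay for the depth stages. A minimal sketch, with hypothetical stage functions standing in for the boosted 2D stages and the heuristic + LDA depth stages:

```python
def cascade(window, stages_2d, stages_depth):
    """Two-level cascade sketch: any failed stage rejects the window.
    Stage functions are hypothetical placeholders, not the paper's
    trained classifiers."""
    for stage in stages_2d:
        if not stage(window):
            return False        # rejected cheaply by a 2D stage
    for stage in stages_depth:
        if not stage(window):
            return False        # rejected by a depth stage
    return True

is_face = cascade({"score2d": 0.9, "depth_ok": True},
                  [lambda w: w["score2d"] > 0.5],
                  [lambda w: w["depth_ok"]])
```

The design point is ordering: since most sub-windows are non-faces, putting the cheap rejector first keeps average per-window cost low.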

  18. Semantic Features for Classifying Referring Search Terms

    Energy Technology Data Exchange (ETDEWEB)

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.


    When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods, which are susceptible to obfuscation by dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.

  19. Max-margin based Bayesian classifier

    Institute of Scientific and Technical Information of China (English)

    Tao-cheng HU‡; Jin-hui YU


    There is a tradeoff between generalization capability and computational overhead in multi-class learning. We propose a generative probabilistic multi-class classifier, considering both the generalization capability and the learning/prediction rate. We show that the classifier has a max-margin property. Thus, prediction on future unseen data can nearly achieve the same performance as in the training stage. In addition, local variables are eliminated, which greatly simplifies the optimization problem. By convex and probabilistic analysis, an efficient online learning algorithm is developed. The algorithm aggregates rather than averages dualities, which is different from the classical situations. Empirical results indicate that our method has a good generalization capability and coverage rate.

  20. Classifying bed inclination using pressure images. (United States)

    Baran Pouyan, M; Ostadabbas, S; Nourani, M; Pompeo, M


    Pressure ulcers are among the most prevalent problems for bed-bound patients in hospitals and nursing homes. They are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers. Specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system. Then, by applying a number of image processing and machine learning techniques, the approximate bed angle is estimated and classified. The proposed algorithm was tested on 15 subjects of various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.
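    The intuition behind three-class inclination estimation can be sketched very simply: as the backrest rises, pressure shifts away from the head/torso rows of the mat toward the seat rows. The function below is a toy stand-in, not the paper's method; the class boundaries and the 0.30/0.15 thresholds are entirely hypothetical.

```python
def classify_inclination(pressure_rows):
    """Toy sketch: classify bed back angle from per-row pressure sums,
    ordered head (index 0) to foot. Thresholds are hypothetical."""
    n = len(pressure_rows)
    upper = sum(pressure_rows[: n // 3])   # pressure in the head-side third
    total = sum(pressure_rows) or 1        # guard against an empty mat
    frac = upper / total
    if frac > 0.30:                        # weight spread up the mat -> lying flat
        return "flat (0-30 deg)"
    if frac > 0.15:
        return "mid (30-60 deg)"
    return "upright (>60 deg)"

flat = classify_inclination([5, 6, 6, 5, 4, 4, 3, 3, 2])
```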

  1. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren


    An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or for monitoring illness and welfare can be foreseen. This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally, and lying sternally. Four time series of acceleration (the three-dimensional axes, plus the length of the acceleration vector) are selected for each activity. Each time series is modeled using a Dynamic Linear Model with cyclic components. The classification method, based on a Multi-Process Kalman Filter (MPKF), is applied to a total of 15 time series of 120 observations.

  2. Combining supervised classifiers with unlabeled data

    Institute of Scientific and Technical Information of China (English)

    刘雪艳; 张雪英; 李凤莲; 黄丽霞


    Ensemble learning is a widely studied topic. Traditional ensemble techniques seek better results from labeled data and base classifiers; they fail to address the ensemble task where only unlabeled data are available. A label propagation based ensemble (LPBE) approach is proposed to combine base classification results with unlabeled data. First, a graph is constructed by taking the unlabeled data as vertexes, and the weights in the graph are calculated by a correntropy function. Average prediction results are obtained from the base classifiers and then propagated under a regularization framework and adaptively enhanced over the graph. The proposed approach is further enriched when a small amount of labeled data is available. The proposed algorithms are evaluated on several UCI benchmark data sets. Simulation results show that the proposed algorithms achieve satisfactory performance compared with existing ensemble methods.
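    The propagation step has the shape of standard graph label propagation: each node's score is repeatedly averaged between its own base-classifier score and its neighbors' current scores. A minimal sketch in that spirit; the correntropy weighting, adaptive enhancement, and the paper's exact regularizer are omitted, and `alpha` is an illustrative smoothing parameter.

```python
def propagate(weights, init, alpha=0.5, iters=50):
    """Minimal label-propagation sketch: `init` holds the averaged base-
    classifier scores per node, `weights` is a row-normalized similarity
    graph. Each update mixes a node's own score with its neighbors'."""
    f = list(init)
    for _ in range(iters):
        f = [(1 - alpha) * init[i]
             + alpha * sum(w * f[j] for j, w in enumerate(weights[i]))
             for i in range(len(f))]
    return f

# 3 nodes: 0 and 1 are mutual neighbors with score 1.0;
# node 2 (score 0.0) is pulled halfway toward its neighbors.
scores = propagate([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.5, 0.5, 0.0]],
                   [1.0, 1.0, 0.0])
```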

  3. Classifying objects in LWIR imagery via CNNs (United States)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.


    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid-to-long-range detection system utilising a Long Wave Infrared (LWIR) sensor. By exploiting high-quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained, and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. Although the CNN struggles to recognise long-range target classes due to low signal quality, robust target discrimination is nevertheless achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.

  4. Letter identification and the neural image classifier. (United States)

    Watson, Andrew B; Ahumada, Albert J


    Letter identification is an important visual task for both practical and theoretical reasons. To extend and test existing models, we have reviewed published data for contrast sensitivity for letter identification as a function of size and have also collected new data. Contrast sensitivity increases rapidly from the acuity limit but slows and asymptotes at a symbol size of about 1 degree. We recast these data in terms of contrast difference energy: the average of the squared distances between the letter images and the average letter image. In terms of sensitivity to contrast difference energy, and thus visual efficiency, there is a peak around ¼ degree, followed by a marked decline at larger sizes. These results are explained by a Neural Image Classifier model that includes optical filtering and retinal neural filtering, sampling, and noise, followed by an optimal classifier. As letters are enlarged, sensitivity declines because of the increasing size and spacing of the midget retinal ganglion cell receptive fields in the periphery.
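    The contrast difference energy quantity is defined explicitly in the abstract: the average of the squared (Euclidean) distances between each letter image and the mean letter image. A direct sketch of that computation, with images flattened to pixel lists (function name mine):

```python
def contrast_difference_energy(images):
    """Average squared distance between each letter image and the mean
    letter image, as described in the abstract. `images` is a list of
    equal-length flattened pixel-contrast vectors."""
    n, m = len(images), len(images[0])
    mean = [sum(img[k] for img in images) / n for k in range(m)]
    return sum(sum((img[k] - mean[k]) ** 2 for k in range(m))
               for img in images) / n

# Two 2-pixel "letters"; mean image is [0.5, 0.5], each letter sits
# at squared distance 0.5 from it.
cde = contrast_difference_energy([[1.0, 0.0], [0.0, 1.0]])
```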

  5. Classification Studies in an Advanced Air Classifier (United States)

    Routray, Sunita; Bhima Rao, R.


    In the present paper, experiments are carried out using the VSK separator, an advanced air classifier, to recover heavy minerals from beach sand. In the classification experiments the cage wheel speed and the feed rate are set, and the material fed to the air cyclone is split into fine and coarse particles, which are collected in separate bags. The size distribution of each fraction was measured by sieve analysis. A model is developed to predict the performance of the air classifier; its objective is to predict the grade efficiency curve for a given set of operating parameters such as cage wheel speed and feed rate. The overall experimental data, covering all variables studied in this investigation, were fitted to several models, and the logistic model was found to give the best fit.
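    A common logistic form for a classifier grade-efficiency curve is eta(d) = 1 / (1 + (d50/d)^m), where d50 is the cut size (the particle size reported to the coarse fraction with 50% probability) and m the sharpness index. This is a standard form for such curves, offered as a sketch; the abstract does not state the paper's exact parameterization, and the values below are illustrative.

```python
def grade_efficiency(d, d50, m):
    """Logistic grade-efficiency model: fraction of particles of size d
    reporting to the coarse product. d50 = cut size, m = sharpness.
    Parameter values here are illustrative only."""
    return 1.0 / (1.0 + (d50 / d) ** m)

eta_at_cut = grade_efficiency(100.0, 100.0, 3.0)  # exactly 0.5 at d = d50
```

By construction the curve passes through 0.5 at the cut size and rises monotonically with particle size; larger m gives a sharper separation.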

  6. Comparing cosmic web classifiers using information theory (United States)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin


    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  7. Comparing cosmic web classifiers using information theory

    CERN Document Server

    Leclercq, Florent; Jasche, Jens; Wandelt, Benjamin


    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  8. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads


    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularization parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  9. Statistical Mechanics of Soft Margin Classifiers


    Risau-Gusman, Sebastian; Gordon, Mirta B.


    We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards the asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patte...

  10. Deterministic Pattern Classifier Based on Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    LI Jian-wu; LI Min-qiang; KOU Ji-song


    This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. Compared and contrasted with traditional methods for deterministic pattern classifiers, this method works for both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the result reveals that this system is effective and practical.

  11. Reconfiguration-based implementation of SVM classifier on FPGA for Classifying Microarray data. (United States)

    Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin


    Classifying Microarray data, which are of high dimensional nature, requires high computational power. Support Vector Machines-based classifier (SVM) is among the most common and successful classifiers used in the analysis of Microarray data but also requires high computational power due to its complex mathematical architecture. Implementing SVM on hardware exploits the parallelism available within the algorithm kernels to accelerate the classification of Microarray data. In this work, a flexible, dynamically and partially reconfigurable implementation of the SVM classifier on Field Programmable Gate Array (FPGA) is presented. The SVM architecture achieved up to 85× speed-up over equivalent general purpose processor (GPP) showing the capability of FPGAs in enhancing the performance of SVM-based analysis of Microarray data as well as future bioinformatics applications.

  12. Effectiveness of hydrogen peroxide and electron-beam irradiation treatment for removal and inactivation of viruses in equine-derived xenografts. (United States)

    Cusinato, Riccardo; Pacenti, Monia; Martello, Thomas; Fattori, Paolo; Morroni, Marco; Palù, Giorgio


    Bone grafting is a common procedure for bone reconstruction in dentistry, orthopedics, and neurosurgery. A wide range of grafts are currently used, and xenografts are regarded as an interesting alternative to autogenous bone because all mammals share the same bone mineral component composition and morphology. Antigens must be eliminated from bone grafts derived from animal tissues in order to make them biocompatible. Moreover, the processing method must also safely inactivate and/or remove viruses or other potential infectious agents. This study assessed the efficacy of two steps applied in manufacturing some equine-derived xenografts: hydrogen-peroxide and e-beam sterilization treatments for inactivation and removal of viruses in equine bone granules (cortical and cancellous) and collagen and pericardium membranes. Viruses belonging to three different human viral species (Herpes simplex virus type 1, Coxsackievirus B1, and Influenzavirus type A H1N1) were selected and used to spike semi-processed biomaterials. For each viral species, the tissue culture infective dose (TCID50) on cell lines and the number of genome copies through qPCR were assessed. Both treatments were found to be effective at virus inactivation. Considering the model viruses studied, the application of hydrogen peroxide and e-beam irradiation could also be considered effective for processing bone tissue of human origin.

  13. Intelligent neural network classifier for automatic testing (United States)

    Bai, Baoxing; Yu, Heping


    This paper is concerned with the application of a multilayer feedforward neural network to the vision-based inspection of industrial images, and introduces a high-performance image processing and recognition system that can be used for real-time detection of blemishes, streaks, cracks, and other defects on the inner walls of high-accuracy pipes. To take full advantage of the strengths of artificial neural networks, such as distributed information storage, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the inspected images and classify them.

  14. Classifying spaces of degenerating polarized Hodge structures

    CERN Document Server

    Kato, Kazuya


    In 1970, Phillip Griffiths envisioned that points at infinity could be added to the classifying space D of polarized Hodge structures. In this book, Kazuya Kato and Sampei Usui realize this dream by creating a logarithmic Hodge theory. They use the logarithmic structures begun by Fontaine-Illusie to revive nilpotent orbits as a logarithmic Hodge structure. The book focuses on two principal topics. First, Kato and Usui construct the fine moduli space of polarized logarithmic Hodge structures with additional structures. Even for a Hermitian symmetric domain D, the present theory is a refinem

  15. Learning Rates for -Regularized Kernel Classifiers

    Directory of Open Access Journals (Sweden)

    Hongzhi Tong


    Full Text Available We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.

  16. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř


    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics: Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two practical applications are presented. In the first, the tested gearboxes are separated into two classes according to their condition indicators. In the second, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  17. Cubical sets as a classifying topos

    DEFF Research Database (Denmark)

    Spitters, Bas

    Coquand’s cubical set model for homotopy type theory provides the basis for a computational interpretation of the univalence axiom and some higher inductive types, as implemented in the cubical proof assistant. We show that the underlying cube category is the opposite of the Lawvere theory of De Morgan algebras. The topos of cubical sets itself classifies the theory of ‘free De Morgan algebras’. This provides us with a topos with an internal ‘interval’. Using this interval we construct a model of type theory following van den Berg and Garner. We are currently investigating the precise relation...

  18. Optimizing A syndromic surveillance text classifier for influenza-like illness: Does document source matter? (United States)

    South, Brett R; South, Brett Ray; Chapman, Wendy W; Chapman, Wendy; Delisle, Sylvain; Shen, Shuying; Kalp, Ericka; Perl, Trish; Samore, Matthew H; Gundlapalli, Adi V


    Syndromic surveillance systems that incorporate electronic free-text data have primarily focused on extracting concepts of interest from chief complaint text, emergency department visit notes, and nurse triage notes. Due to availability and access, there has been limited work in the area of surveilling the full text of all electronic note documents compared with more specific document sources. This study provides an evaluation of the performance of a text classifier for detection of influenza-like illness (ILI) by document sources that are commonly used for biosurveillance by comparing them to routine visit notes, and a full electronic note corpus approach. Evaluating the performance of an automated text classifier for syndromic surveillance by source document will inform decisions regarding electronic textual data sources for potential use by automated biosurveillance systems. Even when a full electronic medical record is available, commonly available surveillance source documents provide acceptable statistical performance for automated ILI surveillance.

  19. 5 CFR 1312.23 - Access to classified information. (United States)


    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Access to classified information. 1312.23... Classified Information § 1312.23 Access to classified information. Classified information may be made... “need to know” and the access is essential to the accomplishment of official government duties....

  20. Rotational Study of Ambiguous Taxonomic Classified Asteroids (United States)

    Linder, Tyler R.; Sanchez, Rick; Wuerker, Wolfgang; Clayson, Timothy; Giles, Tucker


    The Sloan Digital Sky Survey (SDSS) moving object catalog (MOC4) provided the largest-ever catalog of asteroid spectrophotometry observations. Carvano et al. (2010), while analyzing MOC4, discovered that individual observations of asteroids observed multiple times did not classify into the same photometric-based taxonomic class. A small subset of those asteroids were classified as having both the presence and absence of a 1 μm silicate absorption feature. If these variations are linked to differences in surface mineralogy, the prevailing assumption that an asteroid’s surface composition is predominantly homogeneous would need to be reexamined. Furthermore, our understanding of the evolution of the asteroid belt, as well as the linkage between certain asteroids and meteorite types, may need to be modified. This research is an investigation to determine the rotational rates of these taxonomically ambiguous asteroids. Initial questions to be answered: Do these asteroids have unique or nonstandard rotational rates? Is there any evidence in their light curves to suggest an abnormality? Observations were taken using PROMPT6, a 0.41-m telescope that is part of the SKYNET network at the Cerro Tololo Inter-American Observatory (CTIO). Observations were calibrated and analyzed using the Canopus software. Initial results will be presented at AAS.

  1. Objectively classifying Southern Hemisphere extratropical cyclones (United States)

    Catto, Jennifer


    There has been a long tradition in attempting to separate extratropical cyclones into different classes depending on their cloud signatures, airflows, synoptic precursors, or upper-level flow features. Depending on these features, the cyclones may have different impacts, for example in their precipitation intensity. It is important, therefore, to understand how the distribution of different cyclone classes may change in the future. Many of the previous classifications have been performed manually. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850 hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first detected), has been used to objectively classify identified cyclones. The results are compared to the manual classification of Sinclair and Revell (2000) and the four objectively identified classes shown in this presentation are found to match well. The relative importance of diabatic heating in the clusters is investigated, as well as the differing precipitation characteristics. The success of the objective classification shows its utility in climate model evaluation and climate change studies.

  2. Adaptive classifier for steel strip surface defects (United States)

    Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li


    Surface defect detection systems have been receiving increased attention for their precision, speed, and low cost. One of the main challenges is coping with accuracy deterioration over time caused by aging equipment and changing processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with small samples, making it adaptive to accuracy deterioration. First, abundant features are introduced to cover as much information about the defects as possible. Second, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose a method to update the Bayes evolutionary kernel. The proposed algorithm is compared experimentally with different algorithms; the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
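    Two ingredients of the pipeline above can be sketched compactly: drawing random feature subspaces for the base SVMs, and fusing per-view scores. The fusion here is a plain naive-Bayes product rule standing in for the paper's trained Bayes evolutionary kernel, and both function names are my own; this is a sketch of the pattern, not the BYEC algorithm itself.

```python
import random

def random_subspaces(n_features, n_views, k, seed=0):
    """Draw `n_views` random feature subsets of size k, one per base SVM
    (random-subspace ensemble pattern; seed fixed for reproducibility)."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), k)) for _ in range(n_views)]

def bayes_fuse(probs):
    """Fuse per-view defect probabilities with a naive-Bayes product rule,
    a simple stand-in for the trained Bayes fusion kernel."""
    p_defect, p_ok = 1.0, 1.0
    for p in probs:
        p_defect *= p
        p_ok *= (1 - p)
    return p_defect / (p_defect + p_ok)

views = random_subspaces(n_features=10, n_views=4, k=3)
fused = bayes_fuse([0.9, 0.8])   # two views both leaning "defect"
```

Two confident views reinforce each other: the fused score exceeds either input, which is the qualitative behavior one wants from a product-rule fusion.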

  3. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels


    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets, and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp), and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
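    Two of the UFM ingredients, in-frame stop-codon frequency and per-position purine frequency, are simple to compute from a sequence. The sketch below shows only those two terms; the full UFM score combines them with the GC3-vs-GC2 regression distance and the other listed factors, which are not reproduced here, and the function name is mine.

```python
def frame_stats(seq, frame=0):
    """Sketch of two UFM ingredients for a DNA string read in a given
    frame: the fraction of codons that are stops, and the purine (A/G)
    frequency at each of the three codon positions."""
    stops = {"TAA", "TAG", "TGA"}
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    n = len(codons)
    stop_freq = sum(c in stops for c in codons) / n
    purine = [sum(c[pos] in "AG" for c in codons) / n for pos in range(3)]
    return stop_freq, purine

# Toy ORF: ATG AAA TAA -> one stop codon out of three codons.
stop_freq, purine = frame_stats("ATGAAATAA")
```

In real coding sequences the purine bias at the first codon position (the "Rrr" pattern) is the signal UFM exploits; a genuine ORF should also show a near-zero internal stop frequency.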

  4. Classifying anatomical subtypes of subjective memory impairment. (United States)

    Jung, Na-Yeon; Seo, Sang Won; Yoo, Heejin; Yang, Jin-Ju; Park, Seongbeom; Kim, Yeo Jin; Lee, Juyoun; Lee, Jin San; Jang, Young Kyoung; Lee, Jong Min; Kim, Sung Tae; Kim, Seonwoo; Kim, Eun-Joo; Na, Duk L; Kim, Hee Jin


    We aimed to categorize subjective memory impairment (SMI) individuals based on their patterns of cortical thickness and to propose simple models that can classify each subtype. We recruited 613 SMI individuals and 613 age- and gender-matched normal controls. Using hierarchical agglomerative cluster analysis, SMI individuals were divided into 3 subtypes: temporal atrophy (12.9%), minimal atrophy (52.4%), and diffuse atrophy (34.6%). Individuals in the temporal atrophy (Alzheimer's disease-like atrophy) subtype were older, had more vascular risk factors, and scored the lowest on neuropsychological tests. Combination of these factors classified the temporal atrophy subtype with 73.2% accuracy. On the other hand, individuals with the minimal atrophy (non-neurodegenerative) subtype were younger, were more likely to be female, and had depression. Combination of these factors discriminated the minimal atrophy subtype with 76.0% accuracy. We suggest that SMI can be largely categorized into 3 anatomical subtypes that have distinct clinical features. Our models may help physicians decide next steps when encountering SMI patients and may also be used in clinical trials.
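    The subtype discovery step is hierarchical agglomerative clustering over per-subject cortical-thickness patterns. A minimal single-linkage sketch of that idea on toy 2-D points (the paper's feature vectors, linkage choice, and cluster count of 3 are not reproduced; this just shows the merge loop):

```python
def agglomerative(points, n_clusters):
    """Minimal single-linkage agglomerative clustering: start with one
    cluster per point and repeatedly merge the closest pair of clusters
    until n_clusters remain. O(n^3) toy version for small inputs."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single linkage: distance between closest members
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# Two well-separated pairs of points collapse into two clusters.
clusters = agglomerative([(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)], 2)
```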

  5. Comparison of Current Frame-Based Phoneme Classifiers

    Directory of Open Access Journals (Sweden)

    Vaclav Pfeifer


    Full Text Available This paper compares today’s most common frame-based classifiers. These classifiers can be divided into two main groups: generative classifiers, which build the most probable model from the training data (for example GMM), and discriminative classifiers, which focus on constructing a decision hyperplane. A lot of research has already been done on GMM classifiers, and therefore this paper focuses mainly on discriminative frame-based classifiers. Two discriminative classifiers are presented, both implementing a hierarchical tree structure over the input phoneme groups, which is shown to be effective. Based on these classifiers, two efficient training algorithms are presented. We demonstrate the advantages of our training algorithms by evaluating all classifiers on the TIMIT speech corpus.

  6. Quantum Hooke's law to classify pulse laser induced ultrafast melting. (United States)

    Hu, Hao; Ding, Hepeng; Liu, Feng


    Ultrafast crystal-to-liquid phase transition induced by femtosecond pulse laser excitation is an interesting material behavior manifesting the complexity of light-matter interaction. There exist two types of such phase transitions: one occurs at a time scale shorter than a picosecond via a nonthermal process mediated by electron-hole plasma formation; the other at a longer time scale via a thermal melting process mediated by electron-phonon interaction. However, it remains unclear which materials undergo which process, and why. Here, by exploiting the property of quantum electronic stress (QES) governed by quantum Hooke's law, we classify the transitions by two distinct classes of materials: the faster nonthermal process can only occur, above a high threshold laser fluence, in materials like ice having an anomalous phase diagram characterized by dTm/dP < 0, where Tm is the melting temperature and P is pressure; while the slower thermal process may occur in all materials. In particular, the nonthermal transition is shown to be induced by the QES, acting like a negative internal pressure, which drives the crystal into a "super pressing" state to spontaneously transform into a higher-density liquid phase. Our findings significantly advance the fundamental understanding of ultrafast crystal-to-liquid phase transitions, enabling quantitative a priori predictions.

  7. Oxygen-Content-Controllable Graphene Oxide from Electron-Beam-Irradiated Graphite: Synthesis, Characterization, and Removal of Aqueous Lead [Pb(II)]. (United States)

    Bai, Jing; Sun, Huimin; Yin, Xiaojie; Yin, Xianqiang; Wang, Shengsen; Creamer, Anne Elise; Xu, Lijun; Qin, Zhi; He, Feng; Gao, Bin


    A high-energy electron beam was applied to irradiate graphite for the preparation of graphene oxide (GO) with a controllable oxygen content. The obtained GO sheets were analyzed with various characterization tools. The results revealed that the oxygen-containing groups of GO increased with increasing irradiation dosages. Hence, oxygen-content-controllable synthesis of GO can be realized by changing the irradiation dosages. The GO sheets with different irradiation dosages were then used to adsorb aqueous Pb(II). The effects of contact time, pH, initial lead ion concentration, and ionic strength on Pb(II) sorption onto different GO sheets were examined. The sorption process was found to be very fast (completed within 20 min) at pH 5.0. Except ionic strength, which showed no/little effect on lead sorption, the other factors affected the sorption of aqueous Pb(II) onto GO. The maximum Pb(II) sorption capacities of GO increased with irradiation dosages, confirming that electron-beam irradiation was an effective way to increase the oxygen content of GO. These results suggested that irradiated GO with a controllable oxygen content is a promising nanomaterial for environmental cleanup, particularly for the treatment of cationic metal ions, such as Pb(II).

  8. Classifying prion and prion-like phenomena. (United States)

    Harbi, Djamel; Harrison, Paul M


    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  9. Learning Vector Quantization for Classifying Astronomical Objects

    Institute of Scientific and Technical Information of China (English)


    The sizes of astronomical surveys in different wavebands are increasing rapidly. Therefore, automatic classification of objects is becoming ever more important. We explore the performance of learning vector quantization (LVQ) in classifying multi-wavelength data. Our analysis concentrates on separating active sources from non-active ones. Different classes of X-ray emitters populate distinct regions of a multidimensional parameter space. In order to explore the distribution of various objects in a multidimensional parameter space, we positionally cross-correlate the data of quasars, BL Lacs, active galaxies, stars and normal galaxies in the optical, X-ray and infrared bands. We then apply LVQ to classify them with the obtained data. Our results show that LVQ is an effective method for separating AGNs from stars and normal galaxies with multi-wavelength data.
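
    The prototype-based LVQ scheme the abstract relies on can be sketched compactly. The following is a minimal LVQ1 illustration on synthetic two-class data standing in for the multi-wavelength parameters; the cluster positions, prototype count, and learning-rate schedule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    # LVQ1: pull the nearest prototype toward a same-class sample,
    # push it away from a different-class sample; decay the learning rate.
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = int(np.argmin(np.linalg.norm(P - x, axis=1)))  # best-matching unit
            step = lr * (x - P[i])
            P[i] += step if proto_labels[i] == label else -step
        lr *= 0.9
    return P

def lvq_classify(X, P, proto_labels):
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Two synthetic source classes in a 2-D colour/flux parameter space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P = lvq1_train(X, y, np.array([[0.5, 0.5], [1.5, 1.5]]), np.array([0, 1]))
accuracy = (lvq_classify(X, P, np.array([0, 1])) == y).mean()
```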

  10. A cognitive approach to classifying perceived behaviors (United States)

    Benjamin, Dale Paul; Lyons, Damian


    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  11. Speech Emotion Recognition Using Fuzzy Logic Classifier

    Directory of Open Access Journals (Sweden)

    Daniar aghsanavard


    Full Text Available Over the last two decades, emotion detection from speech has been one of the most significant issues in speech recognition and signal processing, and each proposed method has its advantages and disadvantages. This paper proposes fuzzy speech emotion recognition based on the classification of speech signals, aiming at better recognition accuracy along with higher speed. The system uses a five-layer fuzzy logic system that combines a progressive neural network with the firefly optimization algorithm: speech samples are first fed into the fuzzy stage, and the signals are then examined and given a preliminary classification within a fuzzy framework. In this model, a signal pattern is created for each class of signals, which reduces the dimensionality of the signal data and simplifies speech recognition. The experimental results show that our proposed method, with categorization by the firefly algorithm, improves the recognition of utterances.

  12. Classifying and ranking DMUs in interval DEA

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-peng; WU Yu-hua; LI Wen-hua


    During efficiency evaluation by DEA, the inputs and outputs of DMUs may be intervals because of insufficient information or measurement error. For this reason, interval DEA is proposed. To make the efficiency scores more discriminative, this paper builds an Interval Modified DEA (IMDEA) model based on MDEA. Furthermore, models for obtaining the upper and lower bounds of the efficiency score of each DMU are set up. Based on these bounds, the DMUs are classified into three types. Next, a new order relation between intervals, which can express the DM's preference among the three types, is proposed. As a result, a full and more convincing ranking of all the DMUs is obtained. Finally, an example is given.


    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos


    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way compared to the conventional approach.

  14. Classifying antiarrhythmic actions: by facts or speculation. (United States)

    Vaughan Williams, E M


    Classification of antiarrhythmic actions is reviewed in the context of the results of the Cardiac Arrhythmia Suppression Trials, CAST 1 and 2. Six criticisms of the classification recently published (The Sicilian Gambit) are discussed in detail. The alternative classification, when stripped of speculative elements, is shown to be similar to the original classification. Claims that the classification failed to predict the efficacy of antiarrhythmic drugs for the selection of appropriate therapy have been tested by an example. The antiarrhythmic actions of cibenzoline were classified in 1980. A detailed review of confirmatory experiments and clinical trials during the past decade shows that predictions made at the time agree with subsequent results. Classification of the effects drugs actually have on functioning cardiac tissues provides a rational basis for finding the preferred treatment for a particular arrhythmia in accordance with the diagnosis.

  15. A Spiking Neural Learning Classifier System

    CERN Document Server

    Howard, Gerard; Lanzi, Pier-Luca


    Learning Classifier Systems (LCS) are population-based reinforcement learners used in a wide variety of applications. This paper presents a LCS where each traditional rule is represented by a spiking neural network, a type of network with dynamic internal state. We employ a constructivist model of growth of both neurons and dendrites that realise flexible learning by evolving structures of sufficient complexity to solve a well-known problem involving continuous, real-valued inputs. Additionally, we extend the system to enable temporal state decomposition. By allowing our LCS to chain together sequences of heterogeneous actions into macro-actions, it is shown to perform optimally in a problem where traditional methods can fail to find a solution in a reasonable amount of time. Our final system is tested on a simulated robotics platform.

  16. Segmentation of Fingerprint Images Using Linear Classifier

    Directory of Open Access Journals (Sweden)

    Xinjian Chen


    Full Text Available An algorithm for the segmentation of fingerprints and a criterion for evaluating the block features are presented. The segmentation uses three block features: the block cluster degree, the block mean information, and the block variance. An optimal linear classifier has been trained for the per-block classification, using the criterion of a minimal number of misclassified samples. Morphology has been applied as postprocessing to reduce the number of classification errors. The algorithm was tested on the FVC2002 database: only 2.45% of the blocks are misclassified, and the postprocessing further reduces this ratio. Experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.
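
    The per-block pipeline (block features in, linear decision out) can be sketched as follows. This is a hedged stand-in: it uses only the block mean and variance (the cluster-degree feature is omitted), trains the linear classifier by least squares rather than the paper's minimal-misclassification criterion, and runs on a synthetic image, so all numbers below are illustrative assumptions.

```python
import numpy as np

def block_features(img, bs=16):
    # One feature row [mean, variance] per non-overlapping bs x bs block
    h, w = img.shape
    feats = []
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            b = img[i:i + bs, j:j + bs]
            feats.append([b.mean(), b.var()])
    return np.array(feats)

def train_linear(F, labels):
    # Least-squares fit of an affine decision function to targets in {-1, +1}
    A = np.hstack([F, np.ones((len(F), 1))])
    wts, *_ = np.linalg.lstsq(A, 2 * labels - 1, rcond=None)
    return wts

def classify(F, wts):
    A = np.hstack([F, np.ones((len(F), 1))])
    return (A @ wts > 0).astype(int)

# Synthetic image: low-variance background, ridged (high-variance) top half
rng = np.random.default_rng(3)
img = rng.normal(0.5, 0.02, (64, 64))
img[:32, :] += 0.3 * np.sin(np.arange(64) * 0.8)
F = block_features(img)
labels = np.array([1 if i < 8 else 0 for i in range(16)])  # top half = foreground
wts = train_linear(F, labels)
acc = (classify(F, wts) == labels).mean()
```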

  17. Classifying supernovae using only galaxy data

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Mandel, Kaisey [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)


    We present a new method for probabilistically classifying supernovae (SNe) without using SN spectral or photometric data. Unlike all previous studies to classify SNe without spectra, this technique does not use any SN photometry. Instead, the method relies on host-galaxy data. We build upon the well-known correlations between SN classes and host-galaxy properties, specifically that core-collapse SNe rarely occur in red, luminous, or early-type galaxies. Using the nearly spectroscopically complete Lick Observatory Supernova Search sample of SNe, we determine SN fractions as a function of host-galaxy properties. Using these data as inputs, we construct a Bayesian method for determining the probability that an SN is of a particular class. This method improves a common classification figure of merit by a factor of >2, comparable to the best light-curve classification techniques. Of the galaxy properties examined, morphology provides the most discriminating information. We further validate this method using SN samples from the Sloan Digital Sky Survey and the Palomar Transient Factory. We demonstrate that this method has wide-ranging applications, including separating different subclasses of SNe and determining the probability that an SN is of a particular class before photometry or even spectroscopy can be obtained. Since this method uses completely independent data from light-curve techniques, there is potential to further improve the overall purity and completeness of SN samples and to test systematic biases of the light-curve techniques. Further enhancements to the host-galaxy method, including additional host-galaxy properties, combination with light-curve methods, and hybrid methods, should further improve the quality of SN samples from past, current, and future transient surveys.
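
    The Bayesian step is a direct application of Bayes' rule: the posterior class probability is proportional to the overall SN-type fraction times the probability of the observed host property given the type. The sketch below uses invented fractions for illustration, not the LOSS measurements.

```python
# Hypothetical overall SN-type fractions and host-morphology likelihoods
# (illustrative numbers only, not the paper's measured values).
priors = {"Ia": 0.6, "CC": 0.4}
likelihood = {  # P(host morphology | SN type)
    "Ia": {"early": 0.5, "late": 0.5},
    "CC": {"early": 0.1, "late": 0.9},
}

def classify_by_host(morphology):
    # Bayes' rule: P(type | host) ∝ P(type) * P(host | type), then normalize
    post = {t: priors[t] * likelihood[t][morphology] for t in priors}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

p = classify_by_host("early")  # early-type hosts strongly favor SNe Ia
```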

  18. A general method for baseline-removal in ultrafast electron powder diffraction data using the dual-tree complex wavelet transform (United States)

    René de Cotret, Laurent P.; Siwick, Bradley J.


    The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented with a focus on the diffraction patterns obtained from materials of moderately complex structure which contain many overlapping peaks and effectively no scattering vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on discrete and dual-tree complex (DTCWT) wavelet transforms when applied to simulated UEPD data on the M1–R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities that are accurate to better than 2% across the whole range of scattering vector simulated, effectively independent of delay time. A Python package is available. PMID:28083543
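
    The wavelet machinery itself (and the DTCWT in particular) is not reproduced here, but the overall shape of such a baseline-removal step, iteratively estimating a smooth background that is clipped from above so the sharp diffraction peaks are excluded, can be sketched with plain numpy. The moving-average smoother below is a hedged stand-in for the wavelet approximation, and the synthetic pattern is invented for illustration.

```python
import numpy as np

def estimate_baseline(signal, window=51, iterations=20):
    # Repeatedly smooth the running baseline and clip it from above,
    # so narrow peaks are progressively excluded from the background estimate.
    baseline = signal.copy()
    kernel = np.ones(window) / window
    for _ in range(iterations):
        smooth = np.convolve(baseline, kernel, mode="same")
        baseline = np.minimum(baseline, smooth)
    return baseline

# Synthetic powder pattern: two narrow Gaussian peaks on a decaying background
x = np.linspace(0, 10, 1000)
background = 0.5 * np.exp(-x / 5)
peaks = np.exp(-((x - 3) ** 2) / 0.01) + 0.7 * np.exp(-((x - 6) ** 2) / 0.01)
pattern = background + peaks
corrected = pattern - estimate_baseline(pattern)
```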

  19. Study on the Material Foundation of Stasis-removing Chinese Medicine by Electronic Nose Detection

    Institute of Scientific and Technical Information of China (English)

    王光耀; 盛良; 王兴华; 汪宇; Te Kian Keong; Teh Siew Hoon; Ooi Ciat Hui


    Objective: To investigate whether there is a common material basis among Chinese medicines with similar effects, and whether the electronic nose can be used to give a preliminary quantification of the properties of Chinese medicines. Methods: Twelve kinds of Chinese medicinal herbs, all with the effect of promoting blood circulation and removing blood stasis, were tested by electronic nose. Principal component analysis (PCA) and the characteristic fingerprints were analysed together with the differential index and the discriminant index. Results: The 12 kinds of Chinese medicinal herbs with the function of promoting blood circulation and removing blood stasis had similar PCA maps and characteristic fingerprints. Conclusion: The 12 kinds of Chinese medicines have a common material basis.
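
    The PCA comparison of sensor fingerprints can be sketched with plain numpy (PCA via SVD of the mean-centered response matrix). The sample and sensor counts and the group structure below are invented for illustration; they are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical e-nose data: 12 herb samples x 18 sensors, two response groups
base = rng.normal(0, 1, 18)
group_a = base + rng.normal(0, 0.1, (6, 18))
group_b = -base + rng.normal(0, 0.1, (6, 18))
X = np.vstack([group_a, group_b])

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T           # projection onto the first two PCs
explained = S**2 / np.sum(S**2)  # variance ratio per component
```

A PCA map like the one compared in the study is simply a scatter plot of the first two columns of `scores`; samples with similar sensor fingerprints cluster together.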

  20. Fuzzy Wavenet (FWN) classifier for medical images

    Directory of Open Access Journals (Sweden)

    Entather Mahos


    Full Text Available The combination of wavelet theory and neural networks has led to the development of wavelet networks: feed-forward neural networks that use wavelets as activation functions. Wavelet networks have been used in classification and identification problems with some success. In this work we propose a fuzzy wavenet network (FWN), which learns by the common back-propagation algorithm, to classify medical images. The library of medical images was analyzed first. Second, two experimental rule tables provide an excellent opportunity to test the ability of the fuzzy wavenet network, owing to the high level of information variability often experienced with this type of image. The wavelet transform is known to be most accurate for low-dimensional problems, but image processing is a high-dimensional problem, so a neural network is used. Results are presented for the application of the three-layer fuzzy wavenet to a vision system. They demonstrate a considerable improvement in performance from the proposed two rule tables for fuzzy and deterministic dilation and translation in the wavelet transform.

  1. Colorization by classifying the prior knowledge

    Institute of Scientific and Technical Information of China (English)

    DU Weiwei


    When the one-dimensional luminance scalar of every pixel of a monochrome image is replaced by a multi-dimensional color vector, the process is called colorization. However, colorization is under-constrained, so prior knowledge is considered and given to the monochrome image. Colorization using an optimization algorithm is an effective approach to this problem, but it cannot handle some images well without repeated experiments to confirm the placement of scribbles. In this paper, a colorization algorithm is proposed which can automatically generate the prior knowledge. The idea is that, first, the prior knowledge is crystallized into a set of prior-knowledge points that are automatically extracted by a downsampling and upsampling method. These points are then classified and given corresponding colors. Lastly, the color image is obtained from the colored prior-knowledge points. Experiments demonstrate that the proposal can not only effectively generate the prior knowledge but also colorize the monochrome image according to the requirements of the user.

  2. A Neural Network Classifier of Volume Datasets

    CERN Document Server

    Zukić, Dženan; Kolb, Andreas


    Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets,...

  3. Combining classifiers for robust PICO element detection

    Directory of Open Access Journals (Sweden)

    Grad Roland


    Full Text Available Abstract. Background: Formulating a clinical information need in terms of the four atomic parts Population/Problem, Intervention, Comparison and Outcome (known as PICO elements) facilitates searching for a precise answer within a large medical citation database. However, using PICO-defined items in the information retrieval process requires a search engine that is able to detect and index PICO elements in the collection in order for the system to retrieve relevant documents. Methods: In this study, we tested multiple supervised classification algorithms and their combinations for detecting PICO elements within medical abstracts. Using the structural descriptors that are embedded in some medical abstracts, we automatically gathered large training/testing data sets for each PICO element. Results: Combining multiple classifiers using a weighted linear combination of their prediction scores achieves promising results, with f-measure scores of 86.3% for P, 67% for I and 56.6% for O. Conclusions: Our experiments on the identification of PICO elements showed that the task is very challenging. Nevertheless, the performance achieved by our identification method is competitive with previously published results and shows that this task can be achieved with high accuracy for the P element, but lower accuracy for the I and O elements.
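
    The fusion rule named in the abstract, a weighted linear combination of per-classifier prediction scores, is simple enough to sketch directly. The classifier scores and weights below are invented for illustration; the paper's learned weights are not reproduced.

```python
import numpy as np

def combine_scores(score_list, weights):
    # Weighted linear combination of score matrices (rows = documents,
    # columns = classes); weights are normalized to sum to 1.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * s for wi, s in zip(w, score_list))

# Three hypothetical classifiers scoring 2 sentences over classes (P, I, O)
s1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
s2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
s3 = np.array([[0.8, 0.1, 0.1], [0.1, 0.3, 0.6]])
combined = combine_scores([s1, s2, s3], weights=[0.5, 0.3, 0.2])
pred = combined.argmax(axis=1)  # fused class decision per sentence
```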

  4. Fault diagnosis with the Aladdin transient classifier (United States)

    Roverso, Davide


    The purpose of Aladdin is to assist plant operators in the early detection and diagnosis of faults and anomalies in the plant that either have an impact on the plant performance, or that could lead to a plant shutdown or component damage if allowed to go unnoticed. The kind of early fault detection and diagnosis performed by Aladdin is aimed at allowing more time for decision making, increasing the operator awareness, reducing component damage, and supporting improved plant availability and reliability. In this paper we describe in broad lines the Aladdin transient classifier, which combines techniques such as recurrent neural network ensembles, Wavelet On-Line Pre-processing (WOLP), and Autonomous Recursive Task Decomposition (ARTD), in an attempt to improve the practical applicability and scalability of this type of systems to real processes and machinery. The paper focuses then on describing an application of Aladdin to a Nuclear Power Plant (NPP) through the use of the HAMBO experimental simulator of the Forsmark 3 boiling water reactor NPP in Sweden. It should be pointed out that Aladdin is not necessarily restricted to applications in NPPs. Other types of power plants, or even other types of processes, can also benefit from the diagnostic capabilities of Aladdin.

  5. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M


    Thirty-five percent of all ischemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on the information given in the medical notes, tested by applying the clinical information to the TOAST criteria. One hundred and five patients presented with acute stroke between January and June 2007, and data were collected on 90 of them. The male to female ratio was 39:51, with an age range of 47-93 years. Sixty (67%) patients had total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke; and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy; and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients, and treatment was not tailored to the mechanism of stroke in some patients.

  6. Classifying Unidentified Gamma-ray Sources

    CERN Document Server

    Salvetti, David


    During its first 2 years of mission, the Fermi-LAT instrument discovered more than 1,800 gamma-ray sources in the 100 MeV to 100 GeV range. Despite the application of advanced techniques to identify and associate the Fermi-LAT sources with counterparts at other wavelengths, about 40% of the LAT sources have no clear identification and remain "unassociated". The purpose of my Ph.D. work has been to pursue a statistical approach to identify the nature of each Fermi-LAT unassociated source. To this aim, we implemented advanced machine learning techniques, such as logistic regression and artificial neural networks, to classify these sources on the basis of all the available gamma-ray information about location, energy spectrum and time variability. These analyses have been used for selecting targets for AGN and pulsar searches and for planning multi-wavelength follow-up observations. In particular, we have focused our attention on the search for possible radio-quiet millisecond pulsar (MSP) candidates in the sample of...

  7. Optimized Radial Basis Function Classifier for Multi Modal Biometrics

    Directory of Open Access Journals (Sweden)

    Anand Viswanathan


    Full Text Available Biometric systems can be used for the identification or verification of humans based on their physiological or behavioral features. In these systems, biometric characteristics such as fingerprints, palm prints, iris patterns or speech are recorded and compared with stored samples for identification or verification. Multimodal biometrics is more accurate, and more resistant to spoofing attacks, than single-modal biometric systems. In this study, a multimodal biometric system using fingerprint images and finger-vein patterns is proposed, together with an optimized Radial Basis Function (RBF) kernel classifier to identify authorized users. The features extracted from these modalities are selected by PCA and kernel PCA and combined for classification by the RBF classifier. The parameters of the RBF classifier are optimized using the BAT algorithm with local search. The performance of the proposed classifier is compared with the KNN classifier, the naïve Bayesian classifier and a non-optimized RBF classifier.
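
    To make the RBF-kernel idea concrete, here is a deliberately simplified stand-in: a Parzen-style classifier that assigns the class with the highest mean RBF similarity to the query. The paper's actual classifier and its BAT-based parameter optimization are not reproduced, and the fused feature vectors below are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelParzenClassifier:
    # Predicts the class whose training points have the highest mean
    # RBF similarity to the query (a simple RBF-kernel baseline).
    def fit(self, X, y, gamma=0.5):
        self.X, self.y, self.gamma = X, y, gamma
        self.classes = np.unique(y)
        return self

    def predict(self, Q):
        K = rbf_kernel(Q, self.X, self.gamma)
        scores = np.stack(
            [K[:, self.y == c].mean(axis=1) for c in self.classes], axis=1
        )
        return self.classes[scores.argmax(axis=1)]

# Synthetic stand-ins for fused fingerprint / finger-vein feature vectors
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.5, (40, 6)), rng.normal(2, 0.5, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
clf = KernelParzenClassifier().fit(X, y)
accuracy = (clf.predict(X) == y).mean()
```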

  8. MISR Level 2 TOA/Cloud Classifier parameters V003 (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Regional Cloud Classifiers, Cloud Shadow Mask, and...

  9. Affine Invariant Character Recognition by Progressive Removing (United States)

    Iwamura, Masakazu; Horimatsu, Akira; Niwa, Ryo; Kise, Koichi; Uchida, Seiichi; Omachi, Shinichiro

    Recognizing characters in scene images suffering from perspective distortion is a challenge. Although there are some methods to overcome this difficulty, they are time-consuming. In this paper, we propose a set of affine invariant features and a new recognition scheme called “progressive removing” that can help reduce the processing time. Progressive removing gradually removes less feasible categories and skew angles by using multiple classifiers. We observed that progressive removing and the use of the affine invariant features reduced the processing time by about 60% in comparison to a trivial one without decreasing the recognition rate.
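
    The cascade idea behind progressive removing, each stage scores the surviving (category, skew angle) hypotheses and discards the implausible ones so later, costlier stages run on fewer candidates, can be sketched as below. The classifier scores and threshold are invented for illustration.

```python
def progressive_removing(candidates, classifiers, threshold=0.2):
    # Each stage keeps only hypotheses scoring at or above the threshold;
    # stop early once at most one hypothesis survives.
    for score in classifiers:
        candidates = [c for c in candidates if score(c) >= threshold]
        if len(candidates) <= 1:
            break
    return candidates

# Toy hypotheses: (category, skew angle) pairs with canned stage scores
hypotheses = [("A", 0), ("A", 15), ("B", 0), ("C", 30)]
stage1 = lambda c: {("A", 0): 0.9, ("A", 15): 0.6, ("B", 0): 0.3, ("C", 30): 0.1}[c]
stage2 = lambda c: {("A", 0): 0.95, ("A", 15): 0.1, ("B", 0): 0.5}[c]
survivors = progressive_removing(hypotheses, [stage1, stage2])
```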

  10. Method of generating features optimal to a dataset and classifier

    Energy Technology Data Exchange (ETDEWEB)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.


    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  11. Recognition of pornographic web pages by classifying texts and images. (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve


    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
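
    The discrete-text stage, computing the probability that a text is pornographic with the naive Bayes rule, can be illustrated with a generic multinomial naive Bayes sketch using Laplace smoothing. The tiny corpus and vocabulary below are invented; the paper's actual features and training data are not reproduced.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    # Multinomial naive Bayes: class priors plus per-class token counts
    classes = set(labels)
    vocab = {w for d in docs for w in d}
    priors = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, c in zip(docs, labels):
        counts[c].update(d)
    return priors, counts, vocab

def predict_nb(doc, priors, counts, vocab):
    # Log-posterior with Laplace (add-one) smoothing; return the argmax class
    post = {}
    for c in priors:
        total = sum(counts[c].values())
        logp = math.log(priors[c])
        for w in doc:
            logp += math.log((counts[c][w] + 1) / (total + len(vocab)))
        post[c] = logp
    return max(post, key=post.get)

# Toy labeled corpus (hypothetical tokens)
docs = [["click", "free", "adult"], ["news", "sports"],
        ["adult", "video"], ["weather", "news"]]
labels = ["porn", "benign", "porn", "benign"]
priors, counts, vocab = train_nb(docs, labels)
pred = predict_nb(["adult", "free"], priors, counts, vocab)
```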

  12. Reliable Devanagri Handwritten Numeral Recognition using Multiple Classifier and Flexible Zoning Approach

    Directory of Open Access Journals (Sweden)

    Pratibha Singh


    Full Text Available A reliability evaluation system for the recognition of Devanagri numerals is proposed in this paper. Reliability of classification is very important in optical character recognition applications: since outliers and ambiguity may affect the performance of a recognition system, a rejection measure is required for reliable recognition of a pattern. For each character image, preprocessing steps such as normalization, binarization, noise removal and boundary extraction are performed. After the bounding box is computed, features are extracted for each partition of the numeral image. Features are calculated with three different zoning methods. A directional feature is used, obtained from chain codes and gradient-direction quantization of the orientations. Zoning is considered first with uniform partitions and second with non-uniform compartments based on the density of the pixels. For classification, a 1-nearest-neighbor classifier, a quadratic Bayes classifier and a linear Bayes classifier are chosen as base classifiers. The base classifiers are combined using four decision-combination rules, namely maximum, median, average and majority voting. The framework is used to test the reliability of the recognition system against ambiguity.
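
    The four decision-combination rules named above are standard fixed fusion rules and can be sketched directly. The base-classifier scores below are invented for illustration.

```python
import numpy as np

def combine(probs, rule):
    # probs: (n_classifiers, n_classes) score matrix for one sample;
    # fuse with the chosen rule and return the winning class index.
    if rule == "max":
        fused = probs.max(axis=0)
    elif rule == "median":
        fused = np.median(probs, axis=0)
    elif rule == "average":
        fused = probs.mean(axis=0)
    elif rule == "vote":  # majority voting over per-classifier argmaxes
        fused = np.bincount(probs.argmax(axis=1), minlength=probs.shape[1])
    else:
        raise ValueError(rule)
    return int(np.argmax(fused))

# Three hypothetical base classifiers scoring 10 digit classes
probs = np.array([
    [0.1, 0.6, 0.3] + [0.0] * 7,
    [0.2, 0.5, 0.3] + [0.0] * 7,
    [0.4, 0.3, 0.3] + [0.0] * 7,
])
fused_class = combine(probs, "average")
```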


    Directory of Open Access Journals (Sweden)

    M. Favorskaya


    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at all time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers comprise a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not match the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, a dataset from "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results" including 393 dynamic hand gestures was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for the restricted set of dynamic gestures.

  14. A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks

    Directory of Open Access Journals (Sweden)

    Tao Geng


    Full Text Available A novel 4-class single-trial brain-computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, called a "parallel BCI." Unlike other BCIs, where mental tasks are executed and classified serially one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject's body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers classifies only the mental tasks executed on one side of the subject's body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data were recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can achieve higher accuracy than a conventional 4-class BCI, although both used the same feature selection and classification algorithms.
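
    The core trick, two binary LDA outputs (one per body side) fused into a 4-class decision, can be sketched as follows. The two-class Fisher LDA and the synthetic per-side features are illustrative stand-ins, not the study's EEG features or exact classifier settings.

```python
import numpy as np

class BinaryLDA:
    # Minimal two-class Fisher LDA with a shared within-class scatter
    def fit(self, X, y):
        m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
        Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
        self.w = np.linalg.solve(Sw, m1 - m0)
        self.b = -0.5 * self.w @ (m0 + m1)
        return self

    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

rng = np.random.default_rng(2)

def side_data(n=100):
    # Synthetic 4-D features for one body side, two separable task classes
    X = np.vstack([rng.normal(0, 1, (n, 4)), rng.normal(2, 1, (n, 4))])
    y = np.array([0] * n + [1] * n)
    return X, y

Xl, yl = side_data()
Xr, yr = side_data()
lda_left, lda_right = BinaryLDA().fit(Xl, yl), BinaryLDA().fit(Xr, yr)
# Fuse the two binary outputs into one of four classes: (left, right) -> 2*l + r
four_class = 2 * lda_left.predict(Xl) + lda_right.predict(Xr)
```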

  15. Development of a combined GIS, neural network and Bayesian classifier methodology for classifying remotely sensed data (United States)

    Schneider, Claudio Albert

    This research is aimed at the solution of two common but still largely unsolved problems in the classification of remotely sensed data: (1) classification accuracy of remotely sensed data decreases significantly in mountainous terrain, where topography strongly influences the spectral response of the features on the ground; and (2) when attempting to obtain more detailed classifications, e.g. forest cover types or species rather than just broad categories of forest such as coniferous or deciduous, the accuracy of the classification generally decreases significantly. The main objective of the study was to develop a widely applicable and efficient classification procedure for mapping forest and other cover types in mountainous terrain, using an integrated GIS/neural network/Bayesian classification approach. The performance of this new technique was compared to a standard supervised Maximum Likelihood classification technique, a "conventional" Bayesian/Maximum Likelihood classification, and a "conventional" neural network classifier. Results indicate a considerable improvement of the new technique over the standard Maximum Likelihood classification technique, as well as a better accuracy than the "conventional" Bayesian/Maximum Likelihood classifier (a 13.08 percent improvement in overall accuracy), but the "conventional" neural network classifiers outperformed all the techniques compared in this study, with an overall accuracy improvement of 15.94 percent over the standard Maximum Likelihood classifier (from 46.77 percent to 62.71 percent). However, the overall accuracies of all the classification techniques compared in this study were relatively low, which is believed to have been caused by problems related to the inadequacy of the reference data. The results also indicate the need to develop a different sampling design to cover more effectively the variability across all the parameters needed by the neural network classification technique.

  16. Counting, Measuring And The Semantics Of Classifiers

    Directory of Open Access Journals (Sweden)

    Susan Rothstein


    Full Text Available This paper makes two central claims. The first is that there is an intimate and non-trivial relation between the mass/count distinction on the one hand and the measure/individuation distinction on the other: a (if not the) defining property of mass nouns is that they denote sets of entities which can be measured, while count nouns denote sets of entities which can be counted. Crucially, this is a difference in grammatical perspective and not in ontological status. The second claim is that the mass/count distinction between two types of nominals has its direct correlate at the level of classifier phrases: classifier phrases like two bottles of wine are ambiguous between a counting, or individuating, reading and a measure reading. On the counting reading, this phrase has count semantics; on the measure reading, it has mass semantics.
    References:
    Borer, H. 1999. ‘Deconstructing the construct’. In K. Johnson & I. Roberts (eds.) ‘Beyond Principles and Parameters’, 43–89. Dordrecht: Kluwer.
    Borer, H. 2008. ‘Compounds: the view from Hebrew’. In R. Lieber & P. Stekauer (eds.) ‘The Oxford Handbook of Compounds’, 491–511. Oxford: Oxford University Press.
    Carlson, G. 1977a. ‘Amount relatives’. Language 53: 520–542.
    Carlson, G. 1977b. Reference to Kinds in English. Ph.D. thesis, University of Massachusetts at Amherst.
    Carlson, G. 1997. Quantifiers and Selection. Ph.D. thesis, University of Leiden.
    Chierchia, G. 2008. ‘Plurality of mass nouns and the notion of ‘semantic parameter’’. In S. Rothstein (ed.) ‘Events and Grammar’, 53–103. Dordrecht: Kluwer.
    Danon, G. 2008. ‘Definiteness spreading in the Hebrew construct state’. Lingua 118: 872–906.
    , B. 1992. ‘Toward a common semantics for English count and mass nouns’. Linguistics and Philosophy 15: 597–640.
    , A. & Landman, F. 1998. ‘Strange relatives of the third kind

  17. The Ideal Voting Interface: Classifying Usability

    Directory of Open Access Journals (Sweden)

    Damien Mac Namara


    Full Text Available This work presents a feature-oriented taxonomy for commercial electronic voting machines, which focuses on usability aspects. Based on this analysis, we propose a ‘Just-Like-Paper’ (JLP) classification method which identifies five broad categories of eVoting interface. We extend the classification to investigate its application as an indicator of voting efficiency and identify a universal ten-step process encompassing all possible voting steps spanning the twenty-six machines studied. Our analysis concludes that multi-functional and progressive interfaces are likely to be more efficient than multi-modal voter-activated machines.

  18. Cooling system for electronic components

    Energy Technology Data Exchange (ETDEWEB)

    Anderl, William James; Colgan, Evan George; Gerken, James Dorance; Marroquin, Christopher Michael; Tian, Shurong


    Embodiments of the present invention provide for non-interruptive fluid cooling of an electronic enclosure. One or more electronic component packages may be removable from a circuit card having a fluid flow system. When installed, the electronic component packages are coincident to and in a thermal relationship with the fluid flow system. If a particular electronic component package becomes non-functional, it may be removed from the electronic enclosure without affecting either the fluid flow system or other neighboring electronic component packages.

  20. A new approach to classifier fusion based on upper integral. (United States)

    Wang, Xi-Zhao; Wang, Ran; Feng, Hui-Min; Wang, Hua-Chao


    Fusing a number of classifiers can generally improve the performance of the individual classifiers, and the fuzzy integral, which can clearly express the interaction among the individual classifiers, has been acknowledged as an effective tool of fusion. In order to make the best use of the individual classifiers and their combinations, we propose in this paper a new scheme of classifier fusion based on upper integrals, which differs from all the existing models. Instead of being a fusion operator, the upper integral is used to reasonably arrange the finite resources, and thus to maximize the classification efficiency. By solving an optimization problem of upper integrals, we obtain a scheme for assigning proportions of examples to different individual classifiers and their combinations. According to these proportions, new examples can be classified by different individual classifiers and their combinations, and the combination of classifiers to which specific examples should be submitted depends on their performance. The definition of the upper integral theoretically guarantees that the classification efficiency of the fused classifier is not less than that of any individual classifier. Furthermore, numerical simulations demonstrate that most existing fusion methodologies, such as bagging and boosting, can be improved by our upper integral model.

  1. Image Classifying Registration and Dynamic Region Merging

    Directory of Open Access Journals (Sweden)

    Himadri Nath Moulick


    Full Text Available In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images locally modifies the intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weights the two mixture components. The registration problem is formulated both as an energy minimization problem and as a Maximum A Posteriori (MAP) estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated at the same time, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into

  2. Rule Based Ensembles Using Pair Wise Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    Moslem Mohammadi Jenghara


    Full Text Available In value estimation, the average of many inexperienced people's estimates is a good approximation to the true value, provided that the answers of these individuals are independent. Classifier ensembles implement this principle in classification tasks and are investigated in two aspects. In the first, the feature space is divided into several local regions and each region is assigned a highly competent classifier; in the second, the base classifiers are applied in parallel and their outputs combined to achieve a group consensus. In this paper a combination of the two methods is used. An important consideration in classifier combination is that much better results can be achieved if diverse classifiers, rather than similar classifiers, are combined. To achieve diversity in the classifier outputs, a symmetric pairwise weighted feature space is used, and the outputs of classifiers trained over the weighted feature space are combined to infer the final result. In this paper MLP classifiers are used as the base classifiers. The experimental results show that the applied method is promising.

  3. 3D cerebral MR image segmentation using multiple-classifier system. (United States)

    Amiri, Saba; Movahedi, Mohammad Mehdi; Kazemi, Kamran; Parsaei, Hossein


    The three soft brain tissues white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) identified in a magnetic resonance (MR) image via image segmentation techniques can aid in structural and functional brain analysis, measurement and visualization of the brain's anatomical structures, diagnosis of neurodegenerative disorders, and surgical planning and image-guided interventions, but only if the obtained segmentation results are correct. This paper presents a multiple-classifier-based system for automatic brain tissue segmentation from cerebral MR images. The developed system categorizes each voxel of a given MR image as GM, WM, or CSF. The algorithm consists of preprocessing, feature extraction, and supervised classification steps. In the first step, intensity non-uniformity in a given MR image is corrected and then non-brain tissues such as skull, eyeballs, and skin are removed from the image. For each voxel, statistical and non-statistical features are computed and used as a feature vector representing the voxel. Three multilayer perceptron (MLP) neural networks trained using three different datasets are used as the base classifiers of the multiple-classifier system. The outputs of the base classifiers are fused using a majority voting scheme. Evaluation of the proposed system was performed using BrainWeb simulated MR images with different noise and intensity non-uniformity levels and Internet Brain Segmentation Repository (IBSR) real MR images. The quantitative assessment of the proposed method using Dice, Jaccard, and conformity coefficient metrics demonstrates improvement (around 5 % for CSF) in terms of accuracy as compared to a single MLP classifier and existing methods and tools such as FSL-FAST and SPM. As accurately segmenting an MR image is of paramount importance for successfully promoting the clinical application of MR image segmentation techniques, the improvement obtained by using the multiple-classifier-based system is encouraging.
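
    The majority-voting fusion of the three base classifiers can be sketched as follows; the tie-breaking behavior (first-seen label wins) is an assumption, as the paper does not specify one:

    ```python
    from collections import Counter

    def majority_vote(predictions):
        """Fuse per-voxel labels from several base classifiers.

        predictions: list of labels for one voxel, e.g. ['GM', 'GM', 'WM']
        from three base classifiers. Returns the most common label; on a
        tie, the label seen first wins (an assumed policy).
        """
        return Counter(predictions).most_common(1)[0][0]

    # three base classifiers disagree on one voxel; the majority wins
    fused = majority_vote(['GM', 'GM', 'WM'])   # -> 'GM'
    ```

    With an odd number of base classifiers and a three-class label set, ties can still occur (three distinct votes), which is why a tie policy has to be chosen.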

  4. Bayesian classifiers applied to the Tennessee Eastman process. (United States)

    Dos Santos, Edimilson Batista; Ebecken, Nelson F F; Hruschka, Estevam R; Elkamel, Ali; Madhuranthakam, Chandra M R


    Fault diagnosis includes the main task of classification. Bayesian networks (BNs) present several advantages in the classification task, and previous works have suggested their use as classifiers. Because a classifier is often only one part of a larger decision process, this article proposes, for industrial process diagnosis, the use of a Bayesian method called dynamic Markov blanket classifier that has as its main goal the induction of accurate Bayesian classifiers having dependable probability estimates and revealing actual relationships among the most relevant variables. In addition, a new method, named variable ordering multiple offspring sampling capable of inducing a BN to be used as a classifier, is presented. The performance of these methods is assessed on the data of a benchmark problem known as the Tennessee Eastman process. The obtained results are compared with naive Bayes and tree augmented network classifiers, and confirm that both proposed algorithms can provide good classification accuracies as well as knowledge about relevant variables.
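
    For reference, the naive Bayes baseline mentioned above can be sketched as a minimal Gaussian naive Bayes classifier in plain Python; this is the generic algorithm, not the article's dynamic Markov blanket method, and the data below are illustrative:

    ```python
    import math

    class GaussianNaiveBayes:
        """Minimal Gaussian naive Bayes: independent Gaussian likelihood per feature."""

        def fit(self, X, y):
            self.params = {}
            for c in set(y):
                rows = [x for x, lab in zip(X, y) if lab == c]
                n = len(rows)
                means = [sum(col) / n for col in zip(*rows)]
                # small constant avoids zero variance on degenerate features
                vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                         for col, m in zip(zip(*rows), means)]
                self.params[c] = (n / len(y), means, vars_)
            return self

        def predict(self, x):
            def log_post(c):
                prior, means, vars_ = self.params[c]
                ll = math.log(prior)
                for v, m, s2 in zip(x, means, vars_):
                    ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
                return ll
            return max(self.params, key=log_post)

    # two well-separated operating conditions in a 2-D feature space
    X = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
    y = ['normal', 'normal', 'fault', 'fault']
    clf = GaussianNaiveBayes().fit(X, y)
    pred = clf.predict([5.1, 5.0])   # -> 'fault'
    ```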

  5. Stochastic margin-based structure learning of Bayesian network classifiers. (United States)

    Pernkopf, Franz; Wohlmayr, Michael


    The margin criterion for parameter learning in graphical models gained significant impact over the last years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires less score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve similar classification performance as support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  6. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers



    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on...

  7. Construction of unsupervised sentiment classifier on idioms resources

    Institute of Scientific and Technical Information of China (English)

    谢松县; 王挺


    Sentiment analysis is the computational study of how opinions, attitudes, emotions, and perspectives are expressed in language, and is an important task of natural language processing, highly valuable for both research and practical applications. This work focuses on the difficulties in the construction of sentiment classifiers, which normally need a tremendous amount of labeled in-domain training data, and proposes a novel unsupervised framework that makes use of Chinese idiom resources to develop a general sentiment classifier. Furthermore, the domain adaptation of the general sentiment classifier is improved by taking the general classifier as the base of a self-training procedure to obtain a domain self-training sentiment classifier. To validate the effect of the unsupervised framework, several experiments were carried out on a publicly available dataset of Chinese online reviews. The experiments show that the proposed framework is effective and achieves encouraging results. Specifically, the general classifier outperforms two baselines (a naïve 50% baseline and a cross-domain classifier), and the bootstrapping self-training classifier approximates the upper-bound domain-specific classifier, with a lowest accuracy of 81.5%, while its performance is more stable and the framework needs no labeled training dataset.

  8. 6 CFR 7.23 - Emergency release of classified information. (United States)


    ... Classified Information Non-disclosure Form. In emergency situations requiring immediate verbal release of... information through approved communication channels by the most secure and expeditious method possible, or...

  9. Study on the electron transfer activity of photosystem Ⅱ with the manganese cluster removed, using exogenous electron carriers

    Institute of Scientific and Technical Information of China (English)

    由万胜; 黄海丽; 康阳; 姚明东; 陈钧


    Photosystem Ⅱ (BBY) splits water at the manganese cluster (OEC) and transfers electrons through the thylakoid-membrane electron transport chain to the exogenous electron acceptor side. Whether photosystem Ⅱ with the manganese cluster removed (Tris-Wash BBY) retains electron transfer capability is still a question worth studying. This article therefore introduces an exogenous electron donor, 1,5-diphenylcarbazide (DPC), in place of the manganese cluster to supply electrons to the acceptor side, in order to study the electron transfer activity of Tris-Wash BBY. The experiments show that, under illumination, Tris-Wash BBY can transfer electrons from DPC to the exogenous electron acceptor 2,6-dichlorophenolindophenol (DCPIP). Using UV-visible spectroscopy, the electron transfer activity of Tris-Wash BBY and BBY can be characterized by analyzing the amount of DCPIP reduced.

  10. Optimal properties of centroid-based classifiers for very high-dimensional data

    CERN Document Server

    Hall, Peter; 10.1214/09-AOS736


    We show that scale-adjusted versions of the centroid-based classifier enjoy optimal properties when used to discriminate between two very high-dimensional populations where the principal differences are in location. The scale adjustment removes the tendency of scale differences to confound differences in means. Certain other distance-based methods, for example those founded on nearest-neighbor distance, do not have optimal performance in the sense that we propose. Our results permit varying degrees of sparsity and signal strength to be treated, and require only mild conditions on the dependence of vector components. Additionally, we permit the marginal distributions of vector components to vary extensively. In addition to providing theory, we explore numerical properties of a centroid-based classifier, and show that these features reflect theoretical accounts of performance.
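
    A plain (unadjusted) nearest-centroid rule, the starting point of the paper, can be sketched as follows; the scale adjustment that constitutes the paper's contribution is deliberately omitted, and the data are illustrative:

    ```python
    def centroid_classifier(x, sample1, sample2):
        """Assign x to the population whose centroid is nearer.

        This is the plain centroid rule; without the paper's scale
        adjustment, scale differences between the two populations can
        confound the comparison of distances in high dimensions.
        """
        def centroid(sample):
            n = len(sample)
            return [sum(col) / n for col in zip(*sample)]

        def sq_dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))

        c1, c2 = centroid(sample1), centroid(sample2)
        return 1 if sq_dist(x, c1) <= sq_dist(x, c2) else 2

    group1 = [[0.0, 0.0], [0.2, -0.1], [-0.1, 0.1]]
    group2 = [[3.0, 3.0], [2.9, 3.1], [3.1, 2.8]]
    label = centroid_classifier([2.8, 3.0], group1, group2)   # -> 2
    ```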

  11. Selection of effective EEG channels in brain computer interfaces based on inconsistencies of classifiers. (United States)

    Yang, Huijuan; Guan, Cuntai; Ang, Kai Keng; Phua, Kok Soon; Wang, Chuanchu


    This paper proposes a novel method to select effective electroencephalography (EEG) channels for motor imagery tasks based on the inconsistencies among multiple classifiers. The inconsistency criterion for channel selection was designed based on the fluctuation of the classification accuracies among different classifiers when noisy channels were included. These noisy channels were then identified and removed until a required number of channels was selected or a predefined classification accuracy with reference to a baseline was obtained. Experiments conducted on a dataset of 13 healthy subjects performing hand grasping and idle tasks revealed that the EEG channels from the motor area were most frequently selected. Furthermore, mean increases of 4.07%, 3.10%, and 1.77% in the averaged accuracies, in comparison with four existing channel selection methods, were achieved for the non-feedback, feedback, and calibration sessions, respectively, by selecting as few as seven channels. These results further validate the effectiveness of our proposed method.
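
    The inconsistency criterion can be illustrated by ranking channels by the spread of their classification accuracies across classifiers; this is a simplified, non-iterative sketch of the idea, with hypothetical channel names and accuracy values:

    ```python
    def select_channels(accuracies, keep):
        """Rank channels by the spread of accuracies across classifiers.

        accuracies: {channel: [acc_clf1, acc_clf2, ...]}. Channels whose
        accuracy fluctuates most between classifiers are treated as noisy
        and dropped first; the `keep` channels with the smallest spread
        remain. The paper's actual procedure is iterative; this one-shot
        ranking is a simplification.
        """
        spread = {ch: max(a) - min(a) for ch, a in accuracies.items()}
        ranked = sorted(accuracies, key=lambda ch: spread[ch])
        return ranked[:keep]

    acc = {
        'C3': [0.82, 0.81, 0.83],   # stable across classifiers -> keep
        'C4': [0.80, 0.79, 0.80],   # stable across classifiers -> keep
        'Fp1': [0.60, 0.75, 0.52],  # fluctuating (noisy) -> drop
    }
    selected = select_channels(acc, keep=2)   # -> C3 and C4 survive
    ```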

  12. Characteristics of the molar surface after removal of cervical enamel projections: comparison of three different rotating instruments (United States)


    Purpose The aim of this study was to evaluate and compare tooth surface characteristics in extracted human molars after cervical enamel projections (CEPs) were removed with three rotating instruments. Methods We classified 60 molars, extracted due to periodontal lesions and presenting CEPs, into grade I, II, or III according to the Masters and Hoskins’ criteria. Each group contained 20 specimens. Three rotating instruments were used to remove the CEPs: a piezoelectric ultrasonic scaler, a periodontal bur, and a diamond bur. Tooth surface characteristics before and after removal of the projections were then evaluated with scanning electron microscopy (SEM). We analyzed the characteristics of the tooth surfaces with respect to roughness and whether the enamel projections had been completely removed. Results In SEM images, surfaces treated with the diamond bur were the smoothest, but this instrument caused considerable harm to tooth structures near the CEPs. The piezoelectric ultrasonic scaler produced the roughest surface but caused less harm to the tooth structure near the furcation. In general, the surfaces treated with the periodontal bur were smoother than those treated with the ultrasonic scaler, and the periodontal bur did not invade adjacent tooth structures. Conclusions For removal of grade II CEPs, the most effective instrument was the diamond bur. However, in removing grade III projections, the diamond bur can destroy both adjacent tooth structures and the periodontal apparatus. In such cases, careful use of the periodontal bur may be an appropriate substitute. PMID:27127691

  13. Parathyroid gland removal (United States)

    Removal of parathyroid gland; Parathyroidectomy; Hyperparathyroidism - parathyroidectomy; PTH - parathyroidectomy ... and pain-free) for this surgery. Usually the parathyroid glands are removed using a 2- to 4- ...

  14. Quantum classifying spaces and universal quantum characteristic classes

    CERN Document Server

    Durdevic, M


    A construction of the noncommutative-geometric counterparts of classical classifying spaces is presented, for general compact matrix quantum structure groups. A quantum analogue of the classical concept of the classifying map is introduced and analyzed. Interrelations with the abstract algebraic theory of quantum characteristic classes are discussed. Various non-equivalent approaches to defining universal characteristic classes are outlined.

  15. Classifying queries submitted to a vertical search engine

    NARCIS (Netherlands)

    Berendsen, R.; Kovachev, B.; Meij, E.; de Rijke, M.; Weerkamp, W.


    We propose and motivate a scheme for classifying queries submitted to a people search engine. We specify a number of features for automatically classifying people queries into the proposed classes and examine the effectiveness of these features. Our main finding is that classification is feasible and that

  16. 16 CFR 1610.4 - Requirements for classifying textiles. (United States)


    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Requirements for classifying textiles. 1610... REGULATIONS STANDARD FOR THE FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.4 Requirements for classifying textiles. (a) Class 1, Normal Flammability. Class 1 textiles exhibit normal flammability and...

  17. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen


    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  18. 21 CFR 1402.4 - Information classified by another agency. (United States)


    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Information classified by another agency. 1402.4 Section 1402.4 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY MANDATORY DECLASSIFICATION REVIEW § 1402.4 Information classified by another agency. When a request is received for information that...

  19. 25 CFR 304.3 - Classifying and marking of silver. (United States)


    ... 25 Indians 2 2010-04-01 2010-04-01 false Classifying and marking of silver. 304.3 Section 304.3 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR NAVAJO, PUEBLO, AND HOPI SILVER, USE OF GOVERNMENT MARK § 304.3 Classifying and marking of silver. For the present the Indian Arts and Crafts...

  20. Analysis of Sequence Based Classifier Prediction for HIV Subtypes

    Directory of Open Access Journals (Sweden)

    S. Santhosh Kumar


    Full Text Available Human immunodeficiency virus (HIV) is a lentivirus that causes acquired immunodeficiency syndrome (AIDS). A main difficulty in the HIV treatment process is subtype prediction. The subtype and group classification of HIV is based on its genetic variability and location. HIV can be divided into two major types, HIV type 1 (HIV-1) and HIV type 2 (HIV-2). Many classifier approaches have been used to classify HIV subtypes based on their group, but some cases have two groups in one; in such cases the classification becomes more complex. The methodology used in this paper is based on HIV sequences. In this work several classifier approaches are used to classify HIV-1 and HIV-2. For implementation, a real-time patient database is used, the patient records are experimented on, and the best classifier is identified by quick response time and lowest error rate.

  1. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed


    Full Text Available Breast cancer detection and diagnosis is a critical and complex procedure that demands a high degree of accuracy. In computer-aided diagnostic systems, breast cancer detection is a two-stage procedure: first, malignant and benign mammograms are classified, and in the second stage, the type of abnormality is detected. In this paper, we have developed a novel architecture to enhance the classification of malignant and benign mammograms using multi-classification of malignant mammograms into six abnormality classes. DWT (discrete wavelet transform) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, the results generated by the various classifiers are ensembled. A genetic algorithm is used to find optimal weights, rather than assigning weights to the results of the classifiers on the basis of heuristics. The mammograms declared as malignant by the ensemble classifiers are divided into six classes. The ensemble classifiers are further used for multi-classification using the one-against-all technique. The output of all ensemble classifiers is combined by the product, median, and mean rules. It has been observed that the accuracy of classification of abnormalities is more than 97% in the case of the mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.
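
    The mean-rule combination of ensemble outputs can be sketched as follows; the class-probability values are illustrative, not from the paper:

    ```python
    def mean_rule(prob_vectors):
        """Combine per-class probability vectors from several ensemble
        classifiers by averaging, then pick the highest-scoring class."""
        n = len(prob_vectors)
        avg = [sum(p[i] for p in prob_vectors) / n
               for i in range(len(prob_vectors[0]))]
        # index of the class with the highest averaged score
        return max(range(len(avg)), key=lambda i: avg[i]), avg

    # three one-against-all ensembles score six abnormality classes
    outputs = [
        [0.10, 0.50, 0.10, 0.10, 0.10, 0.10],
        [0.05, 0.60, 0.05, 0.10, 0.10, 0.10],
        [0.20, 0.40, 0.10, 0.10, 0.10, 0.10],
    ]
    winner, averaged = mean_rule(outputs)   # -> class index 1
    ```

    The product and median rules mentioned in the abstract differ only in the reduction applied across classifiers before the arg-max.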

  2. Radon Removal from Liquid Xenon

    Energy Technology Data Exchange (ETDEWEB)

    Martens, Kai [IPMU, The University of Tokyo, 456 Higashi-Mozumi, Kamioka-cho, Hida-shi, Gifu 506-1205 (Japan)


    Efforts are underway in Kamioka to develop a new method to remove Rn directly from the liquid phase of Xe. The idea is based on the observation that, in the electronic structure of liquid noble gases, charges can get trapped on impurities, and the charged impurities can then be drifted through the noble-gas liquid. In the case of Rn, drifting the impurity into a suitable storage volume is enough, as it will decay.

  3. Faint spatial object classifier construction based on data mining technology (United States)

    Lou, Xin; Zhao, Yang; Liao, Yurong; Nie, Yong-ming


    Data mining can effectively obtain a faint spatial object's patterns and characteristics, universal relations, and other implicit data characteristics, the key to which is classifier construction. Faint spatial object classifier construction with spatial data mining technology for faint spatial target detection is proposed, based on a detailed theoretical analysis of design procedures and guidelines. To address the one-sidedness weakness of this method in dealing with fuzziness and randomness, a cloud model classifier is proposed. Simulation results indicate that this method can achieve classification quickly through feature combination and effectively resolve the one-sidedness weakness.

  4. Representation of classifier distributions in terms of hypergeometric functions

    Institute of Scientific and Technical Information of China (English)


    This paper derives alternative analytical expressions for classifier product distributions in terms of the Gauss hypergeometric function, 2F1, by considering a feed distribution defined in terms of the Gates-Gaudin-Schumann function and an efficiency curve defined in terms of a logistic function. It is shown that classifier distributions under dispersed conditions of classification pivot at a common size and the distributions are difference similar. The paper also addresses an inverse problem of classifier distributions wherein the feed distribution and efficiency curve are identified from the measured product distributions without needing to know the solid flow split of particles to any of the product streams.
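
    The forward model described above (a Gates-Gaudin-Schumann feed split by a logistic efficiency curve into coarse and fine products) can be sketched numerically; all parameter values below are illustrative assumptions, and the paper's analytical 2F1 expressions are not reproduced here:

    ```python
    import math

    def ggs_mass_fraction(x, x_max, m, dx):
        """Mass in the size interval [x, x+dx] for a Gates-Gaudin-Schumann
        feed with cumulative passing F(x) = (x / x_max) ** m."""
        F = lambda s: (min(s, x_max) / x_max) ** m
        return F(x + dx) - F(x)

    def efficiency(x, x50, k):
        """Logistic efficiency curve: probability of reporting to the
        coarse product at size x, with cut size x50 and sharpness k."""
        return 1.0 / (1.0 + math.exp(-k * (x - x50)))

    # discretize sizes and split the feed into coarse and fine products
    x_max, m, x50, k, dx = 100.0, 0.8, 30.0, 0.15, 1.0
    coarse = fine = 0.0
    x = 0.0
    while x < x_max:
        f = ggs_mass_fraction(x, x_max, m, dx)
        e = efficiency(x + dx / 2, x50, k)   # evaluate at interval midpoint
        coarse += f * e
        fine += f * (1.0 - e)
        x += dx

    # mass balance: the two products together recover the whole feed
    total = coarse + fine   # -> 1.0 up to rounding
    ```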

  5. Computer-aided diagnosis system for classifying benign and malignant thyroid nodules in multi-stained FNAB cytological images. (United States)

    Gopinath, Balasubramanian; Shanthi, Natesan


    An automated computer-aided diagnosis system is developed to classify benign and malignant thyroid nodules using multi-stained fine needle aspiration biopsy (FNAB) cytological images. In the first phase, image segmentation is performed to remove the background staining information and retain the appropriate foreground cell objects in the cytological images, using mathematical morphology and watershed transform segmentation methods. Subsequently, statistical features are extracted using two-level discrete wavelet transform (DWT) decomposition, gray level co-occurrence matrix (GLCM) and Gabor filter based methods. The k-nearest neighbor (k-NN), Elman neural network (ENN) and support vector machine (SVM) classifiers are tested for classifying benign and malignant thyroid nodules. The combination of watershed segmentation, GLCM features and the k-NN classifier yields the lowest diagnostic accuracy, 60%. The highest diagnostic accuracy, 93.33%, is achieved by the ENN classifier trained with the statistical features extracted by the Gabor filter bank from images segmented by the morphology and watershed transform segmentation methods. It is also observed that the SVM classifier achieves its highest diagnostic accuracy, 90%, for DWT and Gabor filter based features along with the morphology and watershed transform segmentation methods. The experimental results suggest that the developed system with multi-stained thyroid FNAB images would be useful for identifying thyroid cancer irrespective of the staining protocol used.

  6. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier. (United States)

    Li, Qiang; Gu, Yu; Jia, Jing


    Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is therefore of practical significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors, based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms, in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier, which predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3%) showed the superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and the moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability and good fitting and prediction (generalization) performance in the classification of the Chinese liquors. Taking both the application of the e-nose and the validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.

  7. High speed intelligent classifier of tomatoes by colour, size and weight

    Energy Technology Data Exchange (ETDEWEB)

    Cement, J.; Novas, N.; Gazquez, J. A.; Manzano-Agugliaro, F.


    At present most horticultural products are classified and marketed according to quality standards, which provide a common language for growers, packers, buyers and consumers. The standardisation of both product and packaging enables greater speed and efficiency in management and marketing. Of all the vegetables grown in greenhouses, tomatoes are predominant in both surface area and tons produced. This paper presents the development and evaluation of a low-investment classification system for tomatoes with these objectives: to put it at the service of producing farms and to classify to trading standards. An intelligent classifier of tomatoes by weight, diameter and colour has been developed. This system has optimised the necessary algorithms for data processing in the case of tomatoes, so that productivity is greatly increased while using less expensive, lower-performance electronics. The prototype achieves very high classification speed, 12.5 classifications per second, using accessible and low-cost commercial equipment. It reduces manual sorting time fourfold and is not sensitive to the variety of tomato classified. This system facilitates the processes of standardisation and quality control, increases the competitiveness of tomato farms and impacts positively on profitability. The automatic classification system described in this work represents a contribution from the economic point of view, as it is profitable for a farm in the short term (less than six months), whereas the existing systems can only be used in large trading centers. (Author) 36 refs.


    Institute of Scientific and Technical Information of China (English)

    Liu Qingshan; Lu Hanqing; Ma Songde


    A non-parametric Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition, which can be regarded as a weighted Nearest Neighbor (NN) classifier in formulation. The class-conditional density is estimated by KDE, and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are respectively used to extract features, and the proposed method is compared with the Probabilistic Reasoning Models (PRM), Nearest Center (NC) and NN classifiers, which are widely used in face recognition systems. The experiments are performed on two benchmarks, and the experimental results show that the KDE classifier outperforms the PRM, NC and NN classifiers.
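
    As a rough illustration of the idea in this record, the following is a minimal one-dimensional sketch of a KDE-based Bayes classifier. The bandwidth is fixed by hand rather than fitted by EM as in the paper, and the training data are invented toy values, not from the paper's benchmarks.

```python
import math

def gaussian_kde(x, samples, h):
    """Parzen estimate of a class-conditional density at x with bandwidth h."""
    return sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples) / (
        len(samples) * h * math.sqrt(2 * math.pi))

def kde_bayes_classify(x, classes, h=0.5):
    """classes: dict label -> list of 1-D training samples.
    Returns the label maximizing prior * class-conditional KDE."""
    n = sum(len(v) for v in classes.values())
    scores = {c: (len(s) / n) * gaussian_kde(x, s, h) for c, s in classes.items()}
    return max(scores, key=scores.get)

# toy 1-D example: class A clustered near 0, class B near 2
train = {"A": [0.0, 0.2, -0.1, 0.1], "B": [2.0, 2.1, 1.9, 2.2]}
print(kde_bayes_classify(0.05, train))  # "A"
print(kde_bayes_classify(2.0, train))   # "B"
```

    In the paper's formulation the kernel weights make this a weighted NN rule; with a very small bandwidth the decision reduces to plain nearest-neighbor classification.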


    Institute of Scientific and Technical Information of China (English)

    Ning Xu; Guohua Li; Zhichu Huang


    The flow field inside a turbo classifier is important but difficult to study. According to the stochastic trajectory model of particles in gas-solid two-phase flow, and adopting the PHOENICS code, numerical simulation is carried out on the flow field, including particle trajectories, in the inner cavity of a turbo classifier, using both straight and backward-curved elbow blades. Computational results show that when the backward-curved elbow blades are used, the mixed stream that passes through the two blades produces a vortex in the positive direction which counteracts the attached vortex in the opposite direction due to the high-speed turbo rotation, making the flow steadier and thus improving both the grade efficiency and the precision of the turbo classifier. This research provides theoretical support for designing sub-micron particle classifiers with high efficiency and accuracy.

  10. 42 CFR 37.50 - Interpreting and classifying chest roentgenograms. (United States)


    ... interpreted and classified in accordance with the ILO Classification system and recorded on a Roentgenographic... under the Act, shall have immediately available for reference a complete set of the ILO...

  11. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K


    We develop a novel statistical approach for classifying generalists and specialists in two distinct habitats. Using a multinomial model based on estimated species relative abundance in two habitats, our method minimizes bias due to differences in sampling intensities between two habitat types...... as well as bias due to insufficient sampling within each habitat. The method permits a robust statistical classification of habitat specialists and generalists, without excluding rare species a priori. Based on a user-defined specialization threshold, the model classifies species into one of four groups...... fraction (57.7%) of bird species with statistical confidence. Based on a conservative specialization threshold and adjustment for multiple comparisons, 64.4% of tree species in the full sample were too rare to classify with confidence. Among the species classified, OG specialists constituted the largest...

  12. A semi-automated approach to building text summarisation classifiers

    Directory of Open Access Journals (Sweden)

    Matias Garcia-Constantino


    Full Text Available An investigation into the extraction of useful information from the free-text element of questionnaires, using a semi-automated summarisation extraction technique, is described. The summarisation technique utilises the concept of classification, but with the support of domain/human experts during classifier construction. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool), is presented and evaluated using real questionnaire data. The results of this evaluation are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

  13. One pass learning for generalized classifier neural network. (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu


    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a smoothing parameter optimized by gradient descent to provide efficient classification. However, the optimization consumes quite a long time, which is a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the method tries to handle these differences by defining two functions for smoothing parameter calculation; thresholding is applied to determine which function is used. One of these functions is defined for datasets having a wide range of values: it provides balanced smoothing parameters for such datasets through a logarithmic function and by shifting the operating range toward the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural networks in the MATLAB environment. One-pass learning provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural networks. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
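
    A simplified sketch of the one-pass idea described above: derive each class's smoothing parameter directly from its standard deviation in a single pass, then classify in a PNN/RBF style. The log-compression threshold rule here is a hypothetical stand-in for the paper's two functions, and the data are invented.

```python
import math
from statistics import pstdev

def class_sigmas(data, threshold=1.0):
    """One-pass smoothing parameters: one sigma per class from the class's
    standard deviation, log-compressed for wide-spread classes (a simplified
    stand-in for the paper's thresholded pair of functions)."""
    sigmas = {}
    for label, xs in data.items():
        s = pstdev(xs) or 1e-6  # guard against zero spread
        sigmas[label] = math.log1p(s) if s > threshold else s
    return sigmas

def pnn_classify(x, data, sigmas):
    """RBF-style vote: each class sums Gaussian kernels over its own samples
    using its one-pass smoothing parameter."""
    def score(label):
        s = sigmas[label]
        return sum(math.exp(-((x - v) / s) ** 2 / 2) for v in data[label])
    return max(data, key=score)

data = {"low": [1.0, 1.2, 0.9], "high": [10.0, 10.5, 9.8]}
sig = class_sigmas(data)
print(pnn_classify(1.1, data, sig))   # "low"
```

    The point of the one-pass scheme is that `class_sigmas` replaces the gradient-descent search over smoothing parameters, which is where the reported thousand-fold speedup comes from.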

  14. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin


    Full Text Available Abstract Background Electrodermal reactions (EDRs can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations. Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.

  15. LESS: a model-based classifier for sparse subspaces. (United States)

    Veenman, Cor J; Tax, David M J


    In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.

  16. Low rank updated LS-SVM classifiers for fast variable selection. (United States)

    Ojeda, Fabian; Suykens, Johan A K; De Moor, Bart


    Least squares support vector machine (LS-SVM) classifiers are a class of kernel methods whose solution follows from a set of linear equations. In this work we present low rank modifications to the LS-SVM classifiers that are useful for fast and efficient variable selection. The inclusion or removal of a candidate variable can be represented as a low rank modification to the kernel matrix (linear kernel) of the LS-SVM classifier. In this way, the LS-SVM solution can be updated rather than being recomputed, which improves the efficiency of the overall variable selection process. Relevant variables are selected according to a closed form of the leave-one-out (LOO) error estimator, which is obtained as a by-product of the low rank modifications. The proposed approach is applied to several benchmark data sets as well as two microarray data sets. When compared to other related algorithms used for variable selection, simulations applying our approach clearly show a lower computational complexity together with good stability on the generalization error.
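
    The key trick in this record is that adding or removing a candidate variable perturbs the linear-kernel matrix by a low-rank term, so the solution can be updated rather than recomputed. A generic rank-1 illustration of that idea using the Sherman-Morrison identity follows; this is a sketch of the mechanism, not the authors' exact LS-SVM formulation, and the matrices are invented.

```python
def mat_vec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def sherman_morrison(Ainv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} via Sherman-Morrison:
    (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)."""
    Au = mat_vec(Ainv, u)                               # A^{-1} u (column)
    vA = mat_vec([list(col) for col in zip(*Ainv)], v)  # v^T A^{-1} (row)
    denom = 1.0 + sum(a * b for a, b in zip(v, Au))
    n = len(Ainv)
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# Adding one candidate variable perturbs the (linear-kernel) matrix by a
# rank-1 term u v^T; update the stored inverse instead of re-inverting.
Ainv = [[1.0, 0.0], [0.0, 0.5]]      # inverse of diag(1, 2)
u = [1.0, 1.0]
v = [1.0, 1.0]
Binv = sherman_morrison(Ainv, u, v)  # inverse of diag(1, 2) + u v^T = [[2, 1], [1, 3]]
```

    Each candidate variable can thus be scored (e.g. by the closed-form LOO error the paper mentions) at the cost of a rank-1 update, O(n^2), instead of a fresh O(n^3) solve.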

  17. A Lightweight Data Preprocessing Strategy with Fast Contradiction Analysis for Incremental Classifier Learning

    Directory of Open Access Journals (Sweden)

    Simon Fong


    Full Text Available A prime objective in constructing data stream mining models is to achieve good accuracy, fast learning, and robustness to noise. Although many techniques have been proposed in the past, efforts to improve the accuracy of classification models have been somewhat disparate. These techniques include, but are not limited to, feature selection, dimensionality reduction, and the removal of noise from training data. One limitation common to all of these techniques is the assumption that the full training dataset must be applied. Although this has been effective for traditional batch training, it may not be practical for incremental classifier learning, also known as data stream mining, where only a single pass over the data stream is seen at a time. Because data streams are potentially unbounded (the so-called big data phenomenon), data preprocessing time must be kept to a minimum. This paper introduces a new data preprocessing strategy suitable for the progressive purging of noisy data from the training dataset without the need to process the whole dataset at one time. This strategy is shown via a computer simulation to provide the significant benefit of allowing for the dynamic removal of bad records from the incremental classifier learning process.

  18. Teaching with Crystal Structures: Helping Students Recognize and Classify the Smallest Repeating Particle in a Given Substance (United States)

    Smithenry, Dennis W.


    Classifying a particle requires an understanding of the type of bonding that exists within and among the particles, which requires an understanding of atomic structure and electron configurations, which requires an understanding of the elements of periodic properties, and so on. Rather than getting tangled up in all of these concepts at the start…

  19. Decision Tree Classifiers for Star/Galaxy Separation (United States)

    Vasconcellos, E. C.; de Carvalho, R. R.; Gal, R. R.; LaBarbera, F. L.; Capelato, H. V.; Frago Campos Velho, H.; Trevisan, M.; Ruiz, R. S. R.


    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 19 and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.

  20. Representative Vector Machines: A Unified Framework for Classical Classifiers. (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu


    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theory or application motivations and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On the one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired in the framework of RVMs. For example, a robust pattern classification method called discriminant vector machine (DVM) is motivated from RVMs. Given a test example, DVM first finds its k-NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.

  1. Phenol removal pretreatment process (United States)

    Hames, Bonnie R.


    A process for removing phenols from an aqueous solution is provided, which comprises the steps of contacting a mixture comprising the solution and a metal oxide, forming a phenol metal oxide complex, and removing the complex from the mixture.


    Directory of Open Access Journals (Sweden)

    R. Arulmurugan


    Full Text Available Wavelet neural networks have seen a revival and extensive use in digital image processing. Shape representation, classification and detection play very important roles in image analysis. Boosted Greedy Sparse Linear Discriminant Analysis (BGSLDA) trains the detection cascade efficiently: by applying a reweighting concept and deploying a class-separability criterion, less search is spent on finding efficient weak classifiers. Meanwhile, the Multi-Scale Histogram of Oriented Gradients (MS-HOG) method removes confined portions of images. The MS-HOG algorithm handles advanced recognition scenarios such as rotations and translations of multiple objects, but does not perform effective feature classification. To overcome the drawbacks in the classification of higher-order units, the Fusion Elevated Order Classifier (FEOC) method is introduced. FEOC combines different higher-order units to deal with diverse datasets by changing the order of the units under parametric considerations. FEOC uses a prominent value of input neurons for better fitting properties, resulting in a higher level of learning parameters (i.e., weights); its features are reduced using a feature subset selection method. Elevation mechanisms are applied to the neuron, the neuron activation function type and, finally, the higher-order types of neural network with adaptive functions. FEOC evaluates a sigma-pi network representing both the Elevated-order Processing Unit (EPU) and a pi-sigma network. The experimental performance of the Fusion Elevated Order Classifier in the wavelet neural network is evaluated against BGSLDA and MS-HOG using the Statlog (Landsat Satellite) Data Set from the UCI repository. FEOC is implemented in MATLAB and assessed on factors such as classification accuracy rate, false positive error, computational cost, memory consumption, response time and higher-order classifier rate.


    CERN Multimedia

    Groupe ST-HM


    The Removals Service recommends that you plan your removals well in advance, taking into account the fact that the Transport and Handling Group’s main priority remains the dismantling of LEP and the installation of the LHC. Requests can be made by: Thank you for your cooperation.

  4. [Horticultural plant diseases multispectral classification using combined classified methods]. (United States)

    Feng, Jie; Li, Hong-Ning; Yang, Wei-Ping; Hou, De-Dong; Liao, Ning-Fang


    Research on multispectral data processing is receiving more and more attention with the development of multispectral techniques, data-capturing capability, and the application of multispectral techniques in agricultural practice. In the present paper, the common diseases of a cultivated plant, cucumber (Trichothecium roseum, Sphaerotheca fuliginea, Cladosporium cucumerinum, Corynespora cassiicola, Pseudoperonospora cubensis), are the research objects. Multispectral images of cucumber leaves in 14 visible-light channels, a near-infrared channel and a panchromatic channel were captured using a narrow-band multispectral imaging system under a standard observation and illumination environment, and 210 multispectral data samples, comprising the 16-band spectral reflectance of the different cucumber diseases, were obtained. The 210 samples were classified by distance, correlation and BP neural network classifiers to investigate effective combinations of classification methods for making a diagnosis. The results show that the combination of the distance and BP neural network classifiers performs better than either method alone, so that the advantage of each method is fully used. The workflow for recognizing horticultural plant diseases using combined classification methods is also presented.


    Institute of Scientific and Technical Information of China (English)


    Word Sense Disambiguation (WSD) is the task of deciding the sense of an ambiguous word in a particular context. Most current studies on WSD use only a handful of ambiguous words as test samples, which limits their practical applicability. In this paper, we perform a WSD study based on a large-scale real-world corpus, using two unsupervised learning algorithms: a ±n-improved Bayesian model and a Dependency Grammar (DG)-improved Bayesian model. The ±n-improved classifiers reduce the context window around ambiguous words with a close-distance feature extraction method and decrease the interference of useless features, thus clearly improving the accuracy, which reaches 83.18% (in an open test). The DG-improved classifier can more effectively overcome the noise effect present in the naive Bayesian classifier. Experimental results show that this approach performs well on Chinese WSD, and the open test achieved an accuracy of 86.27%.
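
    The ±n close-distance idea can be illustrated with a plain naive-Bayes sense scorer over a symmetric context window. This is a hedged sketch with invented toy data (the two "bank" senses, the corpus, and the function names are illustrative), not the authors' improved model:

```python
import math
from collections import Counter

def train_wsd(labeled, n=2):
    """labeled: list of (sense, tokens, target_index). Counts features only
    in a +/-n window around the ambiguous word (the close-distance idea)."""
    sense_counts = Counter()
    feat_counts = {}
    for sense, toks, i in labeled:
        sense_counts[sense] += 1
        window = toks[max(0, i - n):i] + toks[i + 1:i + 1 + n]
        feat_counts.setdefault(sense, Counter()).update(window)
    return sense_counts, feat_counts

def classify_wsd(toks, i, sense_counts, feat_counts, n=2, alpha=1.0):
    """Naive-Bayes sense decision with Laplace smoothing over the window."""
    window = toks[max(0, i - n):i] + toks[i + 1:i + 1 + n]
    total = sum(sense_counts.values())
    vocab = {w for c in feat_counts.values() for w in c}
    best, best_lp = None, -math.inf
    for sense, sc in sense_counts.items():
        fc = feat_counts[sense]
        denom = sum(fc.values()) + alpha * len(vocab)
        lp = math.log(sc / total) + sum(
            math.log((fc[w] + alpha) / denom) for w in window)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

labeled = [
    ("river", ["the", "muddy", "bank", "of", "stream"], 2),
    ("money", ["deposit", "cash", "bank", "account", "today"], 2),
]
sc, fc = train_wsd(labeled)
print(classify_wsd(["walk", "muddy", "bank", "of", "water"], 2, sc, fc))  # "river"
```

    Shrinking n trades recall of context features for less interference from distant, useless ones, which is the effect the record credits for the accuracy gain.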

  6. Iris Recognition Based on LBP and Combined LVQ Classifier

    CERN Document Server

    Shams, M Y; Nomir, O; El-Awady, R M; 10.5121/ijcsit.2011.3506


    Iris recognition is considered one of the best biometric methods for human identification and verification, because of the iris's unique features, which differ from one person to another, and its importance in the security field. This paper proposes an algorithm for iris recognition and classification using a system based on Local Binary Patterns and histogram properties as statistical approaches for feature extraction, and a Combined Learning Vector Quantization Classifier as a neural network approach for classification, in order to build a hybrid model that depends on both feature types. The localization and segmentation techniques use both Canny edge detection and the Hough Circular Transform to isolate the iris from the whole eye image and to detect noise. The feature vectors resulting from LBP are applied to a Combined LVQ classifier with different classes to determine the minimum acceptable performance, and the result is based on majority voting among several LVQ classifiers. Different iris da...

  7. A Film Classifier Based on Low-level Visual Features

    Directory of Open Access Journals (Sweden)

    Hui-Yu Huang


    Full Text Available We propose an approach to classifying films into genres using low-level features and visual features. Our current domain of study uses movie previews. A movie preview often emphasizes the theme of a film and hence provides suitable information for the classification process. In our approach, we categorize films into three broad categories: action, drama, and thriller. Four computable video features (average shot length, color variance, motion content and lighting key) and visual features (show and fast-moving effects) are combined in our approach to provide the information needed to determine the movie category. The experimental results show that visual features are useful cues for film classification. Our approach can also be extended to other potential applications, including the browsing and retrieval of videos on the internet, video-on-demand, and video libraries.

  8. Optimal threshold estimation for binary classifiers using game theory. (United States)

    Sanchez, Ignacio Enrique


    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. However, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs of correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space, and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and it reveals that the empirical rule for threshold estimation, "specificity equals sensitivity", maximizes robustness against uncertainties in the abundance of positives in nature and in classification costs.
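
    The proposed operating point, where the ROC curve crosses the descending diagonal, is the threshold at which sensitivity equals specificity. A minimal sketch of locating such a threshold from scores and labels (toy data, not from the paper; higher score is assumed to mean "more positive"):

```python
def minimax_threshold(scores, labels):
    """Pick the score cutoff where sensitivity (TPR) is closest to
    specificity (TNR = 1 - FPR), i.e. where the empirical ROC curve meets
    the descending diagonal. Positives carry label 1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_gap = None, float("inf")
    for t in sorted(set(scores)):
        tpr = sum(s >= t for s in pos) / len(pos)  # sensitivity
        tnr = sum(s < t for s in neg) / len(neg)   # specificity
        if abs(tpr - tnr) < best_gap:
            best_gap, best_t = abs(tpr - tnr), t
    return best_t

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
print(minimax_threshold(scores, labels))  # 0.6 (TPR = TNR = 0.75 here)
```

    At the returned cutoff the accuracy equals 1-FPR regardless of how the class balance shifts, which is the minimax robustness property the record describes.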

  9. An n-ary λ-averaging based similarity classifier

    Directory of Open Access Journals (Sweden)

    Kurama Onesfole


    Full Text Available We introduce a new n-ary λ similarity classifier that is based on a new n-ary λ-averaging operator in the aggregation of similarities. This work is a natural extension of earlier research on similarity based classification in which aggregation is commonly performed by using the OWA-operator. So far λ-averaging has been used only in binary aggregation. Here the λ-averaging operator is extended to the n-ary aggregation case by using t-norms and t-conorms. We examine four different n-ary norms and test the new similarity classifier with five medical data sets. The new method seems to perform well when compared with the similarity classifier.
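
    The aggregation step can be sketched as follows, using the product t-norm and the probabilistic-sum t-conorm as one concrete choice (the paper examines four norms); the prototypes, data, and `similarity` measure are invented for illustration:

```python
from functools import reduce

def t_norm_prod(xs):
    """n-ary product t-norm."""
    return reduce(lambda a, b: a * b, xs, 1.0)

def t_conorm_prob(xs):
    """n-ary probabilistic-sum t-conorm: 1 - prod(1 - x)."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), xs, 1.0)

def lam_average(xs, lam=0.5):
    """n-ary lambda-average: lam * T(xs) + (1 - lam) * S(xs)."""
    return lam * t_norm_prod(xs) + (1.0 - lam) * t_conorm_prob(xs)

def similarity(x, proto, p=1.0):
    """Feature-wise similarities in [0, 1] for vectors scaled to [0, 1]."""
    return [1.0 - abs(a - b) ** p for a, b in zip(x, proto)]

def classify(x, prototypes, lam=0.5):
    """Assign x to the class whose prototype gives the highest aggregated
    similarity."""
    return max(prototypes,
               key=lambda c: lam_average(similarity(x, prototypes[c]), lam))

protos = {"healthy": [0.2, 0.3], "sick": [0.8, 0.9]}
print(classify([0.25, 0.35], protos))  # "healthy"
```

    With lam = 1 the aggregation collapses to the pure t-norm and with lam = 0 to the pure t-conorm; intermediate lam values interpolate between the two, which is the averaging the classifier tunes.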

  10. A History of Classified Activities at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.


    The facilities that became Oak Ridge National Laboratory (ORNL) were created in 1943 during the United States' super-secret World War II project to construct an atomic bomb (the Manhattan Project). During World War II and for several years thereafter, essentially all ORNL activities were classified. Now, in 2000, essentially all ORNL activities are unclassified. The major purpose of this report is to provide a brief history of ORNL's major classified activities from 1943 until the present (September 2000). This report is expected to be useful to the ORNL Classification Officer and to ORNL's Authorized Derivative Classifiers and Authorized Derivative Declassifiers in their classification review of ORNL documents, especially those documents that date from the 1940s and 1950s.

  11. A naive Bayesian classifier based routing protocol for VANETs (United States)

    Bao, Zhenshan; Zhou, Keqin; Zhang, Wenbo; Gong, Xiaolei


    Geographic routing protocols are one of the hottest research areas in VANETs (Vehicular Ad-hoc Networks). However, few routing protocols take both transmission efficiency and usage ratio into account. As we have noticed, different messages in a VANET may require different qualities of service. We therefore propose a naive Bayesian classifier based routing protocol (Naive Bayesian Classifier-Greedy, NBC-Greedy), which classifies and transmits different messages according to their degree of urgency. As a result, we can balance transmission efficiency and usage ratio with this protocol. Based on Matlab simulations, we conclude that NBC-Greedy is more efficient and stable than LR-Greedy and GPSR.

  12. Automatically Classifying the Role of Citations in Biomedical Articles (United States)

    Agarwal, Shashank; Choubey, Lisha; Yu, Hong


    Citations are widely used in scientific literature. The traditional model of referencing considers all citations to be the same; semantically, however, citations play different roles. By studying the context in which citations appear, it is possible to determine the role that they play. Here, we report on the development of an eight-category classification scheme, annotation using that scheme, and the development and evaluation of supervised machine-learning classifiers using the annotated data. We annotated 1,710 sentences using the annotation schema, and our trained classifier obtained an average F1-score of 76.5%. The classifier is freely available as a Java API. (PMID: 21346931)

  13. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan


    We address the problem of modeling and classifying American Football offense teams' plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.


    Directory of Open Access Journals (Sweden)

    M. J. Baheti


    Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as part of preprocessing. Preprocessing comprises digitization, segmentation, normalization and thinning, under the assumption that the image has almost no noise. An affine invariant moments based model is then used for feature extraction, and finally Support Vector Machine (SVM) and Fuzzy classifiers are used for numeral classification. A comparison of the SVM and Fuzzy classifiers shows that the SVM produced better results than the Fuzzy classifier.

  15. Examining the significance of fingerprint-based classifiers

    Directory of Open Access Journals (Sweden)

    Collins Jack R


    Full Text Available Abstract Background Experimental examinations of biofluids to measure concentrations of proteins or their fragments or metabolites are being explored as a means of early disease detection, distinguishing diseases with similar symptoms, and drug treatment efficacy. Many studies have produced classifiers with a high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to examine datasets designed to contain no information with classifiers used in many published studies. Results The classification accuracy of two fingerprint-based classifiers, a decision tree (DT algorithm and a medoid classification algorithm (MCA, are examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or less, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. For even the largest dataset (300 Cases and 300 Controls the MCA procedure finds several unique classifiers that have an average sensitivity and specificity above 88% using only six or seven features. Conclusion While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible and can produce good results from datasets that are specifically constructed to contain no
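
    The paper's central point is easy to reproduce in miniature: on purely random data, searching many single-feature threshold rules already yields high *training* accuracy. The sketch below uses invented dataset sizes in the spirit of the paper's smaller sets.

```python
import random

# Demonstration that flexible classifiers find "signal" in pure noise: with
# 300 random features and only 60 samples, the best single-feature threshold
# rule ("decision stump") scores far above chance on the training data.

random.seed(7)
n_cases, n_controls, n_features = 30, 30, 300
data = [[random.random() for _ in range(n_features)]
        for _ in range(n_cases + n_controls)]
labels = [1] * n_cases + [0] * n_controls

def best_stump_accuracy(data, labels, f):
    """Best training accuracy of a one-feature threshold rule on feature f."""
    best = 0.0
    for t in (row[f] for row in data):
        pred = [1 if row[f] >= t else 0 for row in data]
        acc = sum(p == y for p, y in zip(pred, labels)) / len(labels)
        best = max(best, acc, 1.0 - acc)   # a flipped rule is also a classifier
    return best

best_acc = max(best_stump_accuracy(data, labels, f) for f in range(n_features))
print(best_acc)  # well above 0.5, despite the data containing no information
```

    Exactly as the abstract argues, the high score reflects the flexibility of the search over many features and thresholds, not any underlying biology, and it would not survive evaluation on independent data.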

  16. Silicon nanowire arrays as learning chemical vapour classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Niskanen, A O; Colli, A; White, R; Li, H W; Spigone, E; Kivioja, J M, E-mail: [Nokia Research Center, Broers Building, 21 JJ Thomson Avenue, Cambridge CB3 0FA (United Kingdom)


    Nanowire field-effect transistors are a promising class of devices for various sensing applications. Apart from detecting individual chemical or biological analytes, it is especially interesting to use multiple selective sensors to look at their collective response in order to perform classification into predetermined categories. We show that non-functionalised silicon nanowire arrays can be used to robustly classify different chemical vapours using simple statistical machine learning methods. We were able to distinguish between acetone, ethanol and water with 100% accuracy while methanol, ethanol and 2-propanol were classified with 96% accuracy in ambient conditions.

  17. Text Classification: Classifying Plain Source Files with Neural Network

    Directory of Open Access Journals (Sweden)

    Jaromir Veber


    Full Text Available Automated text file categorization has an important place in computer engineering, particularly in the process called data management automation. A lot has been written about text classification, and the methods allowing classification of these files are well known. Unfortunately, most studies are theoretical, and for practical implementation more research is needed. I decided to contribute with research focused on creating a classifier for different kinds of programs (source files, scripts, etc.). This paper describes a practical implementation of a classifier for text files based on file content.

  18. Classifying depth of anesthesia using EEG features, a comparison. (United States)

    Esmaeili, Vahid; Shamsollahi, Mohammad Bagher; Arefian, Noor Mohammad; Assareh, Amin


    Various EEG features have been used in depth of anesthesia (DOA) studies. The objective of this study was to find the best features, or combinations of them, that can discriminate between different anesthesia states. Conducting a clinical study on 22 patients, we could define 4 distinct anesthetic states: awake, moderate, general anesthesia, and isoelectric. We examined features that have been used in earlier studies using a single-channel EEG signal processing method. The maximum accuracy (99.02%) was achieved using approximate entropy as the feature. Some other features could discriminate a particular state of anesthesia well. We could completely classify the patterns by means of 3 features and a Bayesian classifier.
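
    Approximate entropy, the best-scoring feature above, quantifies the irregularity of a signal. A minimal stdlib-only sketch follows; the parameter choices (m = 2, r = 0.2 × SD) are common defaults, not necessarily the study's exact settings.

```python
import math
import random

# Approximate entropy (ApEn): low for regular, predictable signals and high
# for irregular ones. Parameters m=2 and r=0.2*SD follow common practice.

def approximate_entropy(x, m=2, r=None):
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    def phi(m):
        templates = [x[i:i + m] for i in range(n - m + 1)]
        logs = []
        for a in templates:
            # fraction of templates within Chebyshev distance r of template a
            c = sum(max(abs(u - v) for u, v in zip(a, b)) <= r for b in templates)
            logs.append(math.log(c / len(templates)))
        return sum(logs) / len(logs)
    return phi(m) - phi(m + 1)

random.seed(0)
regular = [math.sin(0.5 * i) for i in range(200)]      # predictable signal
noisy = [random.uniform(-1, 1) for _ in range(200)]    # irregular signal
print(approximate_entropy(regular) < approximate_entropy(noisy))  # True
```

    An anesthetized, burst-suppressed EEG is more regular than an awake one, which is why a regularity statistic like ApEn separates the states so well.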

  19. Online classifier adaptation for cost-sensitive learning


    Zhang, Junlin; Garcia, Jose


    In this paper, we propose the problem of online cost-sensitive classifier adaptation and the first algorithm to solve it. We assume we have a base classifier for a cost-sensitive classification problem, but it is trained with respect to a cost setting different from the desired one. Moreover, we also have some training data samples streaming to the algorithm one by one. The problem is to adapt the given base classifier to the desired cost setting using the streaming training samples online. ...

  20. Learning Continuous Time Bayesian Network Classifiers Using MapReduce

    Directory of Open Access Journals (Sweden)

    Simone Villa


    Full Text Available Parameter and structural learning of continuous time Bayesian network classifiers are challenging tasks when dealing with big data. This paper describes an efficient, scalable parallel algorithm for parameter and structural learning in the case of complete data, using the MapReduce framework. Two popular instances of classifiers are analyzed, namely the continuous time naive Bayes and the continuous time tree-augmented naive Bayes. Details of the proposed algorithm are presented using Hadoop, an open-source implementation of a distributed file system and the MapReduce framework for distributed data processing. Performance evaluation of the designed algorithm shows robust parallel scaling.
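
    At its heart, the distributed parameter-learning step amounts to counting sufficient statistics over data shards and summing the partial counts. A toy in-process version of the map and reduce phases follows; the record layout is invented for illustration, not taken from the paper.

```python
from collections import Counter
from functools import reduce

# Toy MapReduce for sufficient statistics: each mapper counts
# (class, variable, state) occurrences in its shard; the reducer sums them.

def map_phase(shard):
    """Emit one count per (class, variable, state) seen in the shard."""
    out = Counter()
    for cls, assignments in shard:
        for var, state in assignments.items():
            out[(cls, var, state)] += 1
    return out

def reduce_phase(partials):
    """Sum the per-shard counters into global sufficient statistics."""
    return reduce(lambda a, b: a + b, partials, Counter())

shards = [
    [("spam", {"X": 0}), ("ham", {"X": 1})],
    [("spam", {"X": 0}), ("spam", {"X": 1})],
]
stats = reduce_phase(map_phase(s) for s in shards)
print(stats[("spam", "X", 0)])  # 2
```

    Because counting is associative and commutative, the same reducer works regardless of how the data are sharded, which is what makes the Hadoop implementation scale.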

  1. Face recognition using composite classifier with 2DPCA (United States)

    Li, Jia; Yan, Ding


    In conventional face recognition, most researchers have focused on enhancing precision when the input data already belongs to the database, while paying less attention to confirming whether the input data belongs to the database at all. This paper proposes an approach to face recognition using two-dimensional principal component analysis (2DPCA). It designs a novel composite classifier founded on statistical techniques. Moreover, it exploits the advantages of SVM and logistic regression in classification, thereby improving accuracy considerably. To test the performance of the composite classifier, experiments were conducted on the ORL and FERET databases, and the results are presented and evaluated.

  2. Classifiers in Japanese-to-English Machine Translation

    CERN Document Server

    Bond, F; Ikehara, S; Bond, Francis; Ogura, Kentaro; Ikehara, Satoru


    This paper proposes an analysis of classifiers into four major types: UNIT, METRIC, GROUP and SPECIES, based on properties of both Japanese and English. The analysis makes possible a uniform and straightforward treatment of noun phrases headed by classifiers in Japanese-to-English machine translation, and has been implemented in the MT system ALT-J/E. Although the analysis is based on the characteristics of, and differences between, Japanese and English, it is shown to be also applicable to the unrelated language Thai.

  3. An ensemble self-training protein interaction article classifier. (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard


    Protein-protein interaction (PPI) is essential to understanding the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to automatically classify articles as PPI-related or not. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled ones. A biological-background-based feature weighting strategy is extended using category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from the unlabeled data to further improve performance. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently.

  4. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M. J.; Duin, Robert P. W.


    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model...
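
    The combining rule described above reduces to a simple sketch: classify each instance with any supervised model, then label the bag by thresholding the fraction of its instances assigned to the concept class. The instance posteriors and threshold below are illustrative values, not the paper's estimates.

```python
# Toy version of the MIL bag-level decision rule: a bag is positive when
# enough of its instances look like the concept. The posteriors and the
# fraction threshold are invented for illustration.

def classify_bag(instance_posteriors, frac_threshold=0.3):
    """Label a bag from its instances' concept-class posteriors."""
    concept = [p > 0.5 for p in instance_posteriors]   # per-instance decision
    fraction = sum(concept) / len(concept)             # fraction in the concept
    return 1 if fraction >= frac_threshold else 0

positive_bag = [0.9, 0.2, 0.8, 0.1, 0.7]   # 3/5 instances look like the concept
negative_bag = [0.2, 0.1, 0.3, 0.4]        # 0/4 do
print(classify_bag(positive_bag), classify_bag(negative_bag))  # 1 0
```

    The paper's contribution is deriving the optimal value of this fraction threshold from the mixture model, rather than fixing it by hand as done here.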

  5. Weighted Hybrid Decision Tree Model for Random Forest Classifier (United States)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.


    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random Forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper relates to attribute split measures and follows a two-step process. First, a theoretical study of the five selected split measures is done and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by empirical analysis, in which a random forest is generated using each of the five selected split measures, chosen one at a time (i.e., random forest using information gain, random forest using gain ratio, etc.). Next, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the Random Forest are generated using different split measures. The model is augmented by weighted voting based on the strength of each individual tree. The new approach shows a notable increase in the accuracy of random forest.
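
    Two of the split measures compared in studies like this one, entropy (the basis of information gain) and the Gini index, can be written in a few lines over class-count lists:

```python
import math

# Entropy and Gini index over a list of per-class counts, plus the
# size-weighted impurity of a candidate split (lower is a better split).

def entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * math.log2(c / total) for c in counts if c)

def gini(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def weighted_impurity(splits, measure):
    """Impurity of a candidate split, weighting each child by its size."""
    total = sum(sum(s) for s in splits)
    return sum(sum(s) / total * measure(s) for s in splits)

# A 50/50 node is maximally impure under both measures...
print(entropy([10, 10]), gini([10, 10]))               # 1.0 0.5
# ...and a split that separates the classes perfectly drives impurity to 0.
print(weighted_impurity([[10, 0], [0, 10]], entropy))  # 0.0
```

    The split chosen at a node is the one that most reduces weighted impurity, so trees built with different measures can disagree at nodes where the measures rank candidate splits differently, which is the diversity the hybrid model exploits.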

  6. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia


    Full Text Available The ever-increasing generation of data confronts us with the problem of handling massive amounts of information online. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during a single scan. In a data stream context, data arrive continuously at high speed; the algorithms developed for this context must therefore be efficient in memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma (DS-Gamma) classifier implements a sliding window approach to provide concept drift detection and a forgetting mechanism. To test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
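
    The sliding-window forgetting mechanism can be sketched in miniature with a nearest-class-mean learner standing in for the Gamma classifier (an illustrative substitution, not the paper's model): a fixed-size window bounds memory, and old samples fall out as the stream drifts.

```python
from collections import Counter, deque

# Sliding-window stream learner: memory is a deque with a fixed maxlen, so
# learning a new sample automatically forgets the oldest one. A 1-D
# nearest-class-mean rule stands in for the Gamma classifier here.

class SlidingWindowClassifier:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)  # old samples fall out automatically

    def learn(self, x, label):
        self.window.append((x, label))

    def predict(self, x):
        sums, counts = {}, Counter()
        for xi, label in self.window:
            sums[label] = sums.get(label, 0.0) + xi
            counts[label] += 1
        means = {lbl: sums[lbl] / counts[lbl] for lbl in counts}
        return min(means, key=lambda lbl: abs(x - means[lbl]))

clf = SlidingWindowClassifier(window_size=4)
for x in [1.0, 1.2, 0.9, 1.1]:       # initial concept: class "a" near 1
    clf.learn(x, "a")
for x in [5.0, 5.2, 4.9, 5.1]:       # drift: class "a" moves near 5
    clf.learn(x, "a")
for x in [1.0, 1.1]:                 # class "b" now occupies the old region
    clf.learn(x, "b")
print(clf.predict(1.05))  # "b": the window has forgotten that "a" was near 1
```

    The window length trades stability for responsiveness: a longer window resists noise, while a shorter one adapts faster to concept drift.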

  7. Subtractive fuzzy classifier based driver distraction levels classification using EEG. (United States)

    Wali, Mousa Kadhim; Murugappan, Murugappan; Ahmad, Badlishah


    [Purpose] In earlier studies of driver distraction, researchers classified distraction into two levels (not distracted, and distracted). This study classified four levels of distraction (neutral, low, medium, high). [Subjects and Methods] Fifty Asian subjects (n=50, 43 males, 7 females), age range 20-35 years, who were free from any disease, participated in this study. Wireless EEG signals were recorded by 14 electrodes during four types of distraction stimuli (Global Positioning System (GPS), music player, short message service (SMS), and mental tasks). We derived the amplitude spectrum of three frequency bands of the EEG: theta, alpha, and beta. Then, based on the fusion of discrete wavelet packet transforms and the fast Fourier transform, we extracted two features (power spectral density, spectral centroid frequency) for different wavelets (db4, db8, sym8, and coif5). Mean ± SD was calculated and analysis of variance (ANOVA) was performed. A fuzzy inference system classifier was applied to the different wavelets using the two extracted features. [Results] The results indicate that the two features of sym8 possess highly significant discrimination across the four levels of distraction, and the best average accuracy, 79.21%, was achieved by the subtractive fuzzy classifier using the power spectral density feature extracted with the sym8 wavelet. [Conclusion] These findings suggest that EEG signals can be used to monitor distraction level intensity in order to alert drivers to high levels of distraction.

  8. Gene-expression Classifier in Papillary Thyroid Carcinoma

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;


    BACKGROUND: No reliable biomarker for metastatic potential in the risk stratification of papillary thyroid carcinoma exists. We aimed to develop a gene-expression classifier for metastatic potential. MATERIALS AND METHODS: Genome-wide expression analyses were used. Development cohort: freshly...

  9. 18 CFR 367.18 - Criteria for classifying leases. (United States)


    ... classification of the lease under the criteria in paragraph (a) of this section had the changed terms been in... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... classifying leases. 367.18 Section 367.18 Conservation of Power and Water Resources FEDERAL ENERGY...

  10. Automatic Classification of Cetacean Vocalizations Using an Aural Classifier (United States)


    were inspired by research directed at discriminating the timbre of different musical instruments – a passive classification problem – which suggests... the method should be able to classify marine mammal vocalizations, since these calls possess many of the acoustic attributes of music.

  11. Building an automated SOAP classifier for emergency department reports. (United States)

    Mowery, Danielle; Wiebe, Janyce; Visweswaran, Shyam; Harkema, Henk; Chapman, Wendy W


    Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step to evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance, with F1 scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks.

  12. Using predictive distributions to estimate uncertainty in classifying landmine targets (United States)

    Close, Ryan; Watford, Ken; Glenn, Taylor; Gader, Paul; Wilson, Joseph


    Typical classification models used for detection of buried landmines estimate a singular discriminative output. This classification is based on a model or technique trained with a given set of training data available during system development. Regardless of how well the technique performs when classifying objects that are 'similar' to the training set, most models produce undesirable (and many times unpredictable) responses when presented with object classes different from the training data. This can cause mines or other explosive objects to be misclassified as clutter, or false alarms. Bayesian regression and classification models produce distributions as output, called the predictive distribution. This paper will discuss predictive distributions and their application to characterizing uncertainty in the classification decision, from the context of landmine detection. Specifically, experiments comparing the predictive variance produced by relevance vector machines and Gaussian processes will be described. We demonstrate that predictive variance can be used to determine the uncertainty of the model in classifying an object (i.e., the classifier will know when it's unable to reliably classify an object). The experimental results suggest that degenerate covariance models (such as the relevance vector machine) are not reliable in estimating the predictive variance. This necessitates the use of the Gaussian Process in creating the predictive distribution.
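
    The behavior the paper wants from a predictive distribution, variance that grows away from the training data, can be illustrated with Bayesian linear regression in a small basis (a deliberately simplified stand-in for the Gaussian process used in the paper; all numbers below are invented).

```python
# Bayesian linear regression with basis phi(x) = (1, x): the predictive
# variance 1/beta + phi^T A^{-1} phi grows away from the training inputs,
# so the model "knows" when it is extrapolating. alpha/beta are assumed
# prior-precision and noise-precision values for illustration.

def posterior_precision_inverse(xs, alpha=1.0, beta=25.0):
    """Inverse of the 2x2 posterior precision A = alpha*I + beta*Phi^T Phi."""
    a00 = alpha + beta * len(xs)
    a01 = beta * sum(xs)
    a11 = alpha + beta * sum(x * x for x in xs)
    det = a00 * a11 - a01 * a01
    return ((a11 / det, -a01 / det), (-a01 / det, a00 / det))

def predictive_variance(x, inv, beta=25.0):
    phi = (1.0, x)
    quad = sum(phi[i] * inv[i][j] * phi[j] for i in range(2) for j in range(2))
    return 1.0 / beta + quad   # observation noise + parameter uncertainty

xs = [0.0, 0.5, 1.0, 1.5, 2.0]   # training inputs clustered in [0, 2]
inv = posterior_precision_inverse(xs)
# Variance is small inside the training range, large far outside it:
print(predictive_variance(1.0, inv) < predictive_variance(8.0, inv))  # True
```

    A degenerate covariance model, the failure mode the paper reports for the relevance vector machine, is precisely one where this variance fails to grow for inputs unlike the training data.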

  13. A Multiple Classifier Fusion Algorithm Using Weighted Decision Templates

    Directory of Open Access Journals (Sweden)

    Aizhong Mi


    Full Text Available Fusing classifiers' decisions can improve the performance of a pattern recognition system, and many application areas have adopted multiple classifier fusion methods to increase classification accuracy in the recognition process. To fully account for classifier performance differences and the training sample information, a multiple classifier fusion algorithm using weighted decision templates is proposed in this paper. The algorithm uses a statistical vector to measure each classifier's performance and applies a weighted transform to each classifier according to the reliability of its output. To make a decision, the information in the training samples around an input sample is used by the k-nearest-neighbor rule if the algorithm evaluates the sample as being highly likely to be misclassified. An experimental comparison was performed on 15 data sets from the KDD'99, UCI, and ELENA databases. The experimental results indicate that the algorithm achieves better classification performance. The algorithm was then applied to cataract grading in the cataract ultrasonic phacoemulsification operation. The application results indicate that the proposed algorithm is effective and can meet the practical requirements of the operation.
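
    The decision-template idea underlying the algorithm can be shown in a few lines: each class's template is the mean "decision profile" (the stacked soft outputs of all base classifiers) over its training samples, and a new sample goes to the class with the nearest template. The weighting scheme that is the paper's contribution is omitted here for brevity; all numbers are invented.

```python
# Unweighted decision templates: mean decision profile per class, then
# nearest-template classification by squared Euclidean distance.

def make_templates(profiles, labels):
    """Mean decision profile per class; a profile stacks all classifiers' outputs."""
    sums, counts = {}, {}
    for profile, label in zip(profiles, labels):
        acc = sums.setdefault(label, [0.0] * len(profile))
        for i, v in enumerate(profile):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in sums[lbl]] for lbl in sums}

def classify(profile, templates):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda lbl: dist(profile, templates[lbl]))

# Each profile concatenates two classifiers' soft outputs for classes (A, B).
train = [([0.9, 0.1, 0.8, 0.2], "A"), ([0.8, 0.2, 0.9, 0.1], "A"),
         ([0.2, 0.8, 0.1, 0.9], "B"), ([0.1, 0.9, 0.2, 0.8], "B")]
templates = make_templates([p for p, _ in train], [l for _, l in train])
print(classify([0.7, 0.3, 0.6, 0.4], templates))  # A
```

    The paper's weighted variant scales each classifier's contribution to the profile by a reliability estimate before this template matching, and falls back to k-NN for samples near the decision boundary.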

  14. Recognition of Characters by Adaptive Combination of Classifiers

    Institute of Scientific and Technical Information of China (English)

    WANG Fei; LI Zai-ming


    In this paper, a visual feature space based on long horizontals, long verticals, and radicals is presented. An adaptive combination of classifiers, whose coefficients vary with the input pattern, is also proposed. Experiments show that the approach is promising for character recognition in video sequences.

  15. Classifying aquatic macrophytes as indicators of eutrophication in European lakes

    NARCIS (Netherlands)

    Penning, W.E.; Mjelde, M.; Dudley, B.; Hellsten, S.; Hanganu, J.; Kolada, A.; van den Berg, Marcel S.; Poikane, S.; Phillips, G.; Willby, N.; Ecke, F.


    Aquatic macrophytes are one of the biological quality elements in the Water Framework Directive (WFD) for which status assessments must be defined. We tested two methods to classify macrophyte species and their response to eutrophication pressure: one based on percentiles of occurrence along a phosp

  16. Discrimination-Aware Classifiers for Student Performance Prediction (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei


    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  17. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan


    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns, and the reduction in the rate of misclassified patterns was very significant. In particular, we achieved a 36% reduction of misclassifications on the training data and 57% on the test data.

  18. 18 CFR 3a.12 - Authority to classify official information. (United States)


    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Authority to classify official information. 3a.12 Section 3a.12 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Classification §...

  19. 18 CFR 3a.71 - Accountability for classified material. (United States)


    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Accountability for classified material. 3a.71 Section 3a.71 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Accountability for...

  20. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)


    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based methods of Coupé and Rousseau. These experiments show that the proposed method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of the method. It is also shown that, in this case, nonlocal fusion is unnecessary, so the multiatlas fusion can be done efficiently. Conclusions: The single atlas version has quality similar to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be run efficiently without a nonlocal strategy.

  1. Bayesian network classifiers for categorizing cortical GABAergic interneurons. (United States)

    Mihaljević, Bojan; Benavides-Piccione, Ruth; Bielza, Concha; DeFelipe, Javier; Larrañaga, Pedro


    An accepted classification of GABAergic interneurons of the cerebral cortex is a major goal in neuroscience. A recently proposed taxonomy based on patterns of axonal arborization promises to be a pragmatic method for achieving this goal. It involves characterizing interneurons according to five axonal arborization features, called F1-F5, and classifying them into a set of predefined types, most of which are established in the literature. Unfortunately, there is little consensus among expert neuroscientists regarding the morphological definitions of some of the proposed types. While supervised classifiers were able to categorize the interneurons in accordance with experts' assignments, their accuracy was limited because they were trained with disputed labels. Thus, here we automatically classify interneuron subsets with different label reliability thresholds (i.e., such that every cell's label is backed by at least a certain (threshold) number of experts). We quantify the cells with parameters of axonal and dendritic morphologies and, in order to predict the type, also with axonal features F1-F4 provided by the experts. Using Bayesian network classifiers, we accurately characterize and classify the interneurons and identify useful predictor variables. In particular, we discriminate among reliable examples of common basket, horse-tail, large basket, and Martinotti cells with up to 89.52% accuracy, and single out the number of branches at 180 μm from the soma, the convex hull 2D area, and the axonal features F1-F4 as especially useful predictors for distinguishing among these types. These results open up new possibilities for an objective and pragmatic classification of interneurons.

  2. Hair removal in adolescence

    Directory of Open Access Journals (Sweden)

    Sandra Pereira


    Full Text Available Introduction: Due to hormonal stimulation during puberty, changes occur in hair type and distribution. In both sexes, unwanted body and facial hair may have a negative psychological impact on the teenager. There are several available methods of hair removal, but choosing the most suitable one for each individual can raise doubts. Objective: To review the main methods of hair removal and clarify their indications, advantages and disadvantages. Development: There are several removal methods currently available. Shaving and depilation with chemical products are temporary methods that need frequent repetition, because the hair is removed only at the skin surface. Epilating methods, in which the entire hair is extracted, include: epilation with wax, thread, tweezers, epilating machines, laser, intense pulsed light, and electrolysis. Conclusions: The age at which hair removal is begun and the choice of method must be individualized and take into consideration the skin and hair type, location, dermatological and endocrine problems, removal frequency, cost and personal preferences.

  3. Removal of heavy metals using waste eggshell

    Institute of Scientific and Technical Information of China (English)


    The capacity of reused eggshell to remove toxic heavy metals was studied. As a pretreatment process for preparing the reused material from waste eggshell, calcination was performed in a furnace at 800℃ for 2 h after crushing the dried waste eggshell. Calcination behavior, qualitative and quantitative elemental information, mineral type and surface characteristics of the eggshell before and after calcination were examined by thermal gravimetric analysis (TGA), X-ray fluorescence (XRF), X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. After calcination, the major inorganic component was identified as Ca (lime, 99.63%), and K, P and Sr were identified as minor components. When the calcined eggshell was applied to the treatment of synthetic wastewater containing heavy metals, complete removal of Cd as well as above 99% removal of Cr was observed after 10 min. Although the natural eggshell had some capacity to remove Cd and Cr, complete removal was not accomplished even after 60 min due to a much slower removal rate. However, in contrast to Cd and Cr, more efficient removal of Pb was observed with the natural eggshell than with the calcined eggshell. In the treatment of real electroplating wastewater, the calcined eggshell showed a promising removal capacity for heavy metal ions as well as a good neutralization capacity in the treatment of strongly acidic wastewater.

  4. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports. (United States)

    López Pineda, Arturo; Ye, Ye; Visweswaran, Shyam; Cooper, Gregory F; Wagner, Michael M; Tsui, Fuchiang Rich


    Influenza is a yearly recurrent disease that has the potential to become a pandemic. An effective biosurveillance system is required for early detection of the disease. In our previous studies, we have shown that electronic Emergency Department (ED) free-text reports can be of value to improve influenza detection in real time. This paper studies seven machine learning (ML) classifiers for influenza detection, compares their diagnostic capabilities against an expert-built influenza Bayesian classifier, and evaluates different ways of handling missing clinical information from the free-text reports. We identified 31,268 ED reports from 4 hospitals between 2008 and 2011 to form two different datasets: training (468 cases, 29,004 controls) and test (176 cases, 1620 controls). We employed Topaz, a natural language processing (NLP) tool, to extract influenza-related findings and to encode them into one of three values: Acute, Non-acute, and Missing. Results show that all ML classifiers had areas under the ROC curve (AUC) ranging from 0.88 to 0.93 and performed significantly better than the expert-built Bayesian model. Marking missing clinical information with an explicit value of Missing (not missing at random) consistently improved performance for 3 (out of 4) ML classifiers compared with the configuration that did not assign a value of Missing (missing completely at random). The case/control ratios did not affect the classification performance given the large number of training cases. Our study demonstrates that ED reports, in conjunction with ML and NLP and careful handling of missing values, have great potential for the detection of infectious diseases.
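
The encoding scheme above, where Missing is kept as an explicit third value rather than imputed, can be illustrated with a tiny naive Bayes sketch. This is not the paper's actual classifiers or data: the findings, counts, and labels below are invented purely to show Missing being treated as an ordinary categorical value.

```python
from collections import defaultdict
import math

# Hypothetical toy reports: each is a dict of findings encoded as
# "Acute", "Non-acute", or "Missing" (labels: 1 = influenza, 0 = control).
train = [
    ({"cough": "Acute", "fever": "Acute"}, 1),
    ({"cough": "Acute", "fever": "Missing"}, 1),
    ({"cough": "Non-acute", "fever": "Non-acute"}, 0),
    ({"cough": "Missing", "fever": "Non-acute"}, 0),
]
findings = ["cough", "fever"]
values = ["Acute", "Non-acute", "Missing"]

# Count value occurrences per class with Laplace smoothing; "Missing"
# is counted like any other value instead of being imputed or dropped.
counts = defaultdict(lambda: 1)      # Laplace prior of 1
class_counts = defaultdict(int)
for feats, label in train:
    class_counts[label] += 1
    for f in findings:
        counts[(label, f, feats[f])] += 1

def log_posterior(feats, label):
    lp = math.log(class_counts[label] / len(train))
    for f in findings:
        total = sum(counts[(label, f, v)] for v in values)
        lp += math.log(counts[(label, f, feats[f])] / total)
    return lp

def classify(feats):
    return max(class_counts, key=lambda c: log_posterior(feats, c))

print(classify({"cough": "Acute", "fever": "Missing"}))   # Missing carries signal
```

Because Missing gets its own conditional probability per class, a report with an absent finding still contributes evidence, which is the intuition behind the "not missing at random" configuration in the abstract.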

  5. Particle adhesion and removal

    CERN Document Server

    Mittal, K L


    The book provides a comprehensive and easily accessible reference source covering all important aspects of particle adhesion and removal. The core objective is to cover both fundamental and applied aspects of particle adhesion and removal with emphasis on recent developments. Topics covered include: 1. Fundamentals of surface forces in particle adhesion and removal. 2. Mechanisms of particle adhesion and removal. 3. Experimental methods (e.g. AFM, SFA, SFM, IFM, etc.) to understand particle-particle and particle-substrate interactions. 4. Mechanics of adhesion of micro- and  n

  6. Region 9 Removal Sites (United States)

    U.S. Environmental Protection Agency — Point geospatial dataset representing locations of CERCLA (Superfund) Removal sites. CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act)...

  7. Will Dam Removal Increase Nitrogen Flux to Estuaries?

    Directory of Open Access Journals (Sweden)

    Arthur J. Gold


    Full Text Available To advance the science of dam removal, analyses of functions and benefits need to be linked to individual dam attributes and effects on downstream receiving waters. We examined 7550 dams in the New England (USA) region for possible tradeoffs associated with dam removal. Dam removal often generates improvements for safety or migratory fish passage but might increase nitrogen (N) flux and eutrophication in coastal watersheds. We estimated N loading and removal with algorithms using geospatial data on land use, stream flow and hydrography. We focused on dams with reservoirs that increase retention time at specific points of river reaches, creating localized hotspots of elevated N removal. Approximately 2200 dams with reservoirs had potential benefits for N removal based on N loading, retention time and depth. Across stream orders, safety concerns on these N removal dams ranged between 28% and 44%. First order streams constituted the majority of N removal dams (70%), but only 3% of those were classified as high value for fish passage. In cases where dam removal might eliminate N removal function from a particular reservoir, site-specific analyses are warranted to improve N delivery estimates and examine alternatives that retain the reservoir while enhancing fish passage and safety.

  8. Electronics and electronic systems

    CERN Document Server

    Olsen, George H


    Electronics and Electronic Systems explores the significant developments in the field of electronics and electronic devices. This book is organized into three parts encompassing 11 chapters that discuss the fundamental circuit theory and the principles of analog and digital electronics. This book deals first with the passive components of electronic systems, such as resistors, capacitors, and inductors. These topics are followed by a discussion on the analysis of electronic circuits, which involves three ways, namely, the actual circuit, graphical techniques, and rule of thumb. The remaining p

  9. Face Detection Using Adaboosted SVM-Based Component Classifier

    CERN Document Server

    Valiollahzadeh, Seyyed Majid; Nazari, Mohammad


    Recently, Adaboost has been widely used to improve the accuracy of any given learning algorithm. In this paper we focus on designing an algorithm that combines Adaboost with Support Vector Machines as weak component classifiers for the face detection task. To obtain a set of effective SVM weak-learner classifiers, the algorithm adaptively adjusts the kernel parameter of the SVM instead of using a fixed one. The proposed combination outperforms SVM in generalization on imbalanced classification problems. The proposed method is compared, in terms of classification accuracy, to other commonly used Adaboost weak learners, such as decision trees and neural networks, on the CMU+MIT face database. Results indicate that the performance of the proposed method is overall superior to previous Adaboost approaches.
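
The Adaboost machinery that this entry builds on can be sketched compactly. One substitution is flagged loudly: decision stumps stand in here for the paper's kernel-tuned SVM weak learners, and the toy 1-D data are invented, so this shows only the boosting weight-update loop, not the proposed face detector.

```python
import math

# Toy separable 1-D dataset, labels in {-1, +1} (invented for illustration).
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, 1, -1, -1, -1]

def stump(threshold, sign):
    """Weak learner: predict `sign` above the threshold, -sign at or below."""
    return lambda x: sign if x > threshold else -sign

def best_stump(weights):
    """Pick the stump minimising weighted error (stand-in for the paper's
    per-round kernel-adjusted SVM weak learner)."""
    candidates = [stump(t, s) for t in X for s in (-1, 1)]
    def werr(h):
        return sum(w for w, xi, yi in zip(weights, X, y) if h(xi) != yi)
    return min(candidates, key=werr)

def adaboost(rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        h = best_stump(w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # weak learner's vote weight
        ensemble.append((alpha, h))
        # Re-weight: misclassified points gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

model = adaboost()
print([predict(model, xi) for xi in X])
```

The paper's contribution sits inside `best_stump`: instead of enumerating stumps, it would retrain an SVM with an adaptively chosen kernel parameter on the current weight distribution each round.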

  10. Feasibility study for banking loan using association rule mining classifier

    Directory of Open Access Journals (Sweden)

    Agus Sasmito Aribowo


    Full Text Available The problem of bad loans in a cooperative (koperasi) can be reduced if the cooperative can detect whether a member will repay the loan or default. The method used in this study to identify characteristic patterns of prospective borrowers is called the Association Rule Mining Classifier. The credit patterns of members are converted into knowledge and used to classify other borrowers. The classification process separates borrowers into two groups: a good-credit group and a bad-credit group. The research used prototyping to implement the design as an application with a programming language and development tool. The association rule mining process uses the Weighted Itemset Tidset (WIT-tree) method. The results show that the method can predict the credit of prospective customers. The training data set comprised 120 customers whose credit history was already known. The test data comprised 61 customers who applied for credit. The results concluded that 42 customers will pay off their loans and 19 will default.
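
The general idea of an association-rule classifier can be sketched without the WIT-tree machinery: mine itemset → class rules that pass support and confidence thresholds, then classify a new applicant by the highest-confidence matching rule. The member attributes and records below are invented, and the enumeration is a deliberate simplification of the paper's weighted WIT-tree mining.

```python
from itertools import combinations

# Hypothetical member records: a set of attribute items plus a class label
# ("good" = repaid, "bad" = defaulted). Data invented for illustration.
records = [
    ({"salaried", "collateral"}, "good"),
    ({"salaried", "no_collateral"}, "good"),
    ({"self_employed", "collateral"}, "good"),
    ({"self_employed", "no_collateral"}, "bad"),
    ({"unemployed", "no_collateral"}, "bad"),
]

def mine_rules(min_support=0.2, min_confidence=0.6):
    """Enumerate itemset -> class rules meeting support/confidence thresholds."""
    items = set().union(*(feats for feats, _ in records))
    rules = []
    for size in (1, 2):
        for itemset in combinations(sorted(items), size):
            itemset = set(itemset)
            covered = [lab for feats, lab in records if itemset <= feats]
            support = len(covered) / len(records)
            if support < min_support:
                continue
            for label in ("good", "bad"):
                conf = covered.count(label) / len(covered)
                if conf >= min_confidence:
                    rules.append((itemset, label, conf))
    return rules

def classify(feats, rules):
    """Apply the highest-confidence rule whose itemset the applicant matches."""
    matching = [(conf, label) for itemset, label, conf in rules if itemset <= feats]
    return max(matching)[1] if matching else "good"   # default class if no rule fires

rules = mine_rules()
print(classify({"unemployed", "no_collateral"}, rules))
```

The WIT-tree method of the paper serves the same end, but mines weighted itemsets efficiently instead of enumerating all candidate itemsets as done here.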

  11. Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition

    Energy Technology Data Exchange (ETDEWEB)

    Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail:; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)


    This paper proposes a method for cardiac arrhythmia recognition using a nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of the P-wave, QRS-complexes, and T-wave. The iterated function system (IFS) uses nonlinear interpolation in the map and uses similarity maps to construct various data sequences, including the fractal patterns of supraventricular ectopic beats, bundle branch ectopic beats, and ventricular ectopic beats. Grey relational analysis (GRA) is proposed to recognize normal heartbeats and cardiac arrhythmias. The nonlinear interpolation terms produce family functions with fractal dimension (FD), the so-called nonlinear interpolation function (NIF), and make the fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid method demonstrates greater efficiency and higher accuracy in recognizing ECG signals.

  12. Using Syntactic-Based Kernels for Classifying Temporal Relations

    Institute of Scientific and Technical Information of China (English)

    Seyed Abolghasem Mirroshandel; Gholamreza Ghassem-Sani; Mahdy Khayyamian


    Temporal relation classification is one of the demanding contemporary tasks of natural language processing. It can be used in various applications such as question answering, summarization, and language-specific information retrieval. In this paper, we propose an improved algorithm for classifying temporal relations, between events or between events and times, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method aims at exploiting useful automatically generated syntactic features to improve the accuracy of classification. Accordingly, a number of novel kernel functions are introduced and evaluated. Our evaluations clearly demonstrate that adding syntactic features results in a considerable improvement over the state-of-the-art method of classifying temporal relations.

  13. Efficient iris recognition via ICA feature and SVM classifier

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Xu Luping


    To improve the flexibility and reliability of an iris recognition algorithm while maintaining the recognition success rate, an iris recognition approach combining SVM with an ICA feature extraction model is presented. SVM is a classifier that has demonstrated high generalization capability in object recognition problems, and ICA is a feature extraction technique that can be considered a generalization of principal component analysis. In this paper, ICA is used to generate a set of subsequences of feature vectors for iris feature extraction. Each subsequence is then classified using support vector machine sequence kernels. Experiments on the CASIA iris database indicate that the combination of SVM and ICA can improve the flexibility and reliability of iris recognition while maintaining the recognition success rate.


    Directory of Open Access Journals (Sweden)

    B.N. Prathibha


    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the improvement in survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported by Computer Aided Diagnostic (CAD) systems help radiologists take reliable decisions. The proposed CAD system extracts wavelet features and spectral features for better classification of mammograms. A Support Vector Machines classifier is used to analyze 206 mammogram images from the MIAS database according to the severity of abnormality, i.e., benign and malignant. The proposed system gives 93.14% accuracy for discrimination between normal and malignant samples, 87.25% accuracy for normal and benign samples, and 89.22% accuracy for benign and malignant samples. The study reveals that features extracted in a hybrid transform domain with an SVM classifier prove to be a promising tool for the analysis of mammograms.

  15. Scoring and Classifying Examinees Using Measurement Decision Theory

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner


    Full Text Available This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the classification accuracy of tests scored using decision theory; (2) the effectiveness of different sequential testing procedures; and (3) the number of items needed to make a classification. A large percentage of examinees can be classified accurately with very few items using decision theory. A Java Applet for self-instruction and software for generating, calibrating and scoring MDT data are provided.
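
The MDT framework described above is compact enough to sketch directly: given calibrated probabilities of a correct response per item and mastery state, classification is a Bayes-rule posterior, and sequential testing stops as soon as one state's posterior passes a threshold. The two states, item probabilities, and threshold below are invented for illustration.

```python
# Hypothetical calibration: P(correct | state) for each of four items,
# for two mastery states. Values invented for illustration.
p_correct = {
    "master":     [0.9, 0.8, 0.85, 0.9],
    "non_master": [0.4, 0.3, 0.35, 0.2],
}
prior = {"master": 0.5, "non_master": 0.5}

def posterior(responses):
    """Posterior over mastery states given a 0/1 response pattern (Bayes rule)."""
    scores = {}
    for state, ps in p_correct.items():
        like = prior[state]
        for r, p in zip(responses, ps):
            like *= p if r == 1 else (1 - p)
        scores[state] = like
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

def classify(responses, threshold=0.95):
    """Sequential MDT: stop as soon as one state's posterior passes threshold.
    Returns the chosen state and the number of items used."""
    for n in range(1, len(responses) + 1):
        post = posterior(responses[:n])
        best = max(post, key=post.get)
        if post[best] >= threshold:
            return best, n
    return best, len(responses)

print(classify([1, 1, 1, 0]))
```

With two clean incorrect responses, for instance, `classify([0, 0])` stops after the second item, which is the sense in which "very few items" can suffice.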

  16. The fuzzy gene filter: A classifier performance assessment

    CERN Document Server

    Perez, Meir


    The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed to rank genes in order of differential expression, based on expression data generated in a microarray experiment. This paper examines the effectiveness of the FGF for feature selection using various classification architectures. The FGF is compared to three of the most common gene ranking algorithms: t-test, Wilcoxon test and ROC curve analysis. Four classification schemes are used to compare the performance of the FGF vis-a-vis the standard approaches: K Nearest Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC) and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross Validation scheme is used to identify the optimal number of top-ranking genes, as well as the optimal classifier parameters. Two microarray data sets are used for the comparison: a prostate cancer data set and a lymphoma data set.

  17. Security Enrichment in Intrusion Detection System Using Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Uma R. Salunkhe


    Full Text Available In the era of the Internet, with an increasing number of end users, a large number of attack categories are introduced daily. Hence, effective detection of various attacks with the help of Intrusion Detection Systems is an emerging research trend. Existing studies show the effectiveness of machine learning approaches for Intrusion Detection Systems. In this work, we aim to enhance the detection rate of an Intrusion Detection System by using machine learning techniques. We propose a novel classifier-ensemble-based IDS constructed using a hybrid approach that combines a data-level and a feature-level approach. Classifier ensembles combine the opinions of different experts and improve the intrusion detection rate. Experimental results show the improved detection rates of our system compared to the reference technique.
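
The combination step of a classifier ensemble can be sketched with a majority vote. The abstract does not specify its base learners, so the three rule-based detectors and the connection features below are invented stand-ins; a real ensemble would train diverse learners on resampled data and feature subsets, as the hybrid data-level/feature-level approach implies.

```python
from collections import Counter

# Three hypothetical base detectors, each mapping a connection-feature dict
# to "attack" or "normal". The thresholds and features are invented.
def rate_detector(conn):
    return "attack" if conn["req_per_sec"] > 100 else "normal"

def payload_detector(conn):
    return "attack" if conn["payload_entropy"] > 7.0 else "normal"

def port_detector(conn):
    return "attack" if conn["dst_port"] not in (80, 443, 22) else "normal"

DETECTORS = [rate_detector, payload_detector, port_detector]

def ensemble_vote(conn):
    """Majority vote over the base detectors' opinions."""
    votes = Counter(d(conn) for d in DETECTORS)
    return votes.most_common(1)[0][0]

print(ensemble_vote({"req_per_sec": 250, "payload_entropy": 7.5, "dst_port": 80}))
```

A flood with high-entropy payloads is flagged even though its destination port looks benign: two detectors outvote the third, which is the "opinions of different experts" idea in miniature.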

  18. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang


    Based on an optimized forecast method of unascertained classification, an unascertained measurement classifying model (UMC) to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the degree of complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the spatial superposition layers of the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification grade of each sample awaiting forecast was determined from the UM distance between the synthesis index of the sample and the index of every classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, the seven samples awaiting forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  19. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface (United States)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo


    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase BCI performance in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. An ensemble-based system can theoretically achieve better classification results than its individual members, depending on the algorithm used to generate the individual classifiers and the procedure for combining their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indexes do not give enough information to establish which configuration is better.
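
The bagging scheme discussed above can be sketched on a toy problem. The 1-D feature and labels below are invented; note that one-dimensional LDA with equal priors and a shared variance reduces to a threshold at the midpoint of the class means, which keeps the example self-contained while preserving the bootstrap-and-vote structure.

```python
import random

random.seed(0)

# Toy 1-D feature (e.g., one EEG-derived feature); labels 0/1. Invented data.
X = [0.2, 0.5, 0.9, 1.1, 2.0, 2.3, 2.9, 3.1]
y = [0, 0, 0, 0, 1, 1, 1, 1]

def train_lda_1d(xs, ys):
    """1-D LDA (equal priors, shared variance) = midpoint-of-means threshold."""
    m0 = sum(x for x, l in zip(xs, ys) if l == 0) / ys.count(0)
    m1 = sum(x for x, l in zip(xs, ys) if l == 1) / ys.count(1)
    thr = (m0 + m1) / 2
    return lambda x: 1 if x > thr else 0

def bagged_ensemble(n_models=3):
    """Bagging: fit each LDA on a bootstrap resample of the training set."""
    models = []
    while len(models) < n_models:
        idx = [random.randrange(len(X)) for _ in X]
        ys = [y[i] for i in idx]
        if len(set(ys)) < 2:          # redraw if a class is missing
            continue
        xs = [X[i] for i in idx]
        models.append(train_lda_1d(xs, ys))
    return models

def predict(models, x):
    """Combine member outputs by majority vote."""
    votes = sum(m(x) for m in models)
    return 1 if votes > len(models) / 2 else 0

models = bagged_ensemble()
print([predict(models, x) for x in [0.3, 2.8]])
```

Boosting differs only in the generation step (reweighting hard examples instead of uniform resampling) and in using weighted rather than plain votes.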

  20. Logarithmic Spiral-based Construction of RBF Classifiers

    Directory of Open Access Journals (Sweden)

    Mohamed Wajih Guerfala


    Full Text Available The clustering process is defined as grouping similar objects together into homogeneous groups or clusters. Objects that belong to one cluster should be very similar to each other, but objects in different clusters will be dissimilar. Clustering aims to simplify the representation of the initial data. Automatic classification covers all methods allowing the automatic construction of such groups. This paper describes the design of radial basis function (RBF) neural classifiers using a new algorithm for characterizing the hidden layer structure. This algorithm, called k-means with Mahalanobis distance, groups the training data class by class in order to calculate the optimal number of clusters of the hidden layer, using two validity indexes. To initialize the clusters of the k-means algorithm, the method of the logarithmic spiral golden angle has been used. Two real data sets (Iris and Wine) are considered to evaluate the efficiency of the proposed approach, and the obtained results are compared with basic literature classifiers.
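
The class-by-class clustering idea can be sketched with a minimal RBF classifier. Several simplifications are flagged up front: plain Euclidean k-means with a first-points initialization stands in for the paper's Mahalanobis-distance variant with logarithmic-spiral initialization, the cluster count is fixed rather than chosen by validity indexes, and the 2-D data are invented.

```python
import math

# Toy 2-D data, two classes (invented for illustration).
data = {
    0: [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.1, 0.9)],
    1: [(3.0, 3.0), (3.2, 2.9), (4.0, 4.1), (4.1, 4.0)],
}

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, k, iters=10):
    """Plain Euclidean k-means (stand-in for the Mahalanobis variant)."""
    centers = points[:k]                 # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[j].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

# One Gaussian RBF unit per cluster, run class by class as in the paper.
units = [(c, label) for label, pts in data.items() for c in kmeans(pts, 2)]

def classify(x, width=1.0):
    """Class of the most strongly activated RBF unit (one-hot output weights)."""
    act = lambda c: math.exp(-dist2(x, c) / (2 * width ** 2))
    return max(units, key=lambda u: act(u[0]))[1]

print(classify((0.5, 0.5)), classify((3.5, 3.5)))
```

A full RBF network would additionally train linear output weights over the unit activations; labelling each unit with its class is the simplest one-hot special case.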

  1. Dendritic spine detection using curvilinear structure detector and LDA classifier. (United States)

    Zhang, Yong; Zhou, Xiaobo; Witt, Rochelle M; Sabatini, Bernardo L; Adjeroh, Donald; Wong, Stephen T C


    Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying biochemical pathways by examining the morphological and statistical changes of dendritic spines at the intracellular level. In this paper, a novel approach is presented for automated detection of dendritic spines in neuron images. The dendritic spines are recognized as small objects of variable shape, attached to or detached from multiple dendritic backbones, in the 2D projection of the image stack along the optical direction. We extend the curvilinear structure detector to extract the boundaries as well as the centerlines of the dendritic backbones and spines. We further build a classifier using Linear Discriminant Analysis (LDA) to classify the attached spines into valid and invalid types to improve the accuracy of the spine detection. We evaluate the proposed approach by comparing with manual results in terms of backbone length, spine number, spine length, and spine density.

  2. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... to a state-of-the-art method for cartilage segmentation using one stage nearest neighbour classifier. Our method achieved better results than the state-of-the-art method for tibial as well as femoral cartilage segmentation. The next main contribution of the thesis deals with learning features autonomously...... image, respectively and this system is referred as triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for cascaded classifier...

  3. Neural Networks Classifier for Data Selection in Statistical Machine Translation


    Peris, Álvaro; Chinea-Rios, Mara; Casacuberta, Francisco


    We address the data selection problem in statistical machine translation (SMT) as a classification task. The new data selection method is based on a neural network classifier. We describe the new method and present empirical results showing that our data selection method provides better translation quality than a state-of-the-art method (i.e., cross-entropy selection). Moreover, the reported empirical results are coherent across different language pairs.

  4. Mathematical Modeling and Analysis of Classified Marketing of Agricultural Products

    Institute of Scientific and Technical Information of China (English)

    Fengying; WANG


    Classified marketing of agricultural products was analyzed using the logistic regression model. This method can take full advantage of the information in an agricultural product database to find the factors influencing how well agricultural products sell, and to make a quantitative analysis accordingly. Using this model, it is also possible to predict sales of agricultural products and to provide a reference for mapping out individualized sales strategies for popularizing agricultural products.

  5. Manually Classified Errors in Czech-Slovak Translation


    Galuščáková, Petra; Bojar, Ondřej


    Outputs of five Czech-Slovak machine translation systems (Česílko, Česílko 2, Google Translate and Moses with different settings) for the first 50 sentences of the WMT 2010 test set. The translations were manually processed and the errors were marked and classified according to the scheme of Vilar et al. (David Vilar, Jia Xu, Luis Fernando D'Haro, Hermann Ney: Error Analysis of Statistical Machine Translation Output, Proceedings of LREC-2006, 2006).

  6. Review of the US Department of Energy Classified Visits Program

    Energy Technology Data Exchange (ETDEWEB)

    Martin, S W; Killinger, M H; Segura, M A


    This review examines the US Department of Energy (DOE) Classified Visits Program, which is administered by the Office of Safeguards and Security. The overall purpose of this analysis is to (1) ensure that DOE policy and implementing procedures are appropriate to maintain US national security intentions; (2) evaluate the effectiveness of the process used across the DOE complex; and (3) recommend changes which will enhance the overall efficiency of the process while maintaining the program's integrity.

  7. Dynamical Logic Driven by Classified Inferences Including Abduction (United States)

    Sawa, Koji; Gunji, Yukio-Pegio


    We propose a dynamical model of formal logic which realizes a representation of the logical inferences deduction and induction. In addition, it also represents abduction, which is classified by Peirce as the third inference following deduction and induction. The three types of inference are represented as transformations of a directed graph. The state of a relation between objects of the model fluctuates between the collective and the distinctive. In addition, the location of the relation in the sequence of relations influences its state.

  8. Classifying Floating Potential Measurement Unit Data Products as Science Data (United States)

    Coffey, Victoria; Minow, Joseph


    We are Co-Investigators for the Floating Potential Measurement Unit (FPMU) on the International Space Station (ISS) and members of the FPMU operations and data analysis team. We are providing this memo for the purpose of classifying raw and processed FPMU data products and ancillary data as NASA science data with unrestricted, public availability in order to best support science uses of the data.

  9. Face Recognition Combining Eigen Features with a Parzen Classifier

    Institute of Scientific and Technical Information of China (English)

    SUN Xin; LIU Bing; LIU Ben-yong


    A face recognition scheme is proposed, wherein a face image is preprocessed by pixel averaging and energy normalizing to reduce data dimension and the effect of brightness variation, followed by the Fourier transform to estimate the spectrum of the preprocessed image. Principal component analysis is conducted on the spectra of a face image to obtain eigen features. Combining eigen features with a Parzen classifier, experiments are conducted on the ORL face database.
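
The Parzen classifier itself is easy to sketch: estimate a kernel density per class from training samples and assign a query to the class with the highest density. For compactness the example below uses an invented 1-D feature (think of a single projection of the eigen-feature vector) and a Gaussian window; it is not the paper's pipeline.

```python
import math

# Hypothetical 1-D eigen-feature samples per identity (invented data).
train = {"alice": [1.0, 1.2, 0.9], "bob": [3.0, 3.3, 2.8]}

def parzen_density(x, samples, h=0.5):
    """Parzen-window density estimate with a Gaussian kernel of width h."""
    k = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    return sum(k((x - s) / h) for s in samples) / (len(samples) * h)

def classify(x):
    """Assign to the class with the highest estimated density (equal priors)."""
    return max(train, key=lambda c: parzen_density(x, train[c]))

print(classify(1.1), classify(3.1))
```

With unequal class priors one would maximise prior times density instead; the window width h trades off smoothness against fidelity to the training samples.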

  10. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong


    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  11. Comparison of artificial intelligence classifiers for SIP attack data (United States)

    Safarik, Jakub; Slachta, Jiri


    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server. This server classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier which lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.

  12. Multimodal biometric fusion using multiple-input correlation filter classifiers (United States)

    Hennings, Pablo; Savvides, Marios; Vijaya Kumar, B. V. K.


    In this work we apply a computationally efficient, closed form design of a jointly optimized filter bank of correlation filter classifiers for biometric verification with the use of multiple biometrics from individuals. Advanced correlation filters have been used successfully for biometric classification, and have shown robustness in verifying faces, palmprints and fingerprints. In this study we address the issues of performing robust biometric verification when multiple biometrics from the same person are available at the moment of authentication; we implement biometric fusion by using a filter bank of correlation filter classifiers which are jointly optimized with each biometric, instead of designing separate independent correlation filter classifiers for each biometric and then fuse the resulting match scores. We present results using fingerprint and palmprint images from a data set of 40 people, showing a considerable advantage in verification performance producing a large margin of separation between the impostor and authentic match scores. The method proposed in this paper is a robust and secure method for authenticating an individual.

  13. GS-TEC: the Gaia Spectrophotometry Transient Events Classifier

    CERN Document Server

    Blagorodnova, Nadejda; Wyrzykowski, Łukasz; Irwin, Mike; Walton, Nicholas A


    We present an algorithm for classifying the nearby transient objects detected by the Gaia satellite. The algorithm will use the low-resolution spectra from the blue and red spectro-photometers on board the satellite. Taking a Bayesian approach, we model the spectra using the newly constructed reference spectral library and literature-driven priors. We find that for magnitudes brighter than 19 in Gaia $G$ magnitude, around 75% of the transients will be robustly classified. The efficiency of the algorithm for SNe type I is higher than 80% for magnitudes $G \leq 18$, dropping to approximately 60% at magnitude $G = 19$. For SNe type II, the efficiency varies from 75 to 60% for $G \leq 18$, falling to 50% at $G = 19$. The purity of our classifier is around 95% for SNe type I at all magnitudes. For SNe type II it is over 90% for objects with $G \leq 19$. GS-TEC also estimates the redshifts with errors of $\sigma_z \le 0.01$ and epochs with uncertainties $\sigma_t \simeq 13$ and 32 days for SNe type I and SNe type II re...

  14. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang


    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute-dependence spectrum, it cannot identify changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalizations so as to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB) classifiers, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) show that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
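
    The final step described above reduces, for a single test instance, to a uniform average of the two models' posterior estimates. A minimal sketch with invented posterior values (not from the paper):

```python
import numpy as np

# Posterior estimates from two hypothetical classifiers for one test instance
# (class order: [c0, c1, c2]); values are illustrative only.
p_kdb = np.array([0.60, 0.30, 0.10])        # global KDB estimate
p_local_kdb = np.array([0.20, 0.70, 0.10])  # local KDB estimate

p_avg = (p_kdb + p_local_kdb) / 2.0         # uniform model averaging
prediction = int(np.argmax(p_avg))

print(p_avg, prediction)                    # averaged posterior favours class 1
```

    Averaging two estimators with complementary variance behaviour, as the abstract notes, is the classic motivation for this kind of ensemble.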

  15. Self-organizing map classifier for stressed speech recognition (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav


    This paper presents a method for detecting speech under stress using Self-Organizing Maps. Many people exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire departments operate in environments with an increased number of stressful situations, and personnel in action are directed by a control center. Control commands should therefore be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, and consequently stress affects speech; a system for recognizing stress in speech is therefore needed by the security forces. One possible classifier, popular for its flexibility, is the self-organizing map, a type of artificial neural network. Flexibility here means that the classifier is independent of the character of the input data, a feature well suited to speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosodic features were selected as input data because of their sensitivity to emotional changes. The parameters were computed from speech recordings divided into two classes, namely stressed-state recordings and normal-state recordings. The contribution of the experiment is a method using a SOM classifier for stressed-speech detection. Results showed the advantage of this method, namely its flexibility with respect to the input data.
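
    A self-organizing map of the kind used above can be implemented in a few lines. The sketch below trains a tiny 1-D SOM in plain numpy on two invented 2-D clusters standing in for stressed and neutral feature vectors (the real inputs would be MFCC, LPC, and prosodic features); map size, rates, and data are arbitrary choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 2-D feature vectors standing in for stressed / neutral speech frames.
stressed = rng.normal([2.0, 2.0], 0.3, size=(50, 2))
neutral = rng.normal([-2.0, -2.0], 0.3, size=(50, 2))
data = np.vstack([stressed, neutral])

units = rng.normal(size=(4, 2))  # a tiny 1-D SOM with 4 units

def best_matching_unit(weights, x):
    # Index of the unit closest to sample x.
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)  # decaying learning rate
    for x in rng.permutation(data):
        bmu = best_matching_unit(units, x)
        for j in range(len(units)):
            h = np.exp(-abs(j - bmu))  # neighbourhood kernel on the 1-D grid
            units[j] += lr * h * (x - units[j])

# After training, the two speech states activate different map units.
print(best_matching_unit(units, np.array([2.0, 2.0])),
      best_matching_unit(units, np.array([-2.0, -2.0])))
```

    Classification then amounts to labelling each map unit by the class of the training frames that activate it.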

  16. Binary Classifier Calibration Using an Ensemble of Linear Trend Estimation (United States)

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.


    Learning accurate probabilistic models from data is crucial in many practical tasks in data mining. In this paper we present a new non-parametric calibration method called ensemble of linear trend estimation (ELiTE). ELiTE utilizes the recently proposed ℓ1 trend filtering signal approximation method [22] to find the mapping from uncalibrated classification scores to the calibrated probability estimates. ELiTE is designed to address the key limitations of the histogram binning-based calibration methods, which are (1) the use of a piecewise constant form of the calibration mapping using bins, and (2) the assumption of independence of predicted probabilities for the instances that are located in different bins. The method post-processes the output of a binary classifier to obtain calibrated probabilities. Thus, it can be applied with many existing classification models. We demonstrate the performance of ELiTE on real datasets for commonly used binary classification models. Experimental results show that the method outperforms several common binary-classifier calibration methods. In particular, ELiTE commonly performs statistically significantly better than the other methods, and never worse. Moreover, it is able to improve the calibration power of classifiers, while retaining their discrimination power. The method is also computationally tractable for large scale datasets, as it is practically O(N log N) time, where N is the number of samples.
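
    ELiTE itself is not publicly packaged, but the histogram-binning baseline it is designed to improve on is easy to state: the calibrated probability of a score is the empirical positive rate of its bin. A self-contained numpy sketch with synthetic, deliberately miscalibrated scores (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic uncalibrated scores in [0, 1]; the true probability of a positive
# label is score**2, so the raw scores are miscalibrated.
scores = rng.uniform(size=5000)
labels = (rng.uniform(size=5000) < scores**2).astype(int)

# Histogram binning: calibrated probability = empirical positive rate per bin.
n_bins = 10
edges = np.linspace(0.0, 1.0, n_bins + 1)
bin_idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
bin_rate = np.array([labels[bin_idx == b].mean() for b in range(n_bins)])

def calibrate(s):
    b = np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)
    return bin_rate[b]

# A raw score of 0.5 calibrates toward the true rate in its bin (about 0.30),
# illustrating limitation (1): the mapping is piecewise constant over bins.
print(round(float(calibrate(np.array([0.5]))[0]), 2))
```

    ELiTE replaces this piecewise-constant map with a smooth piecewise-linear fit via ℓ1 trend filtering, removing the hard bin boundaries.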

  17. Analysis of classifiers performance for classification of potential microcalcification (United States)

    M. N., Arun K.; Sheshadri, H. S.


    Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis, and mammography is a screening tool used for early detection. About 10-30% of cases are missed during routine checks, as it is difficult for radiologists to make an accurate analysis of the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer: it has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms, and histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny, vary in size, shape, and distribution, and may be closely connected to surrounding tissues. Traditional classifiers face a major challenge in classifying individual potential MCs, because processing mammograms at the appropriate stage generates data sets with an unequal amount of information for the two classes (MC and not-MC). Most existing state-of-the-art classification approaches assume that the underlying training set is evenly distributed; they suffer from a severe bias when the training set is highly imbalanced. This paper addresses this issue by using classifiers which handle imbalanced data sets, and compares the performance of such classifiers in the classification of potential MCs.
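
    One standard way to make a classifier handle an imbalanced set such as MC vs. not-MC is to reweight classes inversely to their frequency, so each class contributes equally to the loss. A minimal sketch of the usual balanced-weight formula (counts invented, not from the paper):

```python
import numpy as np

# Hypothetical imbalanced label vector: 1 = microcalcification, 0 = background.
y = np.array([0] * 950 + [1] * 50)

# "Balanced" reweighting: weight_c = n_samples / (n_classes * n_c), so the
# rare MC class is upweighted in proportion to its scarcity.
classes, counts = np.unique(y, return_counts=True)
weights = len(y) / (len(classes) * counts)

print(dict(zip(classes.tolist(), weights.round(3).tolist())))
# {0: 0.526, 1: 10.0}
```

    These per-class weights are then multiplied into the training loss (most libraries expose this as a `class_weight`-style option).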


    Institute of Scientific and Technical Information of China (English)

    Wang Xianji; Ye Xueyi; Li Bin; Li Xin; Zhuang Zhenquan


    When using AdaBoost to select discriminant features from a feature space (e.g. Gabor feature space) for face recognition, a cascade structure is usually adopted to leverage the asymmetry in the distribution of positive and negative samples. Each node in the cascade structure is a classifier trained by AdaBoost with an asymmetric learning goal: a high recognition rate but only a moderately low false positive rate. One limitation of AdaBoost arises in the context of skewed example distributions and cascade classifiers: AdaBoost minimizes the classification error, which is not guaranteed to achieve the asymmetric node learning goal. In this paper, we propose to use asymmetric AdaBoost (AsymBoost) as a mechanism to address the asymmetric node learning goal. Moreover, feature selection and the forming of ensemble classifiers, which occur simultaneously in AsymBoost and AdaBoost, are decoupled. Fisher Linear Discriminant Analysis (FLDA) is applied to the selected features to learn a linear discriminant function that maximizes the separability of data among the different classes, which we expect to improve recognition performance. The proposed algorithm is demonstrated on face recognition using a Gabor-based representation on the FERET database. Experimental results show that the proposed algorithm yields better recognition performance than AdaBoost itself.

  19. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Directory of Open Access Journals (Sweden)

    Seung Yeon Shin

    In this paper, we present a novel cascaded classification framework for automatic detection of individual and clustered microcalcifications (μCs). Our framework comprises three classification stages: (i) a random forest (RF) classifier operating on simple features that capture the second-order local structure of individual μCs, by which non-μC pixels in the target mammogram are efficiently eliminated; (ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for the μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and (iii) a detector that detects clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs, and in terms of the free-response receiver operating characteristic (FROC) curve for detection of clustered μCs.
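
    The cascade idea above (a cheap high-recall stage that discards obvious negatives, followed by a costlier stage on the few survivors) can be sketched with simple thresholds standing in for the RF and DRBM stages. Everything below is synthetic and illustrative, not the paper's features or data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-pixel features: column "cheap" mimics a fast local-contrast
# score, "rich" a more expensive morphology score (names are invented).
n = 10000
is_mc = rng.uniform(size=n) < 0.01                  # rare positive pixels
cheap = np.where(is_mc, rng.normal(3, 1, n), rng.normal(0, 1, n))
rich = np.where(is_mc, rng.normal(3, 1, n), rng.normal(0, 1, n))

# Stage 1: eliminate obvious negatives with the cheap score (high recall).
survivors = cheap > 1.0
# Stage 2: run the costlier test only on the surviving candidates.
detected = survivors & (rich > 2.0)

# Each stage shrinks the candidate set, so the expensive classifier
# touches only a small fraction of the image.
print(int(survivors.sum()), int(detected.sum()))
```

    A clustering criterion over `detected` pixel coordinates would then play the role of stage (iii).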

  20. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)


    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
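
    The core CGS loop (train a Bayesian classifier on already-evaluated designs, then spend objective evaluations only on candidates the classifier deems promising) can be sketched with a Bernoulli naive Bayes surrogate over bit-string designs. The objective, sizes, and thresholds below are invented for illustration and are not the report's algorithm in detail:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bits = 12

def objective(design):
    # Stand-in for an expensive simulation: maximize the number of 1-bits.
    return int(design.sum())

# Seed data: random designs plus their objective values.
X = rng.integers(0, 2, size=(40, n_bits))
y = X.sum(axis=1)
good = y >= np.median(y)  # label the top half "promising"

# Bernoulli naive Bayes with Laplace smoothing: per-bit on-rates by class.
p1 = (X[good].sum(axis=0) + 1) / (good.sum() + 2)
p0 = (X[~good].sum(axis=0) + 1) / ((~good).sum() + 2)

def log_odds_promising(d):
    return float(np.sum(d * np.log(p1 / p0) + (1 - d) * np.log((1 - p1) / (1 - p0))))

# Generate many candidates, but evaluate only those the classifier favours.
candidates = rng.integers(0, 2, size=(200, n_bits))
scores = np.array([log_odds_promising(c) for c in candidates])
shortlist = candidates[np.argsort(scores)[-20:]]  # top 10% by posterior
best = max(shortlist, key=objective)
print(objective(best))
```

    Here 200 candidates cost only 20 objective evaluations; in the real algorithm the classifier is a Bayesian network and the loop repeats, retraining as evaluations accumulate.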

  1. Image Classifying Registration for Gaussian & Bayesian Techniques: A Review

    Directory of Open Access Journals (Sweden)

    Rahul Godghate,


    A Bayesian technique for image classifying registration performs image registration and pixel classification simultaneously. Medical image registration is critical for the fusion of complementary information about patient anatomy and physiology, for the longitudinal study of a human organ over time and the monitoring of disease development or treatment effect, for the statistical analysis of population variation in comparison to a so-called digital atlas, for image-guided therapy, etc. The Bayesian classifying-registration technique is well suited to image pairs that contain two classes of pixels with different inter-image intensity relationships. We show through different experiments that the model can be applied in many different ways: for instance, if the class map is known, it can be used for template-based segmentation, and if the full model is used, it can be applied to lesion detection by image comparison. Experiments conducted on both real and simulated data show that, in the presence of an extra class, the classifying registration improves both the registration and the detection, especially when the deformations are small. The proposed model is defined using only two classes, but it is straightforward to extend it to an arbitrary number of classes.

  2. Classifying Human Body Acceleration Patterns Using a Hierarchical Temporal Memory (United States)

    Sassi, Federico; Ascari, Luca; Cagnoni, Stefano

    This paper introduces a novel approach to the detection of human body movements during daily life. With the sole use of one wearable wireless triaxial accelerometer attached to the chest, this approach aims at classifying raw acceleration data robustly, to detect many common human behaviors without requiring any specific a-priori knowledge about the movements. The proposed approach consists of feeding sensory data into a specifically trained Hierarchical Temporal Memory (HTM) to extract invariant spatial-temporal patterns that characterize different body movements. The HTM output is then classified using a Support Vector Machine (SVM) into different categories. The performance of this new HTM+SVM combination is compared with a single SVM on real-world data corresponding to movements like "standing", "walking", "jumping" and "falling", acquired from a group of different people. Experimental results show that the HTM+SVM approach can detect behaviors with very high accuracy and is more robust, with respect to noise, than a classifier based solely on SVMs.

  3. Deep learning for electronic cleansing in dual-energy CT colonography (United States)

    Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki


    The purpose of this study was to develop a novel deep-learning-based electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC). In this method, an ensemble of deep convolutional neural networks (DCNNs) is used to classify each voxel of DE-CTC image volumes into one of five multi-material (MUMA) classes: luminal air, soft tissue, tagged fecal material, a partial-volume boundary between air and tagging, or a partial-volume boundary between soft tissue and tagging. Each DCNN acts as a voxel classifier. At each voxel, a region-of-interest (ROI) centered at the voxel is extracted. After mapping the pixels of the ROI to the input layer of a DCNN, a series of convolutional and max-pooling layers is used to extract features with increasing levels of abstraction. The output layer produces the probabilities with which the input voxel belongs to each of the five MUMA classes. To develop an ensemble of DCNNs, we trained multiple DCNNs on multi-spectral image volumes derived from the DE-CTC images, including material decomposition images and virtual monochromatic images. The outputs of these DCNNs were then combined by means of a meta-classifier for precise classification of the voxels. Finally, the electronically cleansed CTC images were generated by removing regions that were classified as other than soft tissue, followed by colon surface reconstruction. Preliminary results based on 184,320 images sampled from 30 clinical CTC cases showed a higher accuracy in labeling these classes than that of our previous machine-learning methods, indicating that deep-learning-based multi-spectral EC can accurately remove residual fecal materials from CTC images without generating major EC artifacts.

  4. Removal of broken hardware. (United States)

    Hak, David J; McElvany, Matthew


    Despite advances in metallurgy, fatigue failure of hardware is common when a fracture fails to heal. Revision procedures can be difficult, usually requiring removal of intact or broken hardware. Several different methods may need to be attempted to successfully remove intact or broken hardware. Broken intramedullary nail cross-locking screws may be advanced out by impacting with a Steinmann pin. Broken open-section (Küntscher type) intramedullary nails may be removed using a hook. Closed-section cannulated intramedullary nails require additional techniques, such as the use of guidewires or commercially available extraction tools. Removal of broken solid nails requires use of a commercial ratchet grip extractor or a bone window to directly impact the broken segment. Screw extractors, trephines, and extraction bolts are useful for removing stripped or broken screws. Cold-welded screws and plates can complicate removal of locked implants and require the use of carbide drills or high-speed metal cutting tools. Hardware removal can be a time-consuming process, and no single technique is uniformly successful.

  5. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers. (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M


    Many sources of fluctuation contribute to the fMRI signal, and this makes it difficult to identify the effects that are truly related to the underlying neuronal activity. Independent component analysis (ICA), one of the most widely used techniques for the exploratory analysis of fMRI data, has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour-intensive and requires expertise; hence, a fully automatic noise-detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component, FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
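
    The final cleanup step (regressing the noise components out of the data) is ordinary least squares. A self-contained numpy sketch with one synthetic "signal" component and two "noise" components; the component labelling that FIX automates is simply assumed given here:

```python
import numpy as np

rng = np.random.default_rng(4)
n_t, n_vox = 120, 500

# Hypothetical component time series: one "signal", two labelled "noise".
signal = np.sin(np.linspace(0, 8 * np.pi, n_t))[:, None]
noise = rng.normal(size=(n_t, 2))

# Voxel data = mixtures of all components plus measurement noise.
data = (signal @ rng.normal(size=(1, n_vox))
        + noise @ rng.normal(size=(2, n_vox))
        + 0.1 * rng.normal(size=(n_t, n_vox)))

# "Regress out" the noise components: subtract their least-squares fit.
beta, *_ = np.linalg.lstsq(noise, data, rcond=None)
cleaned = data - noise @ beta

def mean_abs_corr(x):
    # Mean absolute correlation of each voxel with the signal time course.
    xc = (x - x.mean(0)) / x.std(0)
    sc = (signal - signal.mean()) / signal.std()
    return np.abs((xc * sc).mean(0)).mean()

# Removing the noise variance raises correlation with the true signal.
print(mean_abs_corr(data) < mean_abs_corr(cleaned))  # True
```

    In FIX's "soft" variant only the unique variance of the noise regressors is removed, to avoid deleting signal shared with them; the hard subtraction above is the simplest form.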

  6. Laser removal of tattoos. (United States)

    Kuperman-Beade, M; Levine, V J; Ashinoff, R


    Tattoos are placed for different reasons. A technique for tattoo removal that selectively removes each tattoo pigment, with minimal risk of scarring, is needed. Nonspecific methods have a high incidence of scarring and of textural and pigmentary alterations compared with the use of Q-switched lasers. With new advances in Q-switched laser technology, tattoo removal can be achieved with minimal risk of scarring and permanent pigmentary alteration. There are five types of tattoos: amateur, professional, cosmetic, medicinal, and traumatic. Amateur tattoos require fewer treatment sessions than professional multicolored tattoos. Other factors to consider when evaluating tattoos for removal are the location and age of the tattoo and the skin type of the patient. Treatment should begin by obtaining a pre-operative history. Since treatment with Q-switched lasers is painful, a local injection of lidocaine or a topical anaesthetic cream may be used prior to laser treatment. A topical broad-spectrum antibacterial ointment is applied immediately following the procedure. Three types of lasers are currently used for tattoo removal: the Q-switched ruby laser (694 nm), the Q-switched Nd:YAG laser (532 nm, 1064 nm), and the Q-switched alexandrite laser (755 nm). The Q-switched ruby and alexandrite lasers are useful for removing black, blue and green pigments. The Q-switched 532 nm Nd:YAG laser can be used to remove red pigments, and the 1064 nm Nd:YAG laser is used for removal of black and blue pigments. The most common adverse effects following laser tattoo treatment with the Q-switched ruby laser include textural change, scarring, and pigmentary alteration. Transient hypopigmentation and textural changes have been reported in up to 50% and 12%, respectively, of patients treated with the Q-switched alexandrite laser. Hyperpigmentation and textural changes are infrequent adverse effects of the Q-switched Nd:YAG laser, and the incidence of hypopigmentary changes is much lower than with the ruby laser

  7. Evaluation of toxicity and removal of color in textile effluent treated with electron beam; Avaliacao da toxicidade e remocao da cor de um efluente textil tratado com feixe de eletrons

    Energy Technology Data Exchange (ETDEWEB)

    Morais, Aline Viana de


    The textile industry is among the main industrial activities in Brazil, being relevant in the number of jobs, the quantity and diversity of products, and mainly the volume of water used in industrial processes and effluent generation. These effluents are complex mixtures characterized by the presence of dyes, surfactants, metal-sequestering agents, salts and other chemicals potentially toxic to aquatic biota. Considering the lack of adequate treatment for this waste, new technologies are essential, notably advanced oxidation processes such as ionizing radiation from an electron beam. This study comprises the preparation of a standard textile effluent in a chemical laboratory and its treatment by an electron beam from an electron accelerator, in order to reduce the toxicity and the intense color resulting from the C.I. Blue 222 dye. The treatment reduced the toxicity to exposed organisms, with 34.55% efficiency for the microcrustacean Daphnia similis and 47.83% for the rotifer Brachionus plicatilis at a dose of 2.5 kGy. The bacterium Vibrio fischeri showed better results after treatment with a dose of 5 kGy, with 57.29% efficiency. Color reduction was greater than 90% at a dose of 2.5 kGy. This experiment also included preliminary tests on the sensitivity of the D. similis and V. fischeri organisms to some of the products used in bleaching and dyeing, and two water-reuse simulations in new textile processing after treating the effluent with the electron beam. (author)

  8. Whole toxicity removal for industrial and domestic effluents treated with electron beam radiation, evaluated with Vibrio fischeri, Daphnia similis and Poecilia reticulata; Reducao da toxicidade aguda de efluentes industriais e domesticos tratados por irradiacao com feixe de eletrons, avaliada com as especies Vibrio fischeri, Daphnia similis and Poecilia reticulata

    Energy Technology Data Exchange (ETDEWEB)

    Borrely, Sueli Ivone


    Several studies have been performed at IPEN in order to apply ionizing radiation to the treatment of real, complex effluents from different sources. This work shows the results of such an application to influents and effluents from the Suzano Wastewater Treatment Plant (Suzano WTP), Sao Paulo, operated by SABESP. The purpose of the work was to evaluate the radiation technology from an ecotoxicological standpoint. The evaluation was carried out on a toxicity basis and included three sampling sites: the complex industrial effluents; domestic sewage mixed with the industrial discharge (GM); and the final secondary effluent. The test organisms for toxicity evaluation were the marine bacterium Vibrio fischeri, the microcrustacean Daphnia similis and the guppy Poecilia reticulata; the fish tests were applied only to the secondary final effluents. The results demonstrate the original acute toxicity levels as well as the efficiency of the electron beam in reducing them. An important acute toxicity removal was achieved: from 75% up to 95% with 50 kGy (UNA), 20 kGy (GM) and 5.0 kGy for the final effluent. The toxicity removal follows from the radiation-induced decomposition of several organic solvents, and the acute toxicity reduction was about 95%. When toxicity was evaluated with fish, the radiation efficiency reached 40% to 60%. Hypothesis tests showed a statistically significant removal under the studied conditions. No residual hydrogen peroxide was found after 5.0 kGy was applied to the final effluent. (author)

  9. Understanding and classifying metabolite space and metabolite-likeness.

    Directory of Open Access Journals (Sweden)

    Julio E Peironcely

    While the entirety of 'chemical space' is huge (and assumed to contain between 10^63 and 10^200 'small molecules'), distinct subsets of this space can nonetheless be defined according to certain structural parameters. An example of such a subspace is the chemical space spanned by endogenous metabolites, defined as 'naturally occurring' products of an organism's metabolism. In order to understand this part of chemical space in more detail, we analyzed the chemical space populated by human metabolites in two ways. Firstly, in order to understand metabolite space better, we performed Principal Component Analysis (PCA), hierarchical clustering and scaffold analysis of metabolites and non-metabolites, in order to determine which chemical features are characteristic of each class of compounds. Here we found that heteroatom (both oxygen and nitrogen) content, as well as the presence of particular ring systems, was able to distinguish the two groups of compounds. Secondly, we established which molecular descriptors and classifiers are capable of distinguishing metabolites from non-metabolites, by assigning a 'metabolite-likeness' score. It was found that the combination of MDL Public Keys and Random Forest exhibited the best overall classification performance, with an AUC value of 99.13%, a specificity of 99.84% and a selectivity of 88.79%. This performance is slightly better than previous classifiers; and, interestingly, we found that drugs occupy two distinct areas of metabolite-likeness, the one being more 'synthetic' and the other more 'metabolite-like'. Also, on a truly prospective dataset of 457 compounds, 95.84% correct classification was achieved. Overall, we are confident that we have contributed to the tasks of classifying metabolites, as well as of understanding metabolite chemical space better. This knowledge can now be used in the development of new drugs that need to resemble metabolites, and in our work particularly for assessing the metabolite
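
    A metabolite-likeness classifier of the general shape described (binary structural fingerprints plus a Random Forest) can be sketched with scikit-learn. The fingerprints below are random stand-ins (computing real MDL Public Keys requires cheminformatics software), so only the workflow, not the reported accuracy, carries over:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic 166-bit "fingerprints": the positive class switches some bits
# on more often, mimicking class-dependent substructure frequencies.
n, n_bits = 600, 166
y = rng.integers(0, 2, size=n)                 # 1 = "metabolite-like"
p = np.where(y[:, None] == 1, 0.6, 0.4)        # class-dependent bit rates
X = (rng.uniform(size=(n, n_bits)) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The predicted probability of the positive class is the likeness score;
# AUC summarizes how well it separates the two classes.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(round(auc, 2))  # well above the 0.5 chance level
```

    Swapping in real fingerprints for real molecules is the only change needed to reproduce the paper's setup in outline.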

  10. Reactor for removing ammonia (United States)

    Luo, Weifang; Stewart, Kenneth D.


    Disclosed is a device for removing trace amounts of ammonia from a stream of gas, particularly hydrogen gas, prepared by a reformation apparatus. The apparatus is used to prevent PEM "poisoning" in a fuel cell receiving the incoming hydrogen stream.

  11. Nephrectomy (Kidney Removal) (United States)

    ... if your entire kidney needs to be removed. Robot-assisted laparoscopic surgery. In a variation of laparoscopic ... living kidney donor. Rochester, Minn.: Mayo Foundation for Medical Education and Research; 2014. AskMayoExpert. Partial nephrectomy. Rochester, ...

  12. Asbestos Removal Case History. (United States)

    Haney, Stanley J.


    The engineer for a California school district describes the asbestos removal from the ceilings of El Camino High School. Discusses forming a design team, use of consultants, specifications, relations with contractors, and staff notification. (MLF)

  13. Breast lump removal (United States)

    ... cannot feel it when examining you, a wire localization will be done before the surgery. A radiologist ... send the lump to a laboratory for more testing. Why the Procedure is Performed Surgery to remove ...

  14. Classifying Cubic Edge-Transitive Graphs of Order 8p

    Indian Academy of Sciences (India)

    Mehdi Alaeiyan; M K Hosseinipoor


    A simple undirected graph is said to be semisymmetric if it is regular and edge-transitive but not vertex-transitive. Let p be a prime. It was shown by Folkman (J. Combin. Theory 3 (1967) 215-232) that a regular edge-transitive graph of order 2p or 2p^2 is necessarily vertex-transitive. In this paper, an extension of his result in the case of cubic graphs is given. It is proved that every cubic edge-transitive graph of order 8p is symmetric, and then all such graphs are classified.

  15. Support vector machine classifiers for large data sets.

    Energy Technology Data Exchange (ETDEWEB)

    Gertz, E. M.; Griffin, J. D.


    This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior-point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques: one a parallel direct solver, the other a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.

  16. On the statistical assessment of classifiers using DNA microarray data

    Directory of Open Access Journals (Sweden)

    Carella M


    In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected at Casa Sollievo della Sofferenza Hospital, Foggia, Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to answer some questions which are relevant for the automatic diagnosis of cancer, such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology, and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. We estimate the generalization error, evaluated through the leave-K-out cross-validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing an error rate of e = 21% (p = 0.045). This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machine (SVM) classifiers can learn with only 15 training examples, with error rates of e = 19% (p = 0.035) and e = 18% (p = 0.037), respectively. Moreover, the error rate
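
    The permutation test used above to attach a p-value to an error rate can be sketched directly: hold the predictions fixed, shuffle the labels many times, and ask how often chance does as well. The 22/25 split mirrors the abstract, but the accuracy figure and prediction pattern below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Suppose a classifier scored 36/47 correct on 47 held-out specimens.
n_correct = 36
labels = np.array([0] * 22 + [1] * 25)  # 22 normal, 25 tumor
preds = np.array([0] * 22 + [1] * 25)   # hypothetical fixed predictions

# Null distribution: accuracy of the same predictions against shuffled labels.
null_scores = np.array([(preds == rng.permutation(labels)).sum()
                        for _ in range(2000)])

# One-sided p-value with the +1 correction for a finite permutation sample.
p_value = (np.sum(null_scores >= n_correct) + 1) / (2000 + 1)
print(p_value < 0.05)  # the observed accuracy is unlikely under chance
```

    A p-value this small says the classifier's accuracy is not explained by the class proportions alone, which is exactly the question the abstract raises for small microarray data sets.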

  17. A Fast Scalable Classifier Tightly Integrated with RDBMS

    Institute of Scientific and Technical Information of China (English)

    刘红岩; 陆宏钧; 陈剑


    In this paper, we report our success in building efficient scalable classifiers by exploring the capabilities of modern relational database management systems (RDBMS). In addition to high classification accuracy, the unique features of the approach include its high training speed, linear scalability, and simplicity in implementation. More importantly, the major computation required in the approach can be implemented using standard functions provided by the modern relational DBMS. Besides, with the effective rule pruning strategy, the algorithm proposed in this paper can produce a compact set of classification rules. The results of experiments conducted for performance evaluation and analysis are presented.


    Directory of Open Access Journals (Sweden)

    Rohilla Seema


    Full Text Available The biopharmaceutical classification system (BCS) is a scientific approach for classifying drug substances based on their dose/solubility ratio and intestinal permeability. The BCS has been developed to allow prediction of in vivo pharmacokinetic performance of drug products from measurements of permeability and solubility. Moreover, the drugs can be categorized into four classes of BCS on the basis of permeability and solubility, namely: high permeability-high solubility, high permeability-low solubility, low permeability-high solubility and low permeability-low solubility. The present review summarizes the principles, objectives, benefits, classification and applications of BCS.

  19. Colorfulness Enhancement Using Image Classifier Based on Chroma-histogram

    Institute of Scientific and Technical Information of China (English)

    Moon-cheol KIM; Kyoung-won LIM


    The paper proposes a colorfulness enhancement method for pictorial images using an image classifier based on a chroma histogram. This approach first estimates the strength of colorfulness of images and their types. With such determined information, the algorithm automatically adjusts image colorfulness for a better natural image look. With the help of an additional detection of skin colors and pixel-chroma-adaptive local processing, the algorithm produces a more natural image look. The algorithm performance was tested in an image quality judgment experiment with 20 persons. The experimental result indicates a better image preference.

  20. Decision Bayes Criteria for Optimal Classifier Based on Probabilistic Measures

    Institute of Scientific and Technical Information of China (English)

    Wissal Drira; Faouzi Ghorbel


    This paper addresses the high-dimension sample problem in discriminant analysis under nonparametric and supervised assumptions. Since there is a kind of equivalence between the probabilistic dependence measure and the Bayes classification error probability, we propose to use an iterative algorithm to optimize the dimension reduction for classification with a probabilistic approach to achieve the Bayes classifier. The probabilities of the different errors encountered along the different phases of the system are estimated by the kernel estimate, which is adjusted by means of the smoothing parameter. Experiment results suggest that the proposed approach performs well.
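    The kernel plug-in rule the abstract describes can be sketched in a few lines: estimate each class-conditional density with a Gaussian kernel, then pick the class that maximizes prior times density. This is a minimal one-dimensional illustration; the bandwidth and toy data are invented, not from the paper.

```python
import math

def gaussian_kde(samples, h):
    """Kernel density estimate with a Gaussian kernel and bandwidth h."""
    norm = len(samples) * h * math.sqrt(2 * math.pi)
    return lambda x: sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm

def bayes_classify(x, class_samples, h=0.5):
    """Plug-in Bayes rule: choose the class maximizing prior * estimated density."""
    total = sum(len(s) for s in class_samples.values())
    return max(class_samples,
               key=lambda c: (len(class_samples[c]) / total) * gaussian_kde(class_samples[c], h)(x))

data = {"A": [0.0, 0.2, -0.1, 0.1], "B": [3.0, 3.2, 2.9, 3.1]}
print(bayes_classify(0.05, data))  # → A
```

    In the paper's setting the same rule is applied after a learned dimension reduction; here the feature is already one-dimensional, and the bandwidth h plays the role of the smoothing parameter the abstract mentions.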

  1. Advanced Coating Removal Techniques (United States)

    Seibert, Jon


    An important step in the repair and protection against corrosion damage is the safe removal of the oxidation and protective coatings without further damaging the integrity of the substrate. Two such methods that are proving to be safe and effective in this task are liquid nitrogen and laser removal operations. Laser technology for the removal of protective coatings is currently being researched and implemented in various areas of the aerospace industry. Delivering thousands of focused energy pulses, the laser ablates the coating surface by heating and dissolving the material applied to the substrate. The metal substrate will reflect the laser and redirect the energy to any remaining protective coating, thus preventing any collateral damage the substrate may suffer throughout the process. Liquid nitrogen jets are comparable to blasting with an ultra-high-pressure water jet but without the residual liquid that requires collection and removal. As the liquid nitrogen reaches the surface it is transformed into gaseous nitrogen and reenters the atmosphere without any contamination to surrounding hardware. These innovative technologies simplify corrosion repair by eliminating hazardous chemicals and repetitive manual labor from the coating removal process. One very significant advantage is the reduction of particulate contamination exposure to personnel. With the removal of coatings adjacent to sensitive flight hardware, a benefit of each technique for the space program is that no contamination such as beads, water, or sanding residue is left behind when the job is finished. One primary concern is the safe removal of coatings from thin aluminum honeycomb face sheet. NASA recently conducted thermal testing on liquid nitrogen systems and found that no damage occurred on 1/6" aluminum substrates. Wright Patterson Air Force Base, in conjunction with Boeing and NASA, is currently testing the laser removal technique for process qualification. Other applications of liquid

  2. Laser hair removal pearls. (United States)

    Tierney, Emily P; Goldberg, David J


    A number of lasers and light devices are now available for the treatment of unwanted hair. The goal of laser hair removal is to damage stem cells in the bulge of the follicle through the targeting of melanin, the endogenous chromophore for laser and light devices utilized to remove hair. The competing chromophores in the skin and hair, oxyhemoglobin and water, have a decreased absorption between 690 nm and 1000 nm, thus making this an ideal range for laser and light sources. Pearls of laser hair removal are presented in this review, focusing on four areas of recent development: (1) treatment of blond, white and gray hair; (2) paradoxical hypertrichosis; (3) laser hair removal in children; and (4) comparison of lasers and IPL. Laser and light-based technologies to remove hair represent one of the most exciting areas where discoveries by dermatologists have led to novel treatment approaches. It is likely that in the next decade, continued advancements in this field will bring us closer to the development of a more permanent and painless form of hair removal.

  3. Using Bayesian neural networks to classify forest scenes (United States)

    Vehtari, Aki; Heikkonen, Jukka; Lampinen, Jouko; Juujarvi, Jouni


    We present results that compare the performance of Bayesian learning methods for neural networks on the task of classifying forest scenes into trees and background. The classification task is demanding due to the texture richness of the trees, occlusions of the forest scene objects, and diverse lighting conditions under operation. This makes it difficult to determine which image features are optimal for the classification. A natural way to proceed is to extract many different types of potentially suitable features and to evaluate their usefulness in later processing stages. One approach to cope with a large number of features is to use Bayesian methods to control the model complexity. Bayesian learning uses a prior on model parameters, combines it with evidence from the training data, and then integrates over the resulting posterior to make predictions. With this method, we can use large networks and many features without fear of overfitting. For this classification task we compare two Bayesian learning methods for multi-layer perceptron (MLP) neural networks: (1) the evidence framework of MacKay, which uses a Gaussian approximation to the posterior weight distribution and maximizes with respect to hyperparameters; and (2) a Markov Chain Monte Carlo (MCMC) method due to Neal, in which the posterior distribution of the network parameters is numerically integrated using MCMC. As baseline classifiers for comparison we use (3) an MLP early-stop committee, (4) K-nearest-neighbor and (5) Classification And Regression Trees.
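    Of the baseline classifiers listed, K-nearest-neighbor is the easiest to make concrete. A minimal sketch follows; the two-dimensional feature vectors and labels are invented stand-ins for the image features discussed.

```python
import math
from collections import Counter

def knn_classify(x, train, k=3):
    """Majority vote among the k training points nearest to x (Euclidean distance)."""
    nearest = sorted(train, key=lambda pt: math.dist(x, pt[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.0, 0.0), "background"), ((0.1, 0.2), "background"), ((0.2, 0.1), "background"),
         ((1.0, 1.0), "tree"), ((0.9, 1.1), "tree"), ((1.1, 0.9), "tree")]
print(knn_classify((0.05, 0.05), train))  # → background
```

    Unlike the Bayesian MLPs compared in the paper, k-NN has no training phase at all, which is what makes it a useful floor for comparison.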

  4. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    Directory of Open Access Journals (Sweden)

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara


    Full Text Available This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student, argumentative essays. The paragraphs selected were initial, middle, and final paragraphs and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion and lexical sophistication and then modeled using statistical techniques. The paragraphs were also classified by human raters based on paragraph positioning. The performance of the reported model was well above chance and reported an accuracy of classification that was similar to human judgments of paragraph type (66% accuracy for human versus 65% accuracy for our model. The model's accuracy increased when longer paragraphs that provided more linguistic coverage and paragraphs judged by human raters to be of higher quality were examined. The findings support the notions that paragraph types contain specific linguistic features that allow them to be distinguished from one another. The finding reported in this study should prove beneficial in classroom writing instruction and in automated writing assessment.

  5. Using Narrow Band Photometry to Classify Stars and Brown Dwarfs

    CERN Document Server

    Mainzer, A. K.; Sievers, J. L.; Young, E. T.; McLean, Ian S.


    We present a new system of narrow band filters in the near infrared that can be used to classify stars and brown dwarfs. This set of four filters, spanning the H band, can be used to identify molecular features unique to brown dwarfs, such as H2O and CH4. The four filters are centered at 1.495 um (H2O), 1.595 um (continuum), 1.66 um (CH4), and 1.75 um (H2O). Using two H2O filters allows us to solve for individual objects' reddenings. This can be accomplished by constructing a color-color-color cube and rotating it until the reddening vector disappears. We created a model of predicted color-color-color values for different spectral types by integrating filter bandpass data with spectra of known stars and brown dwarfs. We validated this model by making photometric measurements of seven known L and T dwarfs, ranging from L1 - T7.5. The photometric measurements agree with the model to within +/-0.1 mag, allowing us to create spectral indices for different spectral types. We can classify A through early M stars to...
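    To illustrate how such narrow-band magnitudes turn into classification indices, here is a hedged sketch: each on-band minus continuum magnitude difference measures absorption depth in that molecular band. The filter wavelengths match the abstract, but the magnitudes and the methane cut-off below are hypothetical, and the reddening solution via the color-color-color cube is omitted.

```python
def color_indices(mags):
    """Absorption indices: on-band magnitude minus the 1.595 um continuum magnitude.

    mags is keyed by filter center wavelength in microns; a fainter on-band
    measurement (larger magnitude) means deeper absorption."""
    cont = mags[1.595]
    return {"h2o_short": mags[1.495] - cont,
            "ch4": mags[1.66] - cont,
            "h2o_long": mags[1.75] - cont}

def rough_type(indices, ch4_cut=0.3):
    """Hypothetical threshold: strong CH4 absorption flags a T-dwarf candidate."""
    return "T dwarf candidate" if indices["ch4"] > ch4_cut else "earlier type"

idx = color_indices({1.495: 15.2, 1.595: 14.5, 1.66: 15.1, 1.75: 15.0})
print(rough_type(idx))  # → T dwarf candidate
```

    The paper's actual classification compares all three indices against model tracks built from known spectra; a single threshold is only the crudest version of that idea.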

  6. Integrating language models into classifiers for BCI communication: a review (United States)

    Speier, W.; Arnold, C.; Pouratian, N.


    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  7. Even more Chironomid species for classifying lake nutrient status

    Directory of Open Access Journals (Sweden)

    Les Ruse


    Full Text Available The European Union Water Framework Directive (WFD) classifies the ecological status of a waterbody by the determination of its natural reference state to provide a measure of perturbation by human impacts, based on the taxonomic composition and abundance of aquatic species. Ruse (2010; 2011) has provided methods of assessing anthropogenic perturbations to lake ecological status, in terms of nutrient enrichment and acidification, by analysing collections of floating pupal exuviae discarded by emerging adult Chironomidae. The previous nutrient assessment method was derived from chironomid and environmental data collected during 178 lake surveys of all WFD types found in Britain. Canonical Correspondence Analysis provided species optima in relation to phosphate and nitrogen concentrations. Species found in fewer than three surveys were excluded from analysis in case of spurious association with environmental values. Since Ruse (2010), an additional 72 lakes have been surveyed, adding 31 more species for use in nutrient status assessment. These additional scoring species are reported here. The practical application of the Chironomid Pupal Exuvial Technique (CPET) to classify WFD lake nutrient status is demonstrated using CPET survey data from lakes in Poland.

  8. An Ocular Protein Triad Can Classify Four Complex Retinal Diseases (United States)

    Kuiper, J. J. W.; Beretta, L.; Nierkens, S.; van Leeuwen, R.; Ten Dam-van Loon, N. H.; Ossewaarde-van Norel, J.; Bartels, M. C.; de Groot-Mijnes, J. D. F.; Schellekens, P.; de Boer, J. H.; Radstake, T. R. D. J.


    Retinal diseases generally are vision-threatening conditions that warrant appropriate clinical decision-making, which currently depends solely upon extensive clinical screening by specialized ophthalmologists. In an era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test. Unsupervised clustering of ocular proteins revealed a classification strikingly similar to the clinical phenotypes of each disease group studied. We developed and independently validated a parsimonious model based on merely three proteins: interleukin (IL)-10, IL-21, and angiotensin converting enzyme (ACE), that could correctly classify patients with an overall accuracy, sensitivity and specificity of 86.7%, 79.4% and 92.5%, respectively. Here, we provide proof-of-concept for molecular profiling as a diagnostic aid for ophthalmologists in the care of patients with retinal conditions.

  9. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    Full Text Available A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  10. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces. (United States)

    Bashashati, Hossein; Ward, Rabab K; Birch, Gary E; Bashashati, Ali


    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.
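    For readers unfamiliar with the LDA baseline mentioned, a two-class LDA with a shared diagonal covariance estimate fits in a few lines. The diagonal simplification and the toy two-feature data are ours, not from the paper.

```python
from statistics import fmean

def fit_lda(X0, X1):
    """Two-class LDA with a shared diagonal covariance estimate.

    Returns a function mapping a feature vector to class 0 or 1 by
    thresholding the linear discriminant at the midpoint of the class means."""
    d = len(X0[0])
    m0 = [fmean(x[j] for x in X0) for j in range(d)]
    m1 = [fmean(x[j] for x in X1) for j in range(d)]
    pooled = []
    for j in range(d):
        ss = sum((x[j] - m0[j]) ** 2 for x in X0) + sum((x[j] - m1[j]) ** 2 for x in X1)
        pooled.append(ss / (len(X0) + len(X1) - 2))   # pooled per-feature variance
    w = [(m1[j] - m0[j]) / pooled[j] for j in range(d)]
    b = sum(w[j] * (m0[j] + m1[j]) / 2 for j in range(d))
    return lambda x: 1 if sum(w[j] * x[j] for j in range(d)) > b else 0

X0 = [[0.0, 0.0], [0.1, -0.1], [-0.1, 0.1]]
X1 = [[2.0, 2.0], [2.1, 1.9], [1.9, 2.1]]
clf = fit_lda(X0, X1)
print(clf([2.0, 2.0]))  # → 1
```

    The paper's point is precisely that this default choice is not always best: which classifier wins depends on the feature extraction feeding it.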


    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi


    Full Text Available Search engines are used for retrieving information from the web. Most of the time, attention is confined to the top 10 results, sometimes only the top 5, because of time constraints and users' reliance on the search engines. Users believe that the top 10 or 5 results are the most relevant. Here comes the problem of spamdexing: a method of deceiving search-result quality. Falsified metrics, such as inserting enormous numbers of keywords or links in a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on RepTree (a regression-tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank, and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are used in the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies the web spam entries. Results are given. Regression-tree classification seems to perform well, as shown through experiments.
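    The tree classifier described is built from threshold splits over link features. Its one-level relative, a decision stump, shows the core split-selection operation in isolation; the feature values and labels below are invented, not taken from WEBSPAM-UK-2007.

```python
def best_stump(X, y):
    """Exhaustive search for the (feature, threshold, direction) split
    with the fewest training errors; effectively a depth-1 decision tree."""
    best = None
    for j in range(len(X[0])):
        for t in {x[j] for x in X}:
            for sign in (1, -1):
                pred = [1 if sign * (x[j] - t) > 0 else 0 for x in X]
                err = sum(p != label for p, label in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    _, j, t, sign = best
    return lambda x: 1 if sign * (x[j] - t) > 0 else 0

# invented link features: [in/out-link ratio, fraction of keyword-stuffed anchors]
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
y = [0, 0, 1, 1]          # 0 = legitimate, 1 = spam
is_spam = best_stump(X, y)
print(is_spam([0.15, 0.85]))  # → 1
```

    A full tree learner like RepTree recursively applies this split search to the partitions a split creates, and then prunes the result.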

  12. An Ocular Protein Triad Can Classify Four Complex Retinal Diseases (United States)

    Kuiper, J. J. W.; Beretta, L.; Nierkens, S.; van Leeuwen, R.; ten Dam-van Loon, N. H.; Ossewaarde-van Norel, J.; Bartels, M. C.; de Groot-Mijnes, J. D. F.; Schellekens, P.; de Boer, J. H.; Radstake, T. R. D. J.


    Retinal diseases generally are vision-threatening conditions that warrant appropriate clinical decision-making, which currently depends solely upon extensive clinical screening by specialized ophthalmologists. In an era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test. Unsupervised clustering of ocular proteins revealed a classification strikingly similar to the clinical phenotypes of each disease group studied. We developed and independently validated a parsimonious model based on merely three proteins: interleukin (IL)-10, IL-21, and angiotensin converting enzyme (ACE), that could correctly classify patients with an overall accuracy, sensitivity and specificity of 86.7%, 79.4% and 92.5%, respectively. Here, we provide proof-of-concept for molecular profiling as a diagnostic aid for ophthalmologists in the care of patients with retinal conditions. PMID:28128370

  13. Discriminating complex networks through supervised NDR and Bayesian classifier (United States)

    Yan, Ke-Sheng; Rong, Li-Li; Yu, Kai


    Discriminating complex networks is a particularly important task for the purpose of the systematic study of networks. In order to discriminate unknown networks exactly, a large set of network measurements needs to be taken into account to comprehensively consider network properties. However, as we demonstrate in this paper, these measurements are in general nonlinearly correlated with each other, resulting in a wide variety of redundant measurements which unintentionally explain the same aspects of network properties. To solve this problem, we adopt supervised nonlinear dimensionality reduction (NDR) to eliminate the nonlinear redundancy and visualize networks in a low-dimensional projection space. Though unsupervised NDR can achieve the same aim, we illustrate that supervised NDR is more appropriate than unsupervised NDR for the discrimination task. After that, we apply a Bayesian classifier (BC) in the projection space to discriminate the unknown network, taking the projection score vectors as the input of the classifier. We demonstrate the feasibility and effectiveness of this proposed method on six extensively studied real networks, ranging from technological to social and biological. Moreover, the effectiveness and advantage of the proposed method is shown by contrast experiments with an existing method.

  14. Occlusion Handling via Random Subspace Classifiers for Human Detection. (United States)

    Marín, Javier; Vázquez, David; López, Antonio M; Amores, Jaume; Kuncheva, Ludmila I


    This paper describes a general method to address partial occlusions for human detection in still images. The random subspace method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach's capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non-occluded data. In contrast to many recent approaches, we propose a method which does not require manual labeling of body parts, defining any semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called PobleSec, in which a considerable number of targets are partially occluded. The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.

  15. Decision Tree Classifiers for Star/Galaxy Separation

    CERN Document Server

    Vasconcellos, E C; Gal, R R; LaBarbera, F L; Capelato, H V; Velho, H F Campos; Trevisan, M; Ruiz, R S R


    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT and Ball et al. (2006). We find that our FT classifier is comparable or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with m...

  16. Phenotype Recognition with Combined Features and Random Subspace Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Pham Tuan D


    Full Text Available Abstract Background Automated, image-based high-content screening is a fundamental tool for discovery in biological science. Modern robotic fluorescence microscopes are able to capture thousands of images from massively parallel experiments such as RNA interference (RNAi) or small-molecule screens. As such, efficient computational methods are required for automatic cellular phenotype identification capable of dealing with large image data sets. In this paper we investigated an efficient method for the extraction of quantitative features from images by combining second-order statistics, or Haralick features, with the curvelet transform. A random subspace based classifier ensemble with multiple layer perceptron (MLP) as the base classifier was then exploited for classification. Haralick features estimate image properties related to second-order statistics based on the grey level co-occurrence matrix (GLCM), which has been extensively used for various image processing applications. The curvelet transform has a sparser representation of the image than the wavelet, thus offering a description with higher time-frequency resolution and a high degree of directionality and anisotropy, which is particularly appropriate for many images rich with edges and curves. A combined feature description from Haralick features and the curvelet transform can further increase the accuracy of classification by exploiting their complementary information. We then investigate the applicability of the random subspace (RS) ensemble method for phenotype classification based on microscopy images. A base classifier is trained with a RS-sampled subset of the original feature set and the ensemble assigns a class label by majority voting. Results Experimental results on phenotype recognition from three benchmark image sets including HeLa, CHO and RNAi show the effectiveness of the proposed approach. The combined feature is better than any individual one in classification accuracy. The
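    As a taste of the Haralick side of the feature pipeline, the grey-level co-occurrence matrix and one second-order statistic (contrast) can be computed directly. The 2-level toy image is illustrative; real pipelines quantize to more grey levels and accumulate counts over several pixel offsets.

```python
def glcm(img, levels):
    """Grey-level co-occurrence matrix for horizontally adjacent pixel pairs."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """Haralick contrast: co-occurrence frequency weighted by squared grey-level gap."""
    total = sum(sum(r) for r in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1, 1],
       [0, 0, 1, 1]]
print(contrast(glcm(img, 2)))  # one 0->1 transition out of three pairs per row
```

    Statistics like this one, computed for each subspace-sampled feature subset, are what the base MLPs in the ensemble are trained on.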

  17. A compact 3D VLSI classifier using bagging threshold network ensembles. (United States)

    Bermak, A; Martinez, D


    A bagging ensemble consists of a set of classifiers trained independently and combined by a majority vote. Such a combination improves generalization performance but can require large amounts of memory and computation, a serious drawback for addressing portable real-time pattern recognition applications. We report here a compact three-dimensional (3D) multiprecision very large-scale integration (VLSI) implementation of a bagging ensemble. In our circuit, individual classifiers are decision trees implemented as threshold networks - one layer of threshold logic units (TLUs) followed by combinatorial logic functions. The hardware was fabricated using 0.7-μm CMOS technology and packaged using MCM-V micro-packaging technology. The 3D chip implements up to 192 TLUs operating at a speed of up to 48 GCPPS in a volume of (w × L × h) = (2 × 2 × 0.7) cm³. The 3D circuit features a high level of programmability and flexibility, offering the possibility to make efficient use of the hardware resources in order to reduce power consumption. Successful operation of the 3D chip for various precisions and ensemble sizes is demonstrated through an electronic nose application.
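    The bagging combination rule itself is hardware-independent and tiny in software: train each component on a bootstrap resample, then take a majority vote. A sketch with one-dimensional threshold "stumps" as the component classifiers follows; the training data, round count, and seed are all invented.

```python
import random
from statistics import fmean

def train_stump(xs, ys):
    """Threshold stump placed midway between the two class means (1-D data)."""
    m0 = fmean(x for x, c in zip(xs, ys) if c == 0)
    m1 = fmean(x for x, c in zip(xs, ys) if c == 1)
    t, hi = (m0 + m1) / 2, (1 if m1 > m0 else 0)
    return lambda x: hi if x > t else 1 - hi

def bagging_ensemble(xs, ys, rounds=7, seed=1):
    """Train each stump on a bootstrap resample; combine by majority vote."""
    rng = random.Random(seed)
    stumps = []
    while len(stumps) < rounds:
        idx = [rng.randrange(len(xs)) for _ in xs]
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        if len(set(by)) < 2:              # resample must contain both classes
            continue
        stumps.append(train_stump(bx, by))
    return lambda x: max((0, 1), key=lambda c: sum(s(x) == c for s in stumps))

vote = bagging_ensemble([0.0, 0.1, 0.2, 0.9, 1.0, 1.1], [0, 0, 0, 1, 1, 1])
print(vote(0.05), vote(1.05))  # → 0 1
```

    In the chip, each stump corresponds to a threshold logic unit and the vote to the combinatorial logic layer; the software form makes the memory/computation trade-off the abstract mentions easy to see.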

  18. Classifying regional development in Iran (Application of Composite Index Approach

    Directory of Open Access Journals (Sweden)

    A. Sharifzadeh


    Full Text Available Extended abstract 1- Introduction The spatial economy of Iran, like that of so many other developing countries, is characterized by an uneven spatial pattern of economic activities. The problem of spatial inequality emerged when efficiency-oriented sectoral policies came into conflict with the spatial dimension of development (Atash, 1988). Due to this conflict, extremely imbalanced development was created in Iran. Moreover, the spatially uneven distribution of economic activities in Iran is unknown and incompletely documented. So, there is an urgent need for more efficient and effective design, targeting, and implementation of interventions to manage spatial imbalances in development. Hence, the identification of development patterns at the spatial scale and the factors generating them can help improve planning if development programs focus on removing the constraints adversely affecting development in potentially good areas. There is a need for research that would describe and explain spatial development patterns as well as propose possible strategies which can be used to develop the country and reduce spatial imbalances. The main objective of this research was to determine the spatial economic development level in order to identify the spatial pattern of development and explain the determinants of such imbalance in Iran, based on the methodology of a composite index of development. Then, Iran's provinces were ranked and classified according to the calculated composite index. To collect the required data, the census of 2006 and yearbooks from various years were used. 2- Theoretical bases Theories of regional inequality as well as empirical evidence regarding actual trends at the national or international level have been discussed and debated in the economic literature for over three decades. Early debates concerning the impact of market mechanisms on regional inequality in the West (Myrdal, 1957) have become popular again in the 1990s. There is a conflict on probable outcomes

  19. Complexity Measure Revisited: A New Algorithm for Classifying Cardiac Arrhythmias (United States)


    Ayesta; L. Serrano; I. Romero (Department of Electrical and Electronic Engineering, Public University of Navarra, Campus de Arrosadía, 31006 Pamplona, Spain)

  20. Lambda-perceptron: an adaptive classifier for data-streams


    Pavlidis, N.; Tasoulis, Dimitrios; Adams, N.M.; Hand, D J


    Streaming data introduce challenges mainly due to changing data distributions (population drift). To accommodate population drift we develop a novel linear adaptive online classification method motivated by ideas from adaptive filtering. Our approach allows the impact of past data on parameter estimates to be gradually removed, a process termed forgetting, yielding completely online adaptive algorithms. Extensive experimental results show that this approach adjusts the forgetting mechanism to...
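The abstract does not give the update equations; as an illustrative sketch, exponential forgetting in the adaptive-filtering sense can be implemented with recursive least squares, where a factor lam < 1 geometrically down-weights old samples (the variable names and the drift scenario below are invented for the demo):

```python
import numpy as np

def rls_forgetting_update(w, P, x, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam < 1.

    Older samples are down-weighted geometrically, so the parameter
    estimate tracks a drifting data distribution (population drift).
    """
    Px = P @ x
    k = Px / (lam + x @ Px)           # gain vector
    e = y - w @ x                     # prediction error on the new sample
    w = w + k * e                     # parameter update
    P = (P - np.outer(k, Px)) / lam   # inverse-covariance update
    return w, P

# Track a linear model whose true weights change abruptly (drift).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
w, P = np.zeros(2), np.eye(2) * 100.0
for t in range(500):
    if t == 250:                      # population drift halfway through
        w_true = np.array([-1.0, 3.0])
    x = rng.normal(size=2)
    y = w_true @ x + 0.01 * rng.normal()
    w, P = rls_forgetting_update(w, P, x, y)
print(np.round(w, 2))                 # close to the post-drift weights
```

Because the forgetting factor bounds the effective memory to roughly 1/(1 - lam) samples, the estimate recovers quickly after the drift instead of averaging over the whole stream.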

  1. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento


    Full Text Available This paper compares the performance of a multi-class least squares support vector machine (LS-SVM) against a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). The comparison used a data set of 5000 images of handwritten digits (500 images for each digit from 0 to 9), each image of 20 x 20 pixels. The inputs to each system were the 400-dimensional vectors corresponding to each image (no feature extraction was performed). Both classifiers used a one-vs-all strategy to enable multi-class classification and a random cross-validation function for the process of minimizing the cost function. The comparison metrics were precision and training time under the same computational conditions. Both techniques achieved a precision above 95%, with LS-SVM slightly more accurate. In computational cost, however, there was a marked difference: LS-SVM training required 16.42% less time than the logistic regression model under the same computational conditions.
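A minimal sketch of the one-vs-all strategy both classifiers share, here with plain logistic regression trained by gradient descent on a toy 3-class problem (the data, learning rate and epoch count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def train_ova_logistic(X, y, n_classes, lr=0.5, epochs=200):
    """One-vs-all: train one binary logistic regressor per class."""
    W = np.zeros((n_classes, X.shape[1]))
    for c in range(n_classes):
        t = (y == c).astype(float)                  # 1 for class c, else 0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ W[c]))
            W[c] -= lr * (X.T @ (p - t)) / len(y)   # log-loss gradient step
    return W

def predict(W, X):
    """The most confident of the binary classifiers wins."""
    return np.argmax(X @ W.T, axis=1)

# Toy 3-class problem standing in for the 10 digit classes.
rng = np.random.default_rng(4)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
y = rng.integers(0, 3, 300)
X = np.hstack([centers[y] + rng.normal(size=(300, 2)),
               np.ones((300, 1))])                  # append a bias column
W = train_ova_logistic(X, y, 3)
print((predict(W, X) == y).mean())                  # training accuracy
```

For the 10-digit task the same scheme would train ten binary models on the 400-dimensional pixel vectors.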

  2. Arsenic removal from water (United States)

    Moore, Robert C.; Anderson, D. Richard


    Methods for removing arsenic from water by addition of inexpensive and commonly available magnesium oxide, magnesium hydroxide, calcium oxide, or calcium hydroxide to the water. The hydroxide has a strong chemical affinity for arsenic and rapidly adsorbs arsenic, even in the presence of carbonate in the water. Simple and commercially available mechanical methods for removal of magnesium hydroxide particles with adsorbed arsenic from drinking water can be used, including filtration, dissolved air flotation, vortex separation, or centrifugal separation. A method for continuous removal of arsenic from water is provided. Also provided is a method for concentrating arsenic in a water sample to facilitate quantification of arsenic, by means of magnesium or calcium hydroxide adsorption.

  3. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确


    Due to the complicated background of objectives and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extraction of the road network from high-resolution SAR images. Firstly, fuzzy C-means is used to classify the filtered SAR image in an unsupervised manner, and the road pixels are isolated from the image to simplify the extraction of the road network. Secondly, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces the extraction of the road network to a global search for optimal continuous curves passing through some seed points. Finally, regarding the curves as individuals and coding each chromosome as integer offsets relative to the coordinates, genetic operations are used to search for globally optimal roads. The experimental results show that the algorithm can effectively extract the road network from high-resolution SAR images.

  4. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław


    Full Text Available The article outlines the application of the process approach to the functional description of the designed IT system supporting the operations of the secret office, which processes classified documents. The article describes the application of the method of incremental modeling of business processes according to the BPMN model to the description of the processes currently implemented manually (“as is”) and the target processes (“to be”) that use RFID technology for the purpose of their automation. Additionally, examples of applying the method of structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency were presented. An extension of the process analysis method is the possibility of applying a warehouse of processes and process mining methods.

  5. Refining and classifying finite-time Lyapunov exponent ridges

    CERN Document Server

    Allshouse, Michael R


    While more rigorous and sophisticated methods for identifying Lagrangian based coherent structures exist, the finite-time Lyapunov exponent (FTLE) field remains a straightforward and popular method for gaining some insight into transport by complex, time-dependent two-dimensional flows. In light of its enduring appeal, and in support of good practice, we begin by investigating the effects of discretization and noise on two numerical approaches for calculating the FTLE field. A practical method to extract and refine FTLE ridges in two-dimensional flows, which builds on previous methods, is then presented. Seeking to better ascertain the role of an FTLE ridge in flow transport, we adapt an existing classification scheme and provide a thorough treatment of the challenges of classifying the types of deformation represented by an FTLE ridge. As a practical demonstration, the methods are applied to an ocean surface velocity field data set generated by a numerical model.

  6. Sex Bias in Classifying Borderline and Narcissistic Personality Disorder. (United States)

    Braamhorst, Wouter; Lobbestael, Jill; Emons, Wilco H M; Arntz, Arnoud; Witteman, Cilia L M; Bekker, Marrie H J


    This study investigated sex bias in the classification of borderline and narcissistic personality disorders. A sample of psychologists in training for a post-master degree (N = 180) read brief case histories (male or female version) and made a DSM classification. To differentiate sex bias due to sex stereotyping from bias due to base-rate variation, we used different case histories, respectively: (1) non-ambiguous case histories with enough criteria of either borderline or narcissistic personality disorder to meet the threshold for classification, and (2) an ambiguous case with subthreshold features of both borderline and narcissistic personality disorder. Results showed significant differences due to sex of the patient in the ambiguous condition. Thus, when the diagnosis is not straightforward, as in the case of mixed subthreshold features, sex bias is present and is influenced by base-rate variation. These findings emphasize the need for caution in classifying personality disorders, especially borderline or narcissistic traits.

  7. Feature Fusion Based SVM Classifier for Protein Subcellular Localization Prediction. (United States)

    Rahman, Julia; Mondal, Md Nazrul Islam; Islam, Md Khaled Ben; Hasan, Md Al Mehedi


    Because of the importance of protein subcellular localization in different branches of life science and drug discovery, researchers have focused their attention on protein subcellular localization prediction. Effective representation of features from protein sequences plays a vital role in protein subcellular localization prediction, especially for machine learning techniques. Single feature representations like pseudo amino acid composition (PseAAC), physiochemical property models (PPM), and amino acid index distribution (AAID) contain insufficient information from protein sequences. To deal with this problem, we have proposed two feature fusion representations, AAIDPAAC and PPMPAAC, to work with Support Vector Machine classifiers; they fuse PseAAC with AAID and with PPM, respectively. We have evaluated the performance of both single and fused feature representations on a Gram-negative bacterial dataset. We obtained at least 3% higher actual accuracy with AAIDPAAC and 2% higher locative accuracy with PPMPAAC than with single feature representations.
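For orientation, the classical amino acid composition underlying PseAAC is easy to compute; a minimal sketch (the example sequence is arbitrary):

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dim amino acid composition vector: the frequency of each
    residue type, i.e. the classical basis that PseAAC extends
    with sequence-order correlation terms."""
    counts = Counter(seq)
    return [counts.get(a, 0) / len(seq) for a in AMINO_ACIDS]

v = aa_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
assert abs(sum(v) - 1.0) < 1e-9   # frequencies sum to one
print([round(f, 3) for f in v[:5]])
```

Fusion schemes such as AAIDPAAC concatenate vectors like this with other descriptor families before feeding them to the SVM.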

  8. Performance evaluation of artificial intelligence classifiers for the medical domain. (United States)

    Smith, A E; Nugent, C D; McClean, S I


    The application of artificial intelligence systems is still not widespread in the medical field, however there is an increasing necessity for these to handle the surfeit of information available. One drawback to their implementation is the lack of criteria or guidelines for the evaluation of these systems. This is the primary issue in their acceptability to clinicians, who require them for decision support and therefore need evidence that these systems meet the special safety-critical requirements of the domain. This paper shows evidence that the most prevalent form of intelligent system, neural networks, is generally not being evaluated rigorously regarding classification precision. A taxonomy of the types of evaluation tests that can be carried out, to gauge inherent performance of the outputs of intelligent systems has been assembled, and the results of this presented in a clear and concise form, which should be applicable to all intelligent classifiers for medicine.

  9. A non-parametric 2D deformable template classifier

    DEFF Research Database (Denmark)

    Schultz, Nette; Nielsen, Allan Aasbjerg; Conradsen, Knut;


    We introduce an interactive segmentation method for a sea floor survey. The method is based on a deformable template classifier and is developed to segment data from an echo sounder post-processor called RoxAnn. RoxAnn collects two different measures for each observation point, and in this 2D feature space the ship-master will be able to interactively define a segmentation map, which is refined and optimized by the deformable template algorithms. The deformable templates are defined as two-dimensional vector-cycles. Local random transformations are applied to the vector-cycles, and stochastic relaxation in a Bayesian scheme is used. In the Bayesian likelihood a class density function, and an estimate thereof, is introduced, which is designed to separate the feature space. The method is verified on data collected in Øresund, Scandinavia. The data come from four geographically different areas. Two...

  10. On the way of classifying new states of active matter (United States)

    Menzel, Andreas M.


    With ongoing research into the collective behavior of self-propelled particles, new states of active matter are revealed. Some of them are entirely based on the non-equilibrium character and do not have an immediate equilibrium counterpart. In their recent work, Romanczuk et al (2016 New J. Phys. 18 063015) concentrate on the characterization of smectic-like states of active matter. A new type, referred to by the authors as smectic P, is described. In this state, the active particles form stacked layers and self-propel along them. Identifying and classifying states and phases of non-equilibrium matter, including the transitions between them, is an up-to-date effort that will certainly extend for a longer period into the future.

  11. Handwritten Bangla Alphabet Recognition using an MLP Based Classifier

    CERN Document Server

    Basu, Subhadip; Sarkar, Ram; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak Kumar


    The work presented here involves the design of a Multi Layer Perceptron (MLP) based classifier for recognition of handwritten Bangla alphabet using a 76-element feature set. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten characters of the Bangla alphabet includes 24 shadow features, 16 centroid features and 36 longest-run features. Recognition performances of the MLP designed to work with this feature set are experimentally observed as 86.46% and 75.05% on the samples of the training and the test sets respectively. The work has useful application in the development of a complete OCR system for handwritten Bangla text.

  12. Classifying algorithms for SIFT-MS technology and medical diagnosis. (United States)

    Moorhead, K T; Lee, D; Chase, J G; Moot, A R; Ledingham, K M; Scotter, J; Allardyce, R A; Senthilmohan, S T; Endre, Z


    Selected Ion Flow Tube-Mass Spectrometry (SIFT-MS) is an analytical technique for real-time quantification of trace gases in air or breath samples. The SIFT-MS system thus offers unique potential for early, rapid detection of disease states. Identification of volatile organic compound (VOC) masses that contribute strongly towards a successful classification clearly highlights potential new biomarkers. A method utilising kernel density estimates is thus presented for classifying unknown samples. It is validated in a simple known case and in a clinical setting (before and after dialysis). The simple case with nitrogen in Tedlar bags returned a 100% success rate, as expected. The clinical proof-of-concept, with seven tests on one patient, had an ROC curve area of 0.89. These results validate the method presented and illustrate the emerging clinical potential of this technology.
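The abstract does not specify the estimator details; a minimal sketch of classification by comparing Gaussian kernel density estimates, with invented one-dimensional "trace-gas level" data, might look like:

```python
import numpy as np

def kde(points, x, h=0.5):
    """Gaussian kernel density estimate at scalar x from a 1-D sample."""
    u = (x - points) / h
    return np.exp(-0.5 * u**2).sum() / (len(points) * h * np.sqrt(2 * np.pi))

def classify(x, class_a, class_b, h=0.5):
    """Assign x to whichever class gives it the higher estimated density."""
    return "A" if kde(class_a, x, h) > kde(class_b, x, h) else "B"

rng = np.random.default_rng(1)
healthy  = rng.normal(0.0, 1.0, 200)     # simulated trace-gas level (a.u.)
diseased = rng.normal(4.0, 1.0, 200)
print(classify(0.3, healthy, diseased))  # → A
print(classify(3.8, healthy, diseased))  # → B
```

A real SIFT-MS classifier would build such densities per VOC mass and combine the evidence across masses; the bandwidth h is a tuning choice.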

  13. Using point-set compression to classify folk songs

    DEFF Research Database (Denmark)

    Meredith, David


    ...-neighbour algorithm and leave-one-out cross-validation to classify the 360 melodies into tune families. The classifications produced by the algorithms were compared with a ground-truth classification prepared by expert musicologists. Twelve of the thirteen compressors used in the experiment were based on the discovery of translational equivalence classes (TECs) of maximal translatable patterns (MTPs) in point-set representations of the melodies. The twelve algorithms consisted of four variants of each of three basic algorithms, COSIATEC, SIATECCompress and Forth’s algorithm. The main difference between ... similarity between folk-songs for classification purposes is highly dependent upon the actual compressor chosen. Furthermore, it seems that compressors based on finding maximal repeated patterns in point-set representations of music show more promise for NCD-based music classification than general...
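The normalized compression distance (NCD) at the heart of such methods can be sketched with a general-purpose compressor; here zlib stands in for the point-set compressors the study evaluates, and the byte-string "tunes" are invented toy data:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for structurally similar
    inputs, closer to 1 for unrelated ones."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented byte-string "melodies": two variants of one tune family
# versus unrelated material.
tune_a1 = b"CDEFGFEDCDEFGFEDC" * 4
tune_a2 = b"CDEFGFEDCDEFGAFEC" * 4
tune_b  = b"GABCAGEDFACEGBDFA" * 4
assert ncd(tune_a1, tune_a2) < ncd(tune_a1, tune_b)
print(round(ncd(tune_a1, tune_a2), 3), round(ncd(tune_a1, tune_b), 3))
```

Swapping zlib for a music-specific compressor such as COSIATEC changes only the `compress` calls, which is why compressor choice dominates classification quality.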

  14. Classifying orbits in the restricted three-body problem

    CERN Document Server

    Zotos, Euaggelos E


    The case of the planar circular restricted three-body problem is used as a test field in order to determine the character of the orbits of a small body which moves under the gravitational influence of the two heavy primary bodies. We conduct a thorough numerical analysis on the phase space mixing by classifying initial conditions of orbits and distinguishing between three types of motion: (i) bounded, (ii) escape and (iii) collisional. The presented outcomes reveal the high complexity of this dynamical system. Furthermore, our numerical analysis shows a remarkable presence of fractal basin boundaries along all the escape regimes. Interpreting the collisional motion as leaking in the phase space we related our results to both chaotic scattering and the theory of leaking Hamiltonian systems. We also determined the escape and collisional basins and computed the corresponding escape/collisional times. We hope our contribution to be useful for a further understanding of the escape and collisional mechanism of orbi...

  15. Support vector classifier based on principal component analysis

    Institute of Scientific and Technical Information of China (English)


    The support vector classifier (SVC) has superior advantages for small-sample learning problems with high dimensions, and especially good generalization ability. However, there is some redundancy among the high dimensions of the original samples, and the main features of the samples may be picked out first to improve the performance of the SVC. A principal component analysis (PCA) is employed to reduce the feature dimensions of the original samples and pre-select the main features efficiently, and an SVC is constructed in the selected feature space to improve the learning speed and identification rate of the SVC. Furthermore, a heuristic genetic algorithm-based automatic model selection is proposed to determine the hyperparameters of the SVC and evaluate the performance of the learning machines. Experiments performed on the Heart and Adult benchmark data sets demonstrate that the proposed PCA-based SVC not only reduces the test time drastically, but also improves the identification rate effectively.
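A dependency-free sketch of the PCA front end: principal axes via SVD, with a nearest-centroid rule standing in for the SVC (the synthetic data and the centroid classifier are illustrative assumptions):

```python
import numpy as np

def pca_fit(X, k):
    """Mean and top-k principal axes of X (rows are samples), via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, axes):
    return (X - mu) @ axes.T

# Two classes separated along one direction, embedded in 50 noisy dims.
rng = np.random.default_rng(2)
n, d = 200, 50
labels = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) * 0.3
X[:, 0] += labels * 3.0

mu, axes = pca_fit(X, k=2)                   # 50 -> 2 dimensions
Z = pca_transform(X, mu, axes)

# Nearest-centroid rule on the reduced features (SVC stand-in).
c0, c1 = Z[labels == 0].mean(axis=0), Z[labels == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
print((pred == labels).mean())               # training accuracy
```

Training any kernel classifier on Z instead of X is what cuts the test time: kernel evaluations scale with the reduced dimension k rather than d.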

  16. A Speedy Cardiovascular Diseases Classifier Using Multiple Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Wah Ching Lee


    Full Text Available Each year, some 30 percent of global deaths are caused by cardiovascular diseases. This figure is worsening due to both the increasing elderly population and severe shortages of medical personnel. The development of a cardiovascular diseases classifier (CDC) for auto-diagnosis will help solve the problem. Former CDCs did not achieve quick evaluation of cardiovascular diseases. In this letter, a new CDC to achieve speedy detection is investigated. This investigation incorporates analytic hierarchy process (AHP)-based multiple criteria decision analysis (MCDA) to develop feature vectors for a Support Vector Machine. The MCDA facilitates the efficient assignment of appropriate weightings to potential patients, thus scaling down the number of features. Since the new CDC adopts only the most meaningful features for discriminating between healthy persons and cardiovascular disease patients, speedy detection of cardiovascular diseases has been successfully implemented.

  17. Early Detection of Breast Cancer using SVM Classifier Technique

    CERN Document Server

    Rejani, Y Ireaneus Anna


    This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on the solution of two problems: how to detect tumors as suspicious regions with a very weak contrast to their background, and how to extract features which categorize tumors. The tumor detection method follows the scheme of (a) mammogram enhancement, (b) segmentation of the tumor area, (c) extraction of features from the segmented tumor area, and (d) use of an SVM classifier. Enhancement can be defined as conversion of the image quality to a better and more understandable level. The mammogram enhancement procedure includes filtering, a top-hat operation, and DWT; contrast stretching is then used to increase the contrast of the image. Segmentation of mammogram images plays an important role in improving the detection and diagnosis of breast cancer; the most common segmentation method used is thresholding. The features are extracted from the segmented breast area. The next stage includes...

  18. Should Hypersexual Disorder be Classified as an Addiction? (United States)

    Kor, Ariel; Fogel, Yehuda; Reid, Rory C; Potenza, Marc N


    Hypersexual behavior has been documented within clinical and research settings over the past decade. Despite recent research on hypersexuality and its associated features, many questions remain about how best to define and classify hypersexual behavior. Diagnostic criteria for Hypersexual Disorder (HD) were proposed for the DSM-5, and a preliminary field trial has lent some support to the reliability and validity of the HD diagnosis. However, debate exists with respect to the extent to which the disorder might be categorized as a non-substance or behavioral addiction. In this article, we discuss this debate in the context of data citing similarities and differences between hypersexual disorder, drug addictions, and pathological gambling. The authors conclude that despite many similarities between the features of hypersexual behavior and substance-related disorders, research on HD is still in its infancy, and much remains to be learned before HD can definitively be characterized as an addiction.

  19. Statistical Mechanical Development of a Sparse Bayesian Classifier (United States)

    Uda, Shinsuke; Kabashima, Yoshiyuki


    The demand for extracting rules from high dimensional real world data is increasing in various fields. However, the possible redundancy of such data sometimes makes it difficult to obtain a good generalization ability for novel samples. To resolve this problem, we provide a scheme that reduces the effective dimensions of data by pruning redundant components for bicategorical classification based on the Bayesian framework. First, the potential of the proposed method is confirmed in ideal situations using the replica method. Unfortunately, performing the scheme exactly is computationally difficult. So, we next develop a tractable approximation algorithm, which turns out to offer nearly optimal performance in ideal cases when the system size is large. Finally, the efficacy of the developed classifier is experimentally examined for a real world problem of colon cancer classification, which shows that the developed method can be practically useful.

  20. Performance Evaluation of Bagged RBF Classifier for Data Mining Applications

    Directory of Open Access Journals (Sweden)



    Full Text Available Data mining is the use of algorithms to extract the information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes. It is often referred to as supervised learning because the classes are determined before examining the data. The feasibility and the benefits of the proposed approaches are demonstrated by means of data mining applications like intrusion detection, direct marketing, and signature verification. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. Bagging and boosting are two relatively new but popular methods for producing ensembles. In this work, bagging is evaluated on real and benchmark data sets of intrusion detection, direct marketing, and signature verification, in conjunction with a radial basis function classifier as the base learner. The proposed bagged radial basis function is superior to the individual approach for data mining applications in terms of classification accuracy.
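A minimal sketch of bagging with a weak base learner; decision stumps stand in for the RBF network here, and the two-dimensional toy data are invented:

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold rule (a weak base learner)."""
    best, best_acc = (0, 0.0, 1), -1.0
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                acc = (np.where(X[:, f] > t, s, -s) == y).mean()
                if acc > best_acc:
                    best, best_acc = (f, t, s), acc
    return best

def bagged_predict(stumps, X):
    """Majority vote over stumps trained on bootstrap resamples."""
    votes = sum(np.where(X[:, f] > t, s, -s) for f, t, s in stumps)
    return np.sign(votes)

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # diagonal decision boundary

stumps = []
for _ in range(25):                          # an odd count avoids vote ties
    idx = rng.integers(0, n, n)              # bootstrap: sample with replacement
    stumps.append(fit_stump(X[idx], y[idx]))
print((bagged_predict(stumps, X) == y).mean())
```

Replacing `fit_stump` with an RBF-network trainer gives the bagged RBF scheme of the paper; the bootstrap-and-vote wrapper is unchanged.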

  1. Higher School Marketing Strategy Formation: Classifying the Factors

    Directory of Open Access Journals (Sweden)

    N. K. Shemetova


    Full Text Available The paper deals with the main trends of higher school management strategy formation. The author specifies the educational changes in the modern information society determining the strategy options. For each professional training level, the author denotes the set of strategic factors affecting the educational service consumers and, therefore, the effectiveness of the higher school marketing. The given factors are classified from the standpoints of the providers and consumers of educational services (enrollees, students, graduates and postgraduates). The research methods include statistical analysis and general methods of scientific analysis, synthesis, induction, deduction, comparison, and classification. The author is convinced that the university management should develop the necessary prerequisites for raising the graduates’ competitiveness in the labor market, and stimulate the active marketing policies of the relevant subdivisions and departments. In the author’s opinion, the above classification of marketing strategy factors can be used as a system of values for educational service providers.

  2. Building multiclass classifiers for remote homology detection and fold recognition

    Directory of Open Access Journals (Sweden)

    Karypis George


    Full Text Available Abstract Background Protein remote homology detection and fold recognition are central problems in computational biology. Supervised learning algorithms based on support vector machines are currently one of the most effective methods for solving these problems. These methods are primarily used to solve binary classification problems and they have not been extensively used to solve the more general multiclass remote homology prediction and fold recognition problems. Results We present a comprehensive evaluation of a number of methods for building SVM-based multiclass classification schemes in the context of the SCOP protein classification. These methods include schemes that directly build an SVM-based multiclass model, schemes that employ a second-level learning approach to combine the predictions generated by a set of binary SVM-based classifiers, and schemes that build and combine binary classifiers for various levels of the SCOP hierarchy beyond those defining the target classes. Conclusion Analyzing the performance achieved by the different approaches on four different datasets we show that most of the proposed multiclass SVM-based classification approaches are quite effective in solving the remote homology prediction and fold recognition problems and that the schemes that use predictions from binary models constructed for ancestral categories within the SCOP hierarchy tend to not only lead to lower error rates but also reduce the number of errors in which a superfamily is assigned to an entirely different fold and a fold is predicted as being from a different SCOP class. Our results also show that the limited size of the training data makes it hard to learn complex second-level models, and that models of moderate complexity lead to consistently better results.

  3. Multivariate analysis of quantitative traits can effectively classify rapeseed germplasm

    Directory of Open Access Journals (Sweden)

    Jankulovska Mirjana


    Full Text Available In this study, the use of different multivariate approaches to classify rapeseed genotypes based on quantitative traits is presented. Tree regression analysis, PCA analysis and two-way cluster analysis were applied in order to describe and understand the extent of genetic variability in spring rapeseed genotype-by-trait data. The traits which highly influenced seed and oil yield in rapeseed were successfully identified by the tree regression analysis. The principal predictor for both response variables was number of pods per plant (NP). NP and 1000-seed weight could help in the selection of high-yielding genotypes. High values for both traits and oil content could lead to high oil-yielding genotypes. These traits may serve as indirect selection criteria and can lead to improvement of seed and oil yield in rapeseed. Quantitative traits that explained most of the variability in the studied germplasm were classified using principal component analysis. In this data set, five PCs were identified, out of which the first three explained 63% of the total variance. This facilitated the choice of variables on which the genotypes’ clustering could be based. The two-way cluster analysis simultaneously clustered genotypes and quantitative traits. The final number of clusters was determined using a bootstrapping technique. This approach provided a clear overview of the variability of the analyzed genotypes. The genotypes that have similar performance regarding the traits included in this study can be easily detected on the heatmap. Genotypes grouped in clusters 1 and 8 had high values for seed and oil yield and a relatively short vegetative growth duration period, and those in cluster 9 combined moderate to low values for vegetative growth duration and moderate to high seed and oil yield. These genotypes should be further exploited and implemented in the rapeseed breeding program. The combined application of these multivariate methods...

  4. Capability of geometric features to classify ships in SAR imagery (United States)

    Lang, Haitao; Wu, Siwen; Lai, Quan; Ma, Li


    Ship classification in synthetic aperture radar (SAR) imagery has become a new hotspot in the remote sensing community for its valuable potential in many maritime applications. Several kinds of ship features, such as geometric features, polarimetric features, and scattering features, have been widely applied to ship classification tasks. Compared with polarimetric and scattering features, which are subject to SAR parameters (e.g., sensor type, incidence angle, polarization) and environmental factors (e.g., sea state, wind, wave, current), geometric features are relatively independent of SAR and environmental factors, and easy to extract stably from SAR imagery. In this paper, the capability of geometric features to classify ships in SAR imagery at various resolutions has been investigated. Firstly, the relationship between geometric feature extraction accuracy and SAR image resolution is analyzed. It shows that the minimum bounding rectangle (MBR) of a ship can be extracted exactly, in terms of absolute precision, by the proposed automatic ship-sea segmentation method. Next, six simple but effective geometric features are extracted to build a ship representation for the subsequent classification task. These six geometric features are length (f1), width (f2), area (f3), perimeter (f4), elongatedness (f5) and compactness (f6). Among them, the two basic features, length (f1) and width (f2), are directly extracted from the MBR of the ship, and the other four are derived from those two basic features. The capability of these geometric features to classify ships is validated on two data sets with different image resolutions. The results show that the performance of ship classification by geometric features alone is close to that of state-of-the-art methods, which is obtained by combining multiple kinds of features, including scattering and geometric features, after a complex feature selection process.
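The six features f1-f6 can be sketched from a binary ship mask; this toy uses the axis-aligned bounding box rather than a true minimum bounding rectangle, and a 4-neighbour boundary count for the perimeter:

```python
import numpy as np

def geometric_features(mask):
    """f1..f6 for a binary ship mask: length, width, area, perimeter,
    elongatedness and compactness. Length/width use the axis-aligned
    bounding box (not a true MBR), which suffices for this toy."""
    rows, cols = np.nonzero(mask)
    length = int(rows.max() - rows.min() + 1)            # f1
    width = int(cols.max() - cols.min() + 1)             # f2
    if width > length:
        length, width = width, length
    area = int(mask.sum())                               # f3
    padded = np.pad(mask, 1)
    # f4: count 4-neighbour transitions from ship to background.
    perim = sum(int((padded[1:-1, 1:-1]
                     & ~np.roll(padded, s, axis=a)[1:-1, 1:-1]).sum())
                for a, s in ((0, 1), (0, -1), (1, 1), (1, -1)))
    elongatedness = length / width                       # f5
    compactness = perim ** 2 / area                      # f6
    return length, width, area, perim, elongatedness, compactness

ship = np.zeros((12, 12), dtype=bool)
ship[3:9, 5:7] = True        # a 6 x 2 rectangular "ship"
print(geometric_features(ship))
```

For the rectangle above the derived features follow directly from f1 and f2: elongatedness 6/2 = 3 and compactness 16²/12, matching the paper's idea that four of the six features are functions of length and width.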

  5. Optimising Laser Tattoo Removal (United States)

    Sardana, Kabir; Ranjan, Rashmi; Ghunawat, Sneha


    Lasers are the standard modality for tattoo removal. Though there are various factors that determine the results, we have divided them into three logical headings: laser-dependent factors, such as the type of laser and beam modifications; tattoo-dependent factors, like size, depth and colour of pigment; and lastly host-dependent factors, primarily the presence of a robust immune response. Modifications of the existing techniques may help achieve a better clinical outcome with minimal risk of complications. This article provides an insight into some of these techniques along with a detailed account of the factors involved in tattoo removal. PMID:25949018

  6. Optimising laser tattoo removal

    Directory of Open Access Journals (Sweden)

    Kabir Sardana


    Full Text Available Lasers are the standard modality for tattoo removal. Though there are various factors that determine the results, we have divided them into three logical headings: laser-dependent factors, such as the type of laser and beam modifications; tattoo-dependent factors, like size, depth and colour of pigment; and lastly host-dependent factors, primarily the presence of a robust immune response. Modifications of the existing techniques may help achieve a better clinical outcome with minimal risk of complications. This article provides an insight into some of these techniques along with a detailed account of the factors involved in tattoo removal.

  7. Plate removal following orthognathic surgery. (United States)

    Little, Mhairi; Langford, Richard Julian; Bhanji, Adam; Farr, David


    The objectives of this study are to determine the removal rates of orthognathic plates used during orthognathic surgery at James Cook University Hospital and describe the reasons for plate removal. 202 consecutive orthognathic cases were identified between July 2004 and July 2012. Demographics and procedure details were collected for these patients. Patients from this group who returned to theatre for plate removal between July 2004 and November 2012 were identified and their notes were analysed for data including reason for plate removal, age, smoking status, sex and time to plate removal. 3.2% of plates were removed with proportionally more plates removed from the mandible than the maxilla. 10.4% of patients required removal of one or more plate. Most plates were removed within the first post-operative year. The commonest reasons for plate removal were plate exposure and infection. The plate removal rates in our study are comparable to those seen in the literature.

  8. Assessment of the optimum degree of Sr{sub 3}Fe{sub 2}MoO{sub 9} electron-doping through oxygen removal: An X-ray powder diffraction and {sup 57}Fe Moessbauer spectroscopy study

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, Carlos A.; Viola, Maria del C. [Area de Quimica General e Inorganica, Departamento de Quimica, Facultad de Quimica, Bioquimica y Farmacia, Universidad Nacional de San Luis, Chacabuco y Pedernera, 5700 San Luis (Argentina); Pedregosa, Jose C., E-mail: [Area de Quimica General e Inorganica, Departamento de Quimica, Facultad de Quimica, Bioquimica y Farmacia, Universidad Nacional de San Luis, Chacabuco y Pedernera, 5700 San Luis (Argentina); Mercader, Roberto C., E-mail: [Departamento de Fisica, IFLP-CONICET, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, C.C. 67, 1900 La Plata (Argentina)


    We describe the preparation and structural characterization, by X-ray powder diffraction (XRPD) and Moessbauer spectroscopy, of three electron-doped perovskites Sr{sub 3}Fe{sub 2}MoO{sub 9-{delta}} with Fe/Mo = 2 obtained from Sr{sub 3}Fe{sub 2}MoO{sub 9}. The compounds were synthesized by topotactic reduction with H{sub 2}/N{sub 2} (5/95) at 600, 700 and 800 {sup o}C. Above 800 {sup o}C the Fe/Mo ratio changes from Fe/Mo = 2 to 1 < Fe/Mo < 2. The structural refinements of the XRPD data for the reduced perovskites were carried out by the Rietveld profile analysis method. The crystal structure of these phases is cubic, space group Fm3-bar m, with cationic disorder at the two different B sites, which can be populated in variable proportions by the Fe atoms. The Moessbauer spectra allowed us to determine the evolution of the different species formed after the treatments at different temperatures and to confirm that the Fe ions in the samples reduced at 600, 700 and 800 {sup o}C are only in the high-spin Fe{sup 3+} electronic state.

  9. Boosting-Based On-Road Obstacle Sensing Using Discriminative Weak Classifiers (United States)

    Adhikari, Shyam Prasad; Yoo, Hyeon-Joong; Kim, Hyongsuk


    This paper proposes an extension of the weak classifiers derived from Haar-like features for use in the Viola-Jones object detection system. These weak classifiers differ from the traditional single-threshold ones in that no specific threshold is needed, and they give a more general solution to the non-trivial task of finding thresholds for the Haar-like features. The proposed extension, based on quadratic discriminant analysis, markedly improves the ability of the weak classifiers to discriminate objects from non-objects. The proposed weak classifiers were evaluated by boosting a single-stage classifier to detect the rear of cars. The experiments demonstrate that the object detector based on the proposed weak classifiers yields higher classification performance with fewer weak classifiers than a detector built with traditional single-threshold weak classifiers. PMID:22163852
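The idea of replacing a single threshold with a quadratic discriminant can be sketched in one feature dimension: fit a Gaussian per class to the feature values and classify by the log-likelihood ratio, which is quadratic in the feature and so can carve out an interval that no single threshold can. A toy illustration, not the paper's exact formulation:

```python
import numpy as np

class QDAWeakClassifier:
    """1-D quadratic-discriminant weak learner for a single Haar-like feature.

    Instead of searching for one threshold, fit a (weighted) Gaussian per
    class and classify by the sign of the log-likelihood ratio.  The ratio
    is quadratic in x, so the positive region may be an interval.
    """
    def fit(self, x, y, w=None):
        w = np.ones_like(x) if w is None else w
        for c in (0, 1):
            m = y == c
            mu = np.average(x[m], weights=w[m])
            var = np.average((x[m] - mu) ** 2, weights=w[m]) + 1e-12
            setattr(self, f"mu{c}", mu)
            setattr(self, f"var{c}", var)
        return self

    def decision(self, x):
        def loglik(x, mu, var):
            return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
        # log p(x|1) - log p(x|0): quadratic in x
        return loglik(x, self.mu1, self.var1) - loglik(x, self.mu0, self.var0)

    def predict(self, x):
        return (self.decision(x) > 0).astype(int)

rng = np.random.default_rng(0)
# non-objects flank the objects on both sides: no single threshold separates them
x0 = np.concatenate([rng.normal(-4, 1, 500), rng.normal(4, 1, 500)])  # class 0
x1 = rng.normal(0, 1, 1000)                                           # class 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(1000, int), np.ones(1000, int)])
clf = QDAWeakClassifier().fit(x, y)
print((clf.predict(x) == y).mean())  # well above the ~0.75 ceiling of one threshold
```

In a boosting loop the sample weights `w` would come from the booster at each round, exactly as with threshold stumps.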

  10. Bayesian classifier applications of airborne hyperspectral imagery processing for forested areas (United States)

    Kozoderov, Vladimir; Kondranin, Timofei; Dmitriev, Egor; Kamentsev, Vladimir


    The pattern recognition problem is outlined in the context of textural and spectral analysis for remote sensing imagery processing. Main attention is paid to a Bayesian classifier that can realize the processing procedures using parallel machine-learning algorithms on high-performance computers. We consider the maximum-a-posteriori principle and the formalism of Markov random fields for describing the neighborhood of pixels belonging to the related classes of objects, with an emphasis on forests of different species and ages. The energy category of the selected classes serves to account for the likelihood measure between the registered radiances and the theoretical distribution functions approximating the remotely sensed data. Optimization procedures are undertaken to solve the pattern recognition problem of texture description for the forest classes, together with finding fine nuances of their spectral distribution in the feature space. As a result, possible redundancy among the channels of the imaging spectrometer due to their correlations is removed. Difficulties arise from differing sampling data when separating pixels that characterize sunlit tops, shaded space and intermediate cases of solar illumination in the hyperspectral images. Such separation of pixels for the forest classes is maintained to enhance recognition accuracy, but the learning ensembles of data need to be agreed for these categories of pixels. We present some results of the Bayesian classifier's applicability for recognizing airborne hyperspectral images using the relevant improvements in separating such pixels for the forest classes, on a test area of 4 × 10 km covered by 13 airborne tracks, each forming images of 500 pixels across the track and from 10,000 to 14,000 pixels along the track. The spatial resolution of each image is close to 1 m from an altitude of about 2 km above ground level. 
The results of the hyperspectral imagery
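The maximum-a-posteriori principle at the core of this abstract (without the Markov-random-field neighborhood term, which the full method also uses) can be sketched as a per-class Gaussian model plus class priors:

```python
import numpy as np

def fit_gaussian_map(X, y):
    """Per-class Gaussian fit plus priors for a maximum-a-posteriori classifier.

    A minimal sketch of the MAP principle in the abstract; the spatial
    neighborhood (MRF) term is omitted, and the data are synthetic.
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (Xc.mean(axis=0), cov, len(Xc) / len(X))
    return params

def predict_map(params, X):
    scores = []
    for mu, cov, prior in params.values():
        d = X - mu
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        # log posterior up to a constant: log prior + Gaussian log-likelihood
        ll = -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet) + np.log(prior)
        scores.append(ll)
    return np.array(list(params))[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(3, 1, (200, 3))])
y = np.array([0] * 200 + [1] * 200)
params = fit_gaussian_map(X, y)
print((predict_map(params, X) == y).mean())  # well-separated toy classes
```

The MRF extension would add a neighborhood-agreement term to each pixel's score before taking the argmax.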

  11. Laser Removal of Protective Treatments on Limestone (United States)

    Gómez-Heras, M.; Rebollar, E.; Alvarez de Buergo, M.; Oujja, M.; Fort, R.; Castillejo, M.

    This work presents an investigation of the laser removal of polymeric materials acting as consolidants and water-repellents on limestone used on buildings of architectural and artistic value. The removal of the consolidant Paraloid B-72 and the water-repellent product Tegosivin HL-100, applied on samples of Colmenar and Bateig limestone, was studied as a function of the laser wavelength, by using the four harmonics of a Q-switched Nd:YAG laser (1064, 532, 355 and 266 nm). Elimination of the coatings and subsequent surface modifications were monitored through colorimetry, roughness measurements and scanning electron microscopy (SEM). The fundamental laser radiation was effective in removing the treatments, although thermal alteration processes were induced on the calcite crystals of the limestone. The best results were obtained by irradiation in the near UV at 355 nm.

  12. Removing the remaining ridges in fingerprint segmentation

    Institute of Scientific and Technical Information of China (English)

    ZHU En; ZHANG Jian-ming; YIN Jian-ping; ZHANG Guo-min; HU Chun-feng


    Fingerprint segmentation is an important step in fingerprint recognition. It usually aims to identify non-ridge regions and unrecoverable low-quality ridge regions and exclude them as background, so as to reduce the time spent on image processing and to avoid detecting false features. Both high- and low-quality ridge regions often contain remaining ridges, afterimages of a previously scanned finger, which should be excluded from the foreground. However, existing segmentation methods generally do not take this case into consideration; the remaining ridge regions are often falsely classified as foreground, producing spurious features and erroneously including unrecoverable regions in the foreground. This paper proposes a two-step fingerprint segmentation method aimed at removing remaining ridge regions from the foreground. In the first step, non-ridge regions and unrecoverable low-quality ridge regions are removed as background; the foreground produced by this step is then further analyzed for possible removal of remaining ridge regions. The proposed method proved effective in avoiding the detection of false ridges and in improving minutiae detection.

  13. Improved Paint Removal Technique (United States)


    [Scanned-table residue: condition of paint surface after 25 and 45 minutes, Test Procedure No. 1] ...the pit so high-volume water flow can be used to flush the pit floor clean at the end of each day. Installation of removable grating is also

  14. A Solid Trap and Thermal Desorption System with Application to a Medical Electronic Nose

    Directory of Open Access Journals (Sweden)

    Xuntao Xu


    Full Text Available In this paper, a solid trap/thermal desorption-based odorant gas condensation system has been designed and implemented for measuring low-concentration odorant gases. The technique was successfully applied to a medical electronic nose system. The developed system consists of a flow control unit, a temperature control unit and a sorbent tube. Theoretical analysis and experimental results indicate that gas condensation, together with the medical electronic nose system, can significantly reduce the detection limit of the nose system and increase its ability to distinguish low-concentration gas samples. In addition, the integrated system can remove the influence of background components and of fluctuations in the operating environment. Even with strong disturbances such as water vapour and ethanol gas, the developed system can classify the test samples accurately.

  15. Classifying Software to Better Support Social Work Practice. (United States)

    Nurius, Paula; Cnaan, Ram A.


    Notes that, as social work gradually enters electronic information era, interface between social work practice and computer world is often accompanied by disharmony. Presents current classification and terminology of software, identifies drawbacks, and proposes new classification approach based on needs of social workers. Discusses how combination…

  16. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers (United States)

    Anaya, Leticia H.


    In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…

  17. Classifying transcription factor targets and discovering relevant biological features

    Directory of Open Access Journals (Sweden)

    DeLisi Charles


    Full Text Available Abstract Background An important goal in post-genomic research is discovering the network of interactions between transcription factors (TFs) and the genes they regulate. We have previously reported the development of a supervised-learning approach to TF target identification, and used it to predict targets of 104 transcription factors in yeast. We now include a new sequence conservation measure, expand our predictions to include 59 new TFs, introduce a web-server, and implement an improved ranking method to reveal the biological features contributing to regulation. The classifiers combine 8 genomic datasets covering a broad range of measurements including sequence conservation, sequence overrepresentation, gene expression, and DNA structural properties. Principal Findings (1) Application of the method yields an amplification of information about yeast regulators. The ratio of total targets to previously known targets is greater than 2 for 11 TFs, with several having larger gains: Ash1 (4), Ino2 (2.6), Yaf1 (2.4), and Yap6 (2.4). (2) Many predicted targets for TFs match well with the known biology of their regulators. As a case study we discuss the regulator Swi6, presenting evidence that it may be important in the DNA damage response, and that the previously uncharacterized gene YMR279C plays a role in DNA damage response and perhaps in cell-cycle progression. (3) A procedure based on recursive feature elimination is able to uncover, from the large initial data sets, those features that best distinguish targets for any TF, providing clues relevant to its biology. An analysis of Swi6 suggests a possible role in lipid metabolism, and more specifically in metabolism of ceramide, a bioactive lipid currently being investigated for anti-cancer properties. (4) An analysis of global network properties highlights the transcriptional network hubs; the factors which control the most genes and the genes which are bound by the largest set of regulators. Cell-cycle and
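The recursive-feature-elimination step described in finding (3) can be sketched generically: fit a model, drop the weakest feature, repeat. A minimal illustration with a linear least-squares fit on synthetic data; the actual classifiers and genomic features in the paper are far richer:

```python
import numpy as np

def rfe_ranking(X, y, n_keep=2):
    """Recursive feature elimination sketch: repeatedly fit a linear model
    and drop the feature with the smallest absolute weight.

    Illustrative stand-in for the paper's procedure, using a plain
    least-squares fit of labels coded in {-1, +1}.
    """
    idx = list(range(X.shape[1]))
    eliminated = []
    while len(idx) > n_keep:
        Xs = X[:, idx]
        w, *_ = np.linalg.lstsq(Xs, 2 * y - 1, rcond=None)
        weakest = int(np.argmin(np.abs(w)))   # least informative surviving feature
        eliminated.append(idx.pop(weakest))
    return idx, eliminated  # surviving features, elimination order

rng = np.random.default_rng(2)
n = 500
informative = rng.normal(size=(n, 2))         # two features that drive the label
noise = rng.normal(size=(n, 3))               # three irrelevant distractors
y = (informative.sum(axis=1) > 0).astype(int)
X = np.hstack([informative, noise])
kept, order = rfe_ranking(X, y, n_keep=2)
print(sorted(kept))  # the two informative features survive
```

The elimination order itself is the feature ranking, which is what reveals the biologically relevant features in the abstract's setting.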

  18. Wreck finding and classifying with a sonar filter (United States)

    Agehed, Kenneth I.; Padgett, Mary Lou; Becanovic, Vlatko; Bornich, C.; Eide, Age J.; Engman, Per; Globoden, O.; Lindblad, Thomas; Lodgberg, K.; Waldemark, Karina E.


    Sonar detection and classification of sunken wrecks and other objects is of keen interest to many. This paper describes the use of neural networks (NNs) for locating, classifying and determining the alignment of objects on a lakebed in Sweden. A complex program for data preprocessing and visualization was developed. Part of this program, the Sonar Viewer, facilitates training and testing of the NNs using (1) the MATLAB Neural Networks Toolbox for multilayer perceptrons with backpropagation (BP) and (2) the neural-network O-Algorithm (OA) developed by Age Eide and Thomas Lindblad. Comparison of the performance of the two neural-network approaches indicates that, for this data, BP generalizes better than OA, but use of OA eliminates the need for training on non-target (lake bed) images. The OA algorithm does not work well with the smaller ships. Increasing the resolution to counteract this problem would slow down processing and require interpolation to suggest data values between the actual sonar measurements. In general, good results were obtained for recognizing large wrecks and determining their alignment. The programs developed provide a useful tool for further study of sonar signals in many environments. Recent developments in pulse-coupled neural network techniques provide an opportunity to extend their use to real-world applications where experimental data are difficult, expensive or time-consuming to obtain.

  19. Estimating the crowding level with a neuro-fuzzy classifier (United States)

    Boninsegna, Massimo; Coianiz, Tarcisio; Trentin, Edmondo


    This paper introduces a neuro-fuzzy system for estimating the crowding level in a scene. Monitoring the number of people present in a given indoor environment is a requirement in a variety of surveillance applications. In the present work, crowding has to be estimated from the image processing of visual scenes collected via a TV camera. A suitable preprocessing of the images, along with an ad hoc feature extraction process, is discussed. Estimation of the crowding level in the feature space is described in terms of a fuzzy decision rule, which relies on the membership of input patterns to a set of partially overlapping crowding classes, including doubt classifications and outliers. A society of neural networks, either multilayer perceptrons or hyper radial basis functions, is trained to model the individual class-membership functions. Integration of the neural nets within the fuzzy decision rule results in an overall neuro-fuzzy classifier. Important topics concerning the generalization ability, robustness, adaptivity and performance evaluation of the system are explored. Experiments with real-world data were carried out, comparing the present approach with statistical pattern recognition techniques, namely linear discriminant analysis and nearest neighbor. The experimental results validate the neuro-fuzzy approach to a large extent. The system is currently working successfully as part of a monitoring system in the Dinegro underground station in Genoa, Italy.

  20. A system-awareness decision classifier to automated MSN forensics (United States)

    Chu, Yin-Teshou Tsao; Fan, Kuo-Pao; Cheng, Ya-Wen; Tseng, Po-Kai; Chen, Huan; Cheng, Bo-Chao


    Data collection is the most important stage in network forensics, but in resource-constrained situations a good evidence-collection mechanism is required to provide effective event collection in a high-traffic network environment. In the literature, a few network forensic tools offer MSN-Messenger behavior reconstruction; moreover, they have no classification strategy at the collection stage for when the system becomes saturated. The emphasis of this paper is to address these shortcomings and to propose a solution that selects a better classification in order to ensure the integrity of the evidence at the collection stage in high-traffic network environments. A system-awareness decision classifier (SADC) mechanism is proposed in this paper. The MSN-shot sensor is able to adjust the amount of data to be collected according to the current system status and to preserve evidence integrity as much as possible according to the file format and the current system status. Analytical results show that the proposed SADC implementing selective collection (SC) consumes less cost than full collection (FC) under heavy-traffic scenarios. With the deployment of the proposed SADC mechanism, we believe that MSN-shot will be able to reconstruct MSN-Messenger behaviors faithfully in upcoming next-generation networks.

  1. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong


    Full Text Available We introduce a novel approach to detecting salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses logistic regression to map a regional feature vector to a saliency score. Four saliency cues are used in our approach: color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of a segmented region in the image. By mapping the four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
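The logistic-regression step that maps a regional feature vector to a saliency score can be sketched as follows; the features and labels here are synthetic stand-ins for the paper's four cues:

```python
import numpy as np

def train_saliency_scorer(F, s, lr=0.5, steps=2000):
    """Logistic regression mapping regional feature vectors to saliency
    scores in [0, 1], as in the abstract's regression step.

    Plain gradient descent on the cross-entropy loss; feature semantics
    (contrast, center prior, ...) are assumptions for this sketch.
    """
    F = np.hstack([F, np.ones((len(F), 1))])   # append bias column
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-F @ w))       # sigmoid saliency score
        w -= lr * F.T @ (p - s) / len(F)       # cross-entropy gradient step
    return w

def saliency_score(w, F):
    F = np.hstack([F, np.ones((len(F), 1))])
    return 1.0 / (1.0 + np.exp(-F @ w))

rng = np.random.default_rng(3)
F = rng.normal(size=(400, 4))                     # four regional cues per region
s = (F[:, 0] + 0.5 * F[:, 1] > 0).astype(float)   # synthetic salient/background labels
w = train_saliency_scorer(F, s)
print(((saliency_score(w, F) > 0.5) == s).mean())
```

In the full method this scoring is repeated per abstraction level and the resulting saliency maps are fused.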

  2. Not Color Blind Using Multiband Photometry to Classify Supernovae

    CERN Document Server

    Poznanski, D; Maoz, D; Filippenko, A V; Leonard, D C; Matheson, T; Poznanski, Dovi; Gal-Yam, Avishay; Maoz, Dan; Filippenko, Alexei V.; Leonard, Douglas C.; Matheson, Thomas


    Large numbers of supernovae (SNe) have been discovered in recent years, and many more will be found in the near future. Once discovered, further study of a SN and its possible use as an astronomical tool (e.g., a distance estimator) require knowledge of the SN type. Current classification methods rely almost solely on the analysis of SN spectra to determine their type. However, spectroscopy may not be possible or practical when SNe are faint, very numerous, or discovered in archival studies. We present a classification method for SNe based on the comparison of their observed colors with synthetic ones, calculated from a large database of multi-epoch optical spectra of nearby events. We discuss the capabilities and limitations of this method. For example, type Ia SNe at redshifts z 100 days) stages. Broad-band photometry through standard Johnson-Cousins UBVRI filters can be useful to classify SNe up to z ~ 0.6. The use of Sloan Digital Sky Survey (SDSS) u'g'r'i'z' filters allows extending our classification m...

  3. A Novel Performance Metric for Building an Optimized Classifier

    Directory of Open Access Journals (Sweden)

    Mohammad Hossin


    Full Text Available Problem statement: Typically, the accuracy metric is applied to optimize heuristic or stochastic classification models. However, using the accuracy metric might lead the search to sub-optimal solutions because of its less discriminating values, and it is also not robust to changes in class distribution. Approach: To overcome these detrimental effects, we propose a novel performance metric that combines the beneficial properties of the accuracy metric with extended recall and precision metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naive stochastic classification algorithm, the Monte Carlo Sampling (MCS) algorithm, trained with the OARP metric is able to obtain better predictive results than one trained with the conventional accuracy metric. Additionally, a t-test analysis shows a clear advantage of the MCS model trained with the OARP metric over the accuracy metric alone for all binary data sets. Conclusion: The experiments prove that the OARP metric leads stochastic classifiers such as MCS toward a better training model, which in turn improves the predictive results of any heuristic or stochastic classification model.
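The motivation for combining accuracy with recall and precision is easy to demonstrate: two predictors can tie on accuracy while differing sharply on recall and precision. A small illustration (the OARP combination itself is not reproduced here):

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, precision, and recall from binary label lists.

    Illustrates the abstract's point: predictors can tie on accuracy yet
    differ on precision/recall, which is the tie-breaking information the
    proposed OARP metric exploits.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return acc, prec, rec

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
a = [1, 1, 1, 0, 0, 0, 0, 1]   # one false negative, one false positive
b = [1, 1, 0, 0, 0, 0, 0, 0]   # two false negatives, no false positives
print(confusion_metrics(y_true, a))  # (0.75, 0.75, 0.75)
print(confusion_metrics(y_true, b))  # (0.75, 1.0, 0.5)
```

Accuracy alone cannot rank `a` against `b`; any metric that also sees precision and recall can.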

  4. Pulmonary nodule detection using a cascaded SVM classifier (United States)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias


    Automatic detection of lung nodules in chest CT has been researched intensively over the last decades, also resulting in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases have now become available and can be used for algorithm development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to perform two classification tasks sequentially in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection; in this way, the chance that a true nodule is falsely rejected as a candidate is reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database and compared against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
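The cascade idea, a deliberately loose first stage so true nodules are almost never discarded, followed by a stricter second stage, can be sketched with placeholder classifiers standing in for the paper's SVMs:

```python
import numpy as np

def cascade_predict(X, loose_thresh, scorer, strict_thresh):
    """Two-stage cascade sketch: a cheap, loose first test keeps nearly
    every true positive (and many false positives); a stricter second
    classifier prunes the survivors.

    The thresholds and the linear scorer are illustrative stand-ins for
    the paper's trained SVM stages.
    """
    out = np.zeros(len(X), dtype=bool)
    stage1 = X[:, 0] > loose_thresh            # cheap pre-selection on one feature
    out[stage1] = scorer(X[stage1]) > strict_thresh
    return out

rng = np.random.default_rng(6)
neg = rng.normal(0.0, 1.0, (1000, 2))          # candidate regions that are not nodules
pos = rng.normal(2.0, 1.0, (50, 2))            # true nodules are rare
X = np.vstack([neg, pos])
y = np.array([0] * 1000 + [1] * 50)
pred = cascade_predict(X, loose_thresh=-0.5,
                       scorer=lambda Z: Z.sum(axis=1), strict_thresh=2.5)
tp = (pred & (y == 1)).sum()
fp = (pred & (y == 0)).sum()
print(tp, fp)  # most nodules kept, most candidates rejected
```

The loose first threshold is what keeps the cascade's overall sensitivity high: almost no true positive is lost before the expensive stage runs.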

  5. Learning multiscale and deep representations for classifying remotely sensed imagery (United States)

    Zhao, Wenzhi; Du, Shihong


    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images of urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to poor classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatially related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned through the hierarchical learning structure, and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracy, especially for urban areas.

  6. Classifying and comparing fundraising performance for nonprofit hospitals. (United States)

    Erwin, Cathleen O


    Charitable contributions are becoming increasingly important to nonprofit hospitals, yet fundraising can sometimes be one of the more troublesome aspects of management for nonprofit organizations. This study utilizes an organizational effectiveness and performance framework to identify groups of nonprofit organizations as a method of classifying organizations for performance evaluation and benchmarking that may be more informative than commonly used characteristics such as organizational age and size. Cluster analysis, ANOVA and chi-square analysis are used to study 401 organizations, which includes hospital foundations as well as nonprofit hospitals directly engaged in fundraising. Three distinct clusters of organizations are identified based on performance measures of productivity, efficiency, and complexity. A general profile is developed for each cluster based upon the cluster analysis variables and subsequent analysis of variance on measures of structure, maturity, and legitimacy as well as selected institutional characteristics. This is one of only a few studies to examine fundraising performance in hospitals and hospital foundations, and is the first to utilize data from an industry survey conducted by the leading general professional association for healthcare philanthropy. It has methodological implications for the study of fundraising as well as practical implications for the strategic management of fundraising for nonprofit hospital and hospital foundations.

  7. Addressing the Challenge of Defining Valid Proteomic Biomarkers and Classifiers

    LENUS (Irish Health Repository)

    Dakna, Mohammed


    Abstract Background The purpose of this manuscript is to provide, based on an extensive analysis of a proteomic data set, suggestions for proper statistical analysis for the discovery of sets of clinically relevant biomarkers. As a tractable example, we define the measurable proteomic differences between apparently healthy adult males and females. We chose urine as the body fluid of interest and CE-MS, a thoroughly validated platform technology, allowing for routine analysis of a large number of samples. The second urine of the morning was collected from apparently healthy male and female volunteers (aged 21-40) in the course of the routine medical check-up before recruitment at the Hannover Medical School. Results We found that the Wilcoxon test is best suited for the definition of potential biomarkers. Adjustment for multiple testing is necessary. Sample size estimation can be performed based on a small number of observations via resampling from pilot data. Machine learning algorithms appear ideally suited to generate classifiers. Assessment of any results in an independent test set is essential. Conclusions Valid proteomic biomarkers for diagnosis and prognosis can only be defined by applying proper statistical data mining procedures. In particular, a justification of the sample size should be part of the study design.
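The recipe in the Results section, a Wilcoxon test per candidate marker followed by adjustment for multiple testing, can be sketched as follows. Benjamini-Hochberg is assumed as the adjustment and the intensities are synthetic; the abstract does not specify either:

```python
import numpy as np
from scipy.stats import ranksums

def discover_biomarkers(males, females, alpha=0.05):
    """Wilcoxon rank-sum test per candidate marker, with Benjamini-Hochberg
    adjustment for multiple testing (an assumed choice of correction).

    Returns the set of marker indices declared differential.
    """
    pvals = np.array([ranksums(males[:, j], females[:, j]).pvalue
                      for j in range(males.shape[1])])
    # Benjamini-Hochberg: accept the largest k with p_(k) <= (k/m) * alpha
    order = np.argsort(pvals)
    m = len(pvals)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    return set(order[:k])

rng = np.random.default_rng(4)
males = rng.normal(0, 1, (40, 20))
females = rng.normal(0, 1, (40, 20))
females[:, 0] += 2.0               # one genuinely sex-dependent marker
print(discover_biomarkers(males, females))  # marker 0 survives the correction
```

The abstract's remaining steps (resampling-based sample-size estimation, classifier training, independent test-set evaluation) would follow downstream of this selection.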

  8. Impacts of classifying New York City students as overweight. (United States)

    Almond, Douglas; Lee, Ajin; Schwartz, Amy Ellen


    US schools increasingly report body mass index (BMI) to students and their parents in annual fitness "report cards." We obtained 3,592,026 BMI reports for New York City public school students for 2007-2012. We focus on female students whose BMI puts them close to their age-specific cutoff for categorization as overweight. Overweight students are notified that their BMI "falls outside a healthy weight" and they should review their BMI with a health care provider. Using a regression discontinuity design, we compare those classified as overweight but near to the overweight cutoff to those whose BMI narrowly earned them a "healthy" BMI grouping. We find that overweight categorization generates small impacts on girls' subsequent BMI and weight. Whereas presumably an intent of BMI report cards was to slow BMI growth among heavier students, BMIs and weights did not decline relative to healthy peers when assessed the following academic year. Our results speak to the discrete categorization as overweight for girls with BMIs near the overweight cutoff, not to the overall effect of BMI reporting in New York City.

  9. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir


    Full Text Available Abstract This paper develops a novel face recognition technique called the Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature at combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  10. Executed Movement Using EEG Signals through a Naive Bayes Classifier

    Directory of Open Access Journals (Sweden)

    Juliano Machado


    Full Text Available Recent years have witnessed a rapid development of brain-computer interface (BCI) technology. An independent BCI is a communication system for controlling a device by human intention, e.g., a computer, a wheelchair or a neuroprosthesis, that depends not on the brain's normal output pathways of peripheral nerves and muscles, but on detectable signals that represent responsive or intentional brain activities. This paper presents a comparative study of the linear discriminant analysis (LDA) and naive Bayes (NB) classifiers for describing both right- and left-hand movement through electroencephalographic (EEG) signal acquisition. For the analysis, we considered the following input features: the energy of segments of a band-pass-filtered signal in the frequency band of the sensorimotor rhythms, and the components of the spectral energy obtained through the Welch method. We also used the common spatial pattern (CSP) filter so as to increase the discriminability between movement classes. Using the database generated by this experiment, we obtained hit rates of up to 70%. The results are compatible with previous studies.
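The naive Bayes side of the comparison can be sketched as a Gaussian NB over band-power-like features; the data below are synthetic stand-ins for the CSP-filtered EEG band powers:

```python
import numpy as np

def fit_gnb(X, y):
    """Gaussian naive Bayes over band-power features, as used to classify
    left- vs right-hand trials in the abstract.

    Each class gets per-feature means and variances plus a prior; feature
    independence is the naive assumption.
    """
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return model

def predict_gnb(model, X):
    classes = list(model)
    ll = np.array([
        np.log(prior)
        - 0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        for mu, var, prior in model.values()
    ])
    return np.array(classes)[np.argmax(ll, axis=0)]

rng = np.random.default_rng(5)
left = rng.normal([1.0, 0.2], 0.3, (150, 2))    # toy mu-band log-powers, left hand
right = rng.normal([0.2, 1.0], 0.3, (150, 2))   # toy mu-band log-powers, right hand
X = np.vstack([left, right])
y = np.array([0] * 150 + [1] * 150)
model = fit_gnb(X, y)
print((predict_gnb(model, X) == y).mean())
```

LDA would differ only in pooling a single shared covariance across the two classes, which is exactly the contrast the study measures.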

  11. Linearly and Quadratically Separable Classifiers Using Adaptive Approach

    Institute of Scientific and Technical Information of China (English)

    Mohamed Abdel-Kawy Mohamed Ali Soliman; Rasha M. Abo-Bakr


    This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in Rn. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen and a hyperplane is constructed such that it separates the chosen n points at a margin ε and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solving quadratically separable classification problems. The basic idea is based on mapping the physical space to another larger one where the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
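The key idea of the quadratic extension, mapping the physical space to a larger one where the classes become linearly separable, can be sketched with a simple perceptron (used here in place of the paper's adaptive algorithm) on a quadratic feature map. The concentric-circle data and the feature map are illustrative assumptions.

```python
import numpy as np

# Points on two concentric circles: not linearly separable in R^2,
# but linearly separable after a quadratic feature map.
ang = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
inner = 0.5 * np.c_[np.cos(ang), np.sin(ang)]   # class -1
outer = 3.0 * np.c_[np.cos(ang), np.sin(ang)]   # class +1
X = np.vstack([inner, outer])
y = np.r_[-np.ones(20), np.ones(20)]

def phi(X):
    # map (x1, x2) -> (x1, x2, x1^2, x2^2, x1*x2, 1)
    return np.c_[X, X ** 2, X[:, 0] * X[:, 1], np.ones(len(X))]

Z = phi(X)
w = np.zeros(Z.shape[1])
for _ in range(500):                      # perceptron epochs
    errors = 0
    for zi, yi in zip(Z, y):
        if yi * (w @ zi) <= 0:            # misclassified: update
            w += yi * zi
            errors += 1
    if errors == 0:                       # converged on separable data
        break

pred = np.sign(Z @ w)
```

Because a hyperplane in the mapped space of the form x1² + x2² = r² separates the circles, the perceptron is guaranteed to converge here.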

  12. Classifying EEG Signals during Stereoscopic Visualization to Estimate Visual Comfort. (United States)

    Frey, Jérémy; Appriou, Aurélien; Lotte, Fabien; Hachet, Martin


    With stereoscopic displays, a sensation of depth that is too strong can impede visual comfort and may result in fatigue or pain. We used electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable conditions from uncomfortable ones during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillation power following stereoscopic object presentation can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) and 74% on average when 7 consecutive variations are measured (up to 93%). Performances are stable (≈62.5%) when a simplified signal processing is used to simulate online analyses or when the number of EEG channels is lessened. This study could lead to adaptive systems that automatically suit stereoscopic displays to users and viewing conditions. For example, it could be possible to match the stereoscopic effect with users' state by modifying the overlap of left and right images according to the classifier output.
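The jump from 63% single-decision accuracy to about 74% over 7 consecutive variations is what majority voting predicts if decisions were independent (real EEG epochs only approximate this); a short calculation makes the connection:

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority of n independent decisions,
    each correct with probability p, is correct (n odd)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range((n // 2) + 1, n + 1))

single = 0.63                      # single-decision accuracy from the paper
pooled = majority_accuracy(single, 7)   # ~0.766 under independence
```

The independence assumption slightly overestimates the gain, which matches the reported 74% being a little below the ~77% this model predicts.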

  13. Using color histograms and SPA-LDA to classify bacteria. (United States)

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano


    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data, and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h by following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best classification result was obtained by using RGB and SPA-LDA, which reached 94% and 100% classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids phenotypic identification of the causative organism using Gram staining, culture, and biochemical proofs. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and low-cost alternative for bacterial identification.
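The feature-extraction step, per-channel color histograms of a plate image, is easy to sketch. Below, a nearest-centroid rule stands in for the paper's SPA-LDA classifier, and the two synthetic "culture plate" color distributions are invented for the example:

```python
import numpy as np

def rgb_histogram(img, bins=8):
    # img: H x W x 3 array of 0-255 values; concatenated per-channel
    # histograms, L1-normalised, as the feature vector.
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

rng = np.random.default_rng(1)

def plate(lo_r, hi_r, lo_g, hi_g):
    # synthetic 32x32 "culture plate" with the given red/green ranges
    r = rng.integers(lo_r, hi_r, (32, 32))
    g = rng.integers(lo_g, hi_g, (32, 32))
    b = rng.integers(0, 100, (32, 32))
    return np.stack([r, g, b], axis=-1)

reds = [plate(150, 256, 0, 100) for _ in range(10)]
greens = [plate(0, 100, 150, 256) for _ in range(10)]

# Nearest-centroid classification in histogram space (SPA-LDA stand-in).
c_red = np.mean([rgb_histogram(p) for p in reds[:5]], axis=0)
c_green = np.mean([rgb_histogram(p) for p in greens[:5]], axis=0)

def classify(img):
    h = rgb_histogram(img)
    return "red" if np.linalg.norm(h - c_red) < np.linalg.norm(h - c_green) else "green"

test_ok = all(classify(p) == "red" for p in reds[5:]) and \
          all(classify(p) == "green" for p in greens[5:])
```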

  14. Two-categorical bundles and their classifying spaces

    DEFF Research Database (Denmark)

    Baas, Nils A.; Bökstedt, M.; Kro, T.A.


    For a 2-category 2C we associate a notion of a principal 2C-bundle. In the case of the 2-category of 2-vector spaces in the sense of M.M. Kapranov and V.A. Voevodsky this gives the 2-vector bundles of N.A. Baas, B.I. Dundas and J. Rognes. Our main result says that the geometric nerve of a good 2-category is a classifying space for the associated principal 2-bundles. In the process of proving this we develop a lot of powerful machinery which may be useful in further studies of 2-categorical topology. As a corollary we get a new proof of the classification of principal bundles. A calculation based on the main theorem shows that the principal 2-bundles associated to the 2-category of 2-vector spaces in the sense of J.C. Baez and A.S. Crans split, up to concordance, as two copies of ordinary vector bundles. When 2C is a cobordism type 2-category we get a new notion of cobordism-bundles which turns out...

  15. Removal of phosphate from aqueous solution with blast furnace slag. (United States)

    Oguz, Ensar


    Blast furnace slag was used to remove phosphate from aqueous solutions. The influence of pH, temperature, agitation rate, and blast furnace slag dosage on phosphate removal was investigated by conducting a series of batch adsorption experiments. In addition, the yield and mechanisms of phosphate removal were explained on the basis of the results of X-ray spectroscopy, measurements of the zeta potential of particles, specific surface area, and scanning electron microscopy (SEM) images of the particles before and after adsorption. The specific surface area of the blast furnace slag was 0.4 m² g⁻¹. Phosphate removal took place predominantly by a precipitation mechanism and weak physical interactions between the surface of the adsorbent and the metallic salts of phosphate. In this study, phosphate removal in excess of 99% was obtained, and it was concluded that blast furnace slag is an efficient adsorbent for the removal of phosphate from solution.
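The two quantities batch adsorption studies like this report, removal percentage and adsorption capacity, follow from a simple mass balance. The numbers below are illustrative only, not measurements from the paper:

```python
def removal_percent(c0, ce):
    """Percent of phosphate removed: (C0 - Ce) / C0 * 100."""
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Amount adsorbed per gram of slag, q = (C0 - Ce) * V / m (mg/g)."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical batch run: 10 mg/L initial, 0.05 mg/L residual,
# 0.5 L of solution, 2 g of slag.
r = removal_percent(10.0, 0.05)                  # 99.5 %
q = adsorption_capacity(10.0, 0.05, 0.5, 2.0)    # 2.4875 mg/g
```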

  16. MISR Level 2 FIRSTLOOK TOA/Cloud Classifier parameters V001 (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Cloud Classifiers, and Support Vector Machine...

  17. 75 FR 733 - Implementation of the Executive Order, ``Classified National Security Information'' (United States)


    ... December 29, 2009 Implementation of the Executive Order, ``Classified National Security Information... entitled, ``Classified National Security Information'' (the ``order''), which substantially advances my... Information Security Oversight Office (ISOO) a copy of the department or agency regulations implementing...


    Directory of Open Access Journals (Sweden)

    M. Safish Mary


    Full Text Available Classification of a large amount of data is a time-consuming process but crucial for analysis and decision making. Radial basis function (RBF) networks are widely used for classification and regression analysis. In this paper, we have studied the performance of RBF neural networks in classifying the sales of cars based on demand, using a kernel density estimation algorithm which produces classification accuracy comparable to that provided by support vector machines. We have proposed a new instance-based data selection method where redundant instances are removed with the help of a threshold, thus improving the time complexity with improved classification accuracy. The instance-based selection of the data set helps reduce the number of clusters formed, thereby reducing the number of centers considered for building the RBF network. Further, the efficiency of the training is improved by applying a hierarchical clustering technique to reduce the number of clusters formed at every step. The paper explains the algorithm used for classification and for conditioning the data. It also explains the complexities involved in classification of sales data for analysis and decision-making.
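A toy sketch of the two steps the abstract describes: threshold-based removal of redundant instances, and an RBF network whose centers are the surviving instances with output weights fitted by least squares. The pruning rule, data, and kernel width are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def prune_redundant(X, y, thr):
    # keep an instance only if no already-kept instance of the same
    # class lies within distance thr (illustrative threshold rule)
    kept = []
    for i in range(len(X)):
        if not any(y[j] == y[i] and abs(X[j] - X[i]) < thr for j in kept):
            kept.append(i)
    return kept

X = np.array([0.0, 0.01, 0.02, 0.9, 5.0, 5.01, 5.9])
y = np.array([0, 0, 0, 0, 1, 1, 1])
kept = prune_redundant(X, y, thr=0.1)   # near-duplicates at 0.0 and 5.0 dropped

# RBF network: kept instances become the centres; output weights are
# fitted by least squares against one-hot class labels.
C = X[kept]
def design(X, C, s=1.0):
    return np.exp(-(X[:, None] - C[None, :]) ** 2 / (2 * s * s))

W, *_ = np.linalg.lstsq(design(X, C), np.eye(2)[y], rcond=None)
pred = (design(X, C) @ W).argmax(axis=1)
```

Fewer centers mean a smaller design matrix, which is exactly the time-complexity gain the paper claims from instance selection.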

  19. Memory based active contour algorithm using pixel-level classified images for colon crypt segmentation. (United States)

    Cohen, Assaf; Rivlin, Ehud; Shimshoni, Ilan; Sabo, Edmond


    In this paper, we introduce a novel method for detection and segmentation of crypts in colon biopsies. Most of the approaches proposed in the literature try to segment the crypts using only the biopsy image without understanding the meaning of each pixel. The proposed method differs in that we segment the crypts using an automatically generated pixel-level classification image of the original biopsy image and handle the artifacts due to the sectioning process and variance in color, shape and size of the crypts. The biopsy image pixels are classified into nuclei, immune system, lumen, cytoplasm, stroma and goblet cells. The crypts are then segmented using a novel active contour approach, where the external force is determined by the semantics of each pixel and the model of the crypt. The active contour is applied for every lumen candidate detected using the pixel-level classification. Finally, a false-positive crypt elimination process is performed to remove segmentation errors. This is done by measuring their adherence to the crypt model using the pixel-level classification results. The method was tested on 54 biopsy images containing 4944 healthy and 2236 cancerous crypts, resulting in 87% detection of the crypts with 9% false-positive segments (segments that do not represent a crypt). The segmentation accuracy of the true-positive segments is 96%.
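The false-positive elimination step, scoring how well a candidate segment adheres to the crypt model using the pixel-level classes, can be sketched as a simple fraction-of-consistent-pixels measure. The class labels, the 0.8 threshold, and the toy maps below are invented for illustration; the paper's actual adherence measure is richer.

```python
import numpy as np

def adherence_score(candidate_mask, class_map, lumen_label=2, ring_labels=(0,)):
    """Toy adherence measure: fraction of pixels inside a candidate
    segment whose pixel-level class is consistent with the crypt model
    (a lumen surrounded by nuclei). Labels are illustrative."""
    inside = class_map[candidate_mask]
    good = np.isin(inside, (lumen_label,) + ring_labels)
    return good.mean()

# 8x8 toy pixel-classification map: stroma (5) background,
# nuclei (0) ring, lumen (2) core.
class_map = np.full((8, 8), 5)
class_map[2:6, 2:6] = 0          # nuclei ring
class_map[3:5, 3:5] = 2          # lumen core

true_candidate = np.zeros((8, 8), bool);  true_candidate[2:6, 2:6] = True
false_candidate = np.zeros((8, 8), bool); false_candidate[0:4, 0:4] = True

s_true = adherence_score(true_candidate, class_map)    # 1.0
s_false = adherence_score(false_candidate, class_map)  # 0.25
keep = s_true > 0.8 and s_false <= 0.8   # threshold is illustrative
```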

  20. Building Keypoint Mappings on Multispectral Images by a Cascade of Classifiers with a Resurrection Mechanism

    Directory of Open Access Journals (Sweden)

    Yong Li


    Full Text Available Inspired by the boosting technique for detecting objects, this paper proposes a cascade structure with a resurrection mechanism to establish keypoint mappings on multispectral images. The cascade structure is composed of four steps by utilizing best bin first (BBF, color and intensity distribution of segment (CIDS, global information and the RANSAC process to remove outlier keypoint matchings. Initial keypoint mappings are built with the descriptors associated with keypoints; then, at each step, only a small number of keypoint mappings of a high confidence are classified to be incorrect. The unclassified keypoint mappings will be passed on to subsequent steps for determining whether they are correct. Due to the drawback of a classification rule, some correct keypoint mappings may be misclassified as incorrect at a step. Observing this, we design a resurrection mechanism, so that they will be reconsidered and evaluated by the rules utilized in subsequent steps. Experimental results show that the proposed cascade structure combined with the resurrection mechanism can effectively build more reliable keypoint mappings on multispectral images than existing methods.
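The cascade-with-resurrection control flow can be sketched independently of the image processing: each stage rejects some mappings with high confidence, and a rejected mapping is revived if every later stage accepts it. The stage rules, field names, and thresholds below are invented stand-ins for the paper's BBF, CIDS, and RANSAC steps.

```python
def cascade_with_resurrection(matches, stages):
    surviving, benched = list(matches), []
    for i, rule in enumerate(stages):
        still = []
        for m in surviving:
            if rule(m):
                still.append(m)
            else:
                benched.append((m, i))   # provisionally rejected at stage i
        surviving = still
    # resurrection: a benched match is revived if all LATER stages accept it
    revived = [m for m, i in benched
               if i + 1 < len(stages) and all(r(m) for r in stages[i + 1:])]
    return surviving + revived

# Illustrative stage rules loosely mirroring descriptor, colour and
# geometric checks; thresholds and fields are made up for the sketch.
stages = [
    lambda m: m["desc_dist"] < 0.8,    # descriptor (BBF-like) check
    lambda m: m["color_dist"] < 0.8,   # colour/intensity (CIDS-like) check
    lambda m: m["geo_err"] < 3.0,      # global geometric (RANSAC-like) check
]
matches = [
    {"id": 1, "desc_dist": 0.2, "color_dist": 0.1, "geo_err": 0.5},  # clean
    {"id": 2, "desc_dist": 0.9, "color_dist": 0.1, "geo_err": 0.5},  # revived
    {"id": 3, "desc_dist": 0.9, "color_dist": 0.9, "geo_err": 9.0},  # out
    {"id": 4, "desc_dist": 0.2, "color_dist": 0.1, "geo_err": 9.0},  # out
]
final_ids = sorted(m["id"] for m in cascade_with_resurrection(matches, stages))
```

Match 2 shows the mechanism: rejected by the first rule, it survives because the colour and geometric checks both accept it.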

  1. Hard electronics; Hard electronics

    Energy Technology Data Exchange (ETDEWEB)



    In the fields of power conversion devices and broadcasting/communication amplifiers, high power, high frequency and low losses are desirable. Further, for electronic elements in aerospace/aeronautical/geothermal surveys, etc., heat resistance up to 500°C is required. Devices which meet such hard specifications are called hard electronic devices. However, with Si, which is at the core of present electronics, the specifications cannot be fully fulfilled because of restrictions arising from its physical properties. Accordingly, taking up the new device materials/structures necessary to construct hard electronics, technologies to develop these to the level of ICs were examined and studied. They are: a technology to make devices/ICs of new semiconductors such as SiC, diamond, etc., which can handle higher temperature, higher power and higher frequency than Si and can also reduce losses; a vacuum microelectronics technology to make devices of hard semiconductor materials, using ultra-micro/high-luminance electron emitters based on the negative electron affinity which diamond, etc. possess; and a technology to make devices of oxides which have various electric properties. 321 refs., 194 figs., 8 tabs.

  2. Investigations in gallium removal

    Energy Technology Data Exchange (ETDEWEB)

    Philip, C.V.; Pitt, W.W. [Texas A and M Univ., College Station, TX (United States); Beard, C.A. [Amarillo National Resource Center for Plutonium, TX (United States)


    Gallium present in weapons plutonium must be removed before it can be used for the production of mixed-oxide (MOX) nuclear reactor fuel. The main goal of the preliminary studies conducted at Texas A and M University was to assist in the development of a thermal process to remove gallium from a gallium oxide/plutonium oxide matrix. This effort is being conducted in close consultation with the Los Alamos National Laboratory (LANL) personnel involved in the development of this process for the US Department of Energy (DOE). Simple experiments were performed on gallium oxide, and cerium-oxide/gallium-oxide mixtures, heated to temperatures ranging from 700–900°C in a reducing environment, and a method for collecting the gallium vapors under these conditions was demonstrated.

  3. Feature Selection and Classifier Development for Radio Frequency Device Identification (United States)


    conditioning) operations [143], smart metering [154–156], electricity theft detection [48, 157], smart homes and smart appliances [158, 159], waste-water...Kinney, R. Alfvin and J. Gilb, "IEEE 802.15 Wireless Personal Area Networks (WPANs) Operations Manual," Institute of Electrical and Electronics... electricity theft using smart meters in AMI," Seventh International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, pp. 176-182, 2012

  4. Strategies for Transporting Data Between Classified and Unclassified Networks (United States)


    Uses and Limitations of Unidirectional Network Bridges in a Secure Electronic Commerce Environment,” paper presented at the INC 2004 Conference...research. Among guards, the trusted information system Radiant Mercury appears promising. Further research is required in order to select an...Off-The-Shelf (COTS)]: Net Optics Tap 4 Guard (GOTS): Radiant Mercury 4 Guard (GOTS): Information Support Server Environment Guard 5 Guard (COTS

  5. Runaway Rubber Removal (United States)


    High hysteresis Good traction characteristic Poor affinity for blending Polyisoprene PI Strong wear resistance Very similar to natural rubber Low... with a "base of cresylic acid and a blend of benzene, with a synthetic detergent for a wetting agent" are recommended [8]. For AC runways, expect the runway to change after rubber is removed? Response Maintenance Operations Pilots Improved Skid Resistance! Braking Action/Friction

  6. Facilities removal working group

    Energy Technology Data Exchange (ETDEWEB)



    This working group's first objective is to identify major economic, technical, and regulatory constraints on operator practices and decisions relevant to offshore facilities removal. Then, the group will try to make recommendations as to regulatory and policy adjustments, additional research, or process improvements and/or technological advances that may be needed to improve the efficiency and effectiveness of the removal process. The working group will focus primarily on issues dealing with Gulf of Mexico platform abandonments. In order to make the working group sessions as productive as possible, the Facilities Removal Working Group will focus on three topics that address a majority of the concerns and/or constraints relevant to facilities removal. The three areas are: (1) Explosive Severing and its Impact on Marine Life, (2) Pile and Conductor Severing, and (3) Deep Water Abandonments. This paper will outline the current state of practice in the offshore industry, identifying current regulations and specific issues encountered when addressing each of the three main topics above. The intent of the paper is to highlight potential issues for panel discussion, not to provide a detailed review of all data relevant to the topic. Before each panel discussion, key speakers will review data and information to facilitate development and discussion of the main issues of each topic. Please refer to the attached agenda for the workshop format, key speakers, presentation topics, and panel participants. The goal of the panel discussions is to identify key issues for each of the three topics above. The working group will also make recommendations on how to proceed on these key issues.

  7. 48 CFR 52.227-10 - Filing of Patent Applications-Classified Subject Matter. (United States)


    ... Applications-Classified Subject Matter. 52.227-10 Section 52.227-10 Federal Acquisition Regulations System... Text of Provisions and Clauses 52.227-10 Filing of Patent Applications—Classified Subject Matter. As prescribed at 27.203-2, insert the following clause: Filing of Patent Applications—Classified Subject...

  8. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah


    Full Text Available Free-radical gene classification of cancer diseases is a challenging task in biomedical data engineering. Various classifiers have been used to improve gene-selection-based classification of cancer diseases, but individual classifiers are not well validated, so an ensemble classifier combining a neural network with a random forest is used for cancer gene classification. The random forest is itself an ensemble technique, in which class decisions are aggregated over the leaf nodes of its constituent trees. In this paper we combine a neural network with a random forest ensemble classifier to classify gene selections for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble methods, which follow the input-output paradigm of neural networks and select ensemble members from a set of neural network classifiers; here, the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of our proposed method we used UCI cancer disease data sets for classification. Our experimental results show better performance in comparison with random forest classification alone.
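The final combination step of such an ensemble, merging the posterior estimates of a random forest and a neural network, is commonly done by (weighted) soft voting. The sketch below is a generic combiner, not the authors' method, and the posterior values are hypothetical:

```python
def soft_vote(prob_sets, weights=None):
    """Average class-probability estimates from several classifiers
    (e.g. a random forest and a neural network) and pick the argmax."""
    n = len(prob_sets)
    weights = weights or [1.0 / n] * n
    n_classes = len(prob_sets[0])
    avg = [sum(w * p[c] for w, p in zip(weights, prob_sets))
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Hypothetical posteriors for one gene-expression sample:
rf_probs = [0.60, 0.40]    # the random forest leans to class 0
nn_probs = [0.30, 0.70]    # the neural network leans to class 1
label, avg = soft_vote([rf_probs, nn_probs])   # averaged: [0.45, 0.55]
```

Soft voting lets a confident member outweigh an uncertain one, which is one reason diverse ensembles outperform their individual members.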

  9. Learning Bayesian network classifiers for credit scoring using Markov Chain Monte Carlo search

    NARCIS (Netherlands)

    Baesens, B.; Egmont-Petersen, M.; Castelo, R.; Vanthienen, J.


    In this paper, we will evaluate the power and usefulness of Bayesian network classifiers for credit scoring. Various types of Bayesian network classifiers will be evaluated and contrasted including unrestricted Bayesian network classifiers learnt using Markov Chain Monte Carlo (MCMC) search. The exp

  10. Electronic Cigarettes (United States)

    ... New FDA Regulations Electronic Cigarettes Electronic cigarettes (e-cigarettes) are battery-operated products designed ... more about: The latest news and events about electronic cigarettes on this FDA page Electronic cigarette basics ...

  11. A GIS semiautomatic tool for classifying and mapping wetland soils (United States)

    Moreno-Ramón, Héctor; Marqués-Mateu, Angel; Ibáñez-Asensio, Sara


    Wetlands are among the most productive and biodiverse ecosystems in the world. Water is the main resource and controls the relationships between the agents and factors that determine the quality of the wetland. However, vegetation, wildlife and soils are also essential factors for understanding these environments. Soils have possibly been the least studied resource, owing to their sampling problems, and as a result wetland soils have sometimes been classified only broadly. The traditional methodology states that homogeneous soil units should be based on the five soil-forming factors. A problem can appear when the variation of one soil-forming factor is too small to differentiate a change in soil units, or when another factor, such as a fluctuating water table, is not taken into account. This is the case of the Albufera of Valencia, a coastal wetland located on the mid-eastern coast of the Iberian Peninsula (Spain). The saline water table fluctuates throughout the year and generates differences in soils. The objectives of this study were therefore to establish a reliable methodology that avoids these problems and to develop a GIS tool that would allow us to define homogeneous soil units in wetlands. This step is essential for the soil scientist, who has to decide the number of soil profiles in a study. The research was conducted with data from 133 soil pits of a previous study in the wetland, in which soil parameters of 401 samples (organic carbon, salinity, carbonates, n-value, etc.) were analysed. In a first stage, GIS layers were generated according to depth, using the Bayesian Maximum Entropy method. Subsequently, a program based on decision-tree algorithms was designed in a GIS environment. The goal of this tool was to create a single layer for each soil variable, according to the different diagnostic criteria of Soil Taxonomy (properties, horizons and diagnostic epipedons). At the end, the program

  12. Locating and classifying defects using an hybrid data base

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Aviles, A; Diaz Pineda, A [Tecnologico de Estudios Superiores de Coacalco. Av. 16 de Septiembre 54, Col. Cabecera Municipal. C.P. 55700 (Mexico); Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G [Instituto Politecnico Nacional. ESIME-SEPI. Unidad Profesional 'Adolfo Lopez Mateos' Edificio 5, 30 Piso, Colonia Lindavista. Gustavo A. Madero. 07738 Mexico D.F. (Mexico); Durodola, J F [School of Technology, Oxford Brookes University, Headington Campus, Gipsy Lane, Oxford OX3 0BP (United Kingdom); Beltran Fernandez, J A


    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of them were plain and the other half had a notch (3 mm x 4 mm) close to the defect area (19 mm x 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid database was developed with numerical and experimental results. Synthetic data were generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in each bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated; half of them were plain and the other half had a notch. All the input data were classified in two groups, each with 230 cases corresponding to one of the two sorts of voids mentioned above. On the other hand, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases with one void were obtained experimentally; the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic database. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that localization and classification performance was reduced as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid database was used.
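The core inverse idea, classifying a defect from its first five natural frequencies, can be illustrated with a nearest-neighbour rule in place of the paper's ANN/ANFIS pipeline. All frequency values and the class assignment below are hypothetical:

```python
import numpy as np

def nearest_neighbour(x, X_train, y_train):
    # classify a frequency vector by its closest training example
    d = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[int(np.argmin(d))])

# Hypothetical first five natural frequencies (Hz) for bars with a
# 2 mm void (class 0) and a 4 mm void (class 1); values are made up.
X_train = np.array([
    [120.0, 330.0, 640.0, 1050.0, 1560.0],   # 2 mm void
    [119.5, 329.0, 638.0, 1047.0, 1555.0],   # 2 mm void
    [115.0, 321.0, 625.0, 1030.0, 1530.0],   # 4 mm void
    [114.5, 320.0, 623.0, 1028.0, 1526.0],   # 4 mm void
])
y_train = np.array([0, 0, 1, 1])

query = np.array([119.8, 329.5, 639.0, 1048.0, 1557.0])
pred = nearest_neighbour(query, X_train, y_train)
```

An ANN trained by backpropagation learns a smoother version of the same frequency-to-defect mapping, which is why hybrid numerical/experimental databases improve it.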

  13. Multimodal fusion of polynomial classifiers for automatic person recognition (United States)

    Broun, Charles C.; Zhang, Xiaozheng


    With the prevalence of the information age, privacy and personalization are forefront in today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late
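A polynomial classifier in this sense expands the input features into polynomial terms and fits linear output weights, here by least squares against ±1 targets. The XOR data below is a standard toy problem (not from the paper) showing why the second-order terms matter: a purely linear classifier cannot solve it.

```python
import numpy as np

def poly2(X):
    # second-order polynomial expansion of 2-D features
    x1, x2 = X[:, 0], X[:, 1]
    return np.c_[np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([-1, 1, 1, -1], float)        # XOR: not linearly separable

# fit linear output weights in the expanded space by least squares
w, *_ = np.linalg.lstsq(poly2(X), y, rcond=None)
pred = np.sign(poly2(X) @ w)
```

The cross-term x1*x2 is what lets the expanded-space hyperplane realise the XOR decision boundary.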


    Directory of Open Access Journals (Sweden)

    B. Surendiran


    Full Text Available Breast cancer is the primary and most common disease found in women, causing the second-highest rate of death after lung cancer. The digital mammogram is the X-ray of the breast captured for analysis, interpretation and diagnosis. According to the Breast Imaging Reporting and Data System (BIRADS), benign and malignant masses can be differentiated using shape, size and density, which is how radiologists visualize mammograms. According to BIRADS mass-shape characteristics, benign masses tend to be round, oval or lobular in shape, and malignant masses are lobular or irregular in shape. Measuring regular and irregular shapes mathematically is found to be a difficult task, since there is no single measure to differentiate various shapes. In this paper, the malignant and benign masses present in mammograms are classified using Hue, Saturation and Value (HSV) weight-function-based statistical measures. The weight function is robust against noise and captures the degree of gray content of the pixel. The statistical measures use the gray weight value instead of the gray pixel value to effectively discriminate masses. 233 mammograms from the Digital Database for Screening Mammography (DDSM) benchmark dataset have been used. The PASW data mining modeler has been used for constructing a neural network for identifying the importance of statistical measures. Based on the obtained important statistical measures, the C5.0 tree has been constructed with a 60-40 data split. The experimental results are found to be encouraging. Also, the results agree with the standard specified by the American College of Radiology BIRADS system.

  15. Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers (United States)

    Matoug, S.; Abdel-Dayem, A.; Passi, K.; Gross, W.; Alqarni, M.


    Alzheimer's disease (AD) is the most common form of dementia affecting seniors age 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools to create a learning database in the first step and to predict the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease, which is the most critical brain disease for the senior population. Advanced medical imaging such as the volumetric MRI can detect changes in the size of brain regions due to the loss of the brain tissues. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists in detecting and staging the disease. In the present investigation, we present a pseudo-automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs segmentation in order to detect the region of brain's ventricle, generates a feature vector that characterizes this region, creates an SQL database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we have used the MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
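The feature-generation step, turning a segmented ventricle region on a middle slice into a numeric vector for the classifier, can be sketched with two simple shape descriptors. The feature choice (area and bounding-box extent) is an illustrative assumption; the paper's actual feature vector is not specified here, and the resulting vectors would then be stored and fed to an SVM.

```python
import numpy as np

def ventricle_features(mask):
    """Toy feature vector for a segmented ventricle region on one
    middle slice: area in pixels, and extent (area / bounding box area)."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return np.array([area, area / (h * w)], float)

# toy binary segmentation of a 10x10 slice with a 4x4 "ventricle"
mask = np.zeros((10, 10), bool)
mask[3:7, 2:6] = True
feats = ventricle_features(mask)    # area 16, extent 1.0
```

Ventricular enlargement from tissue loss shows up directly in such area-based features, which is why they are informative for staging.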

  16. The Electron

    Energy Technology Data Exchange (ETDEWEB)

    Thomson, George


    Electrons are elementary particles of atoms that revolve around the nucleus and have a negative charge. This booklet discusses how electrons relate to electricity, some applications of electrons, electrons as waves, electrons in atoms and solids, and the electron microscope, among other things.

  17. Hard electronics; Hard electronics

    Energy Technology Data Exchange (ETDEWEB)



    Hard material technologies were surveyed to establish a hard electronic technology which offers superior characteristics under hard operational or environmental conditions as compared with conventional Si devices. The following technologies were separately surveyed: (1) the device and integration technologies of wide-gap hard semiconductors such as SiC, diamond and nitrides, (2) the technology of hard semiconductor devices for vacuum microelectronics, and (3) the technology of hard new material devices based on oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to solve issues difficult to realize with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed and densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  18. Smear layer removal: a qualitative scanning electron microscopy study

    Directory of Open Access Journals (Sweden)

    Maria Auxiliadora Junho de ARAÚJO


    surface was then removed with a water-cooled high-speed carbide bur #56 in order to obtain the smear layer. Different solutions were applied on the dentin surface for 30 seconds, which was then rinsed and dried. The specimens were mounted on metallic holder stubs, prepared, and examined under the scanning electron microscope (DSM 95, Zeiss). Removal of the smear layer by the tested solutions was qualitatively observed, and the solutions were rated according to their performance, in increasing order of extent of removal, as follows: 1. air/water spray; 2. 2% NaF; 3. sodium hypochlorite/anionic detergent; 4. 3% H2O2; 5. sandblasting with 50 µm aluminum oxide; 6. 1.27% acidulated fluoride; 7. 25% polyacrylic acid; 8. 10% phosphoric acid. The best solutions for the removal of the smear layer were the acid ones.

  19. Using Conjugate Gradient Network to Classify Stress Level of Patients.

    Directory of Open Access Journals (Sweden)

    Er. S. Pawar


    Full Text Available Diagnosis of stress is important because it can cause many diseases, e.g., heart disease, headache, migraine, sleep problems, irritability, etc. Diagnosis of stress in patients often involves acquisition of biological signals, for example heart rate, electrocardiogram (ECG), electromyography (EMG) signals, etc. Stress diagnosis using biomedical signals is difficult, and since the biomedical signals are too complex to generate any rule, an experienced person or expert is needed to determine stress levels. Also, it is not feasible to use all the features that are available or possible to extract from the signal. So, relevant features capable of diagnosing stress should be chosen from the extracted features. Electronic devices are increasingly being seen in the field of medicine for diagnosis, therapy, checking of stress levels, etc. The research and development work of medical electronics engineers leads to the manufacturing of sophisticated diagnostic medical equipment needed to ensure good health care. Biomedical engineering combines the design and problem-solving skills of engineering with medical and biological sciences to improve health care diagnosis and treatment.

  20. Using machine learning to classify the diffuse interstellar bands

    CERN Document Server

    Baron, Dalya; Watson, Darach; Yao, Yushu; Cox, Nick L J; Prochaska, J Xavier


    Using over a million and a half extragalactic spectra we study the correlations of the Diffuse Interstellar Bands (DIBs) in the Milky Way. We measure the correlation between DIB strength and dust extinction for 142 DIBs using 24 stacked spectra in the reddening range E(B-V) < 0.2, many more lines than ever studied before. Most of the DIBs do not correlate with dust extinction. However, we find 10 weak and barely studied DIBs with correlations higher than 0.7 with dust extinction, and confirm the high correlation of an additional 5 strong DIBs. Furthermore, we find a pair of DIBs, 5925.9A and 5927.5A, which exhibits significant negative correlation with dust extinction, indicating that their carrier may be depleted on dust. We use Machine Learning algorithms to divide the DIBs into spectroscopic families based on 250 stacked spectra. By removing the dust dependency we study how DIBs follow their local environment. We thus obtain 6 groups of weak DIBs, 4 of which are tightly associated with C2 or CN absorption…

  1. Optical Diagnostics for Classifying Stages of Dental Erythema (United States)

    Davis, Matthew J.; Splinter, Robert; Lockhart, Peter; Brennan, Michael; Fox, Philip C.


    Periodontal disease is a term used to describe an inflammatory disease affecting the tissues surrounding and supporting the teeth. Periodontal diseases are some of the most common chronic disorders affecting humans in all parts of the world. Treatment usually involves the removal of plaque and calculus by scaling and polishing the tooth. In some cases a surgical reduction of hyperplastic tissue may also be required. In addition, periodontitis is a risk factor for systemic disorders such as cardiovascular disease and diabetes. Current detection methods are qualitative, inaccurate, and often do not detect periodontal disease in its early, reversible stages. Therefore, an early detection method should be implemented, identifying the relationship of periodontal disease with erythema. To achieve this purpose we are developing an optical erythema meter to diagnose periodontal disease in its reversible, gingival stage. The discrimination between healthy and diseased gum tissue was made using the reflection of two illuminating wavelengths provided by light-emitting diodes, chosen to target the absorption and reflection spectra characteristic of each tissue type (healthy or diseased, and which kind of disease). Three different color gels could successfully be distinguished with a statistical significance of P < 0.05.

  2. Combination of designed immune based classifiers for ERP assessment in a P300-based GKT

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Moradi


    Full Text Available Constructing a precise classifier is an important issue in pattern recognition tasks. Combining the decisions of several competing classifiers to achieve improved classification accuracy has attracted interest in many research areas. In this study, the Artificial Immune System (AIS), an effective artificial intelligence technique, was used for designing several efficient classifiers. The combination of multiple immune-based classifiers was tested on ERP assessment in a P300-based GKT (Guilty Knowledge Test). Experimental results showed that the proposed classifier, named Compact Artificial Immune System (CAIS), was a successful classification method and could be competitive with other classifiers such as k-nearest neighbour (KNN), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM). Also, in the experiments, it was observed that using decision fusion techniques for multiple classifier combination led to better recognition results. The best recognition rate achieved by CAIS was 80.90%, an improvement over the other classification methods applied in our study.

  3. Spleen removal - open - adults - discharge (United States)

    Splenectomy - adult - discharge; Spleen removal - adult - discharge ... You had surgery to remove your spleen. This operation is called splenectomy . The surgeon made a cut (incision) in the middle of your belly or on the left side ...

  4. Regenerable Contaminant Removal System Project (United States)

    National Aeronautics and Space Administration — The Regenerable Contaminant Removal System (RCRS) is an innovative method to remove sulfur and halide compounds from contaminated gas streams to part-per-billion...

  5. Mower/Litter Removal (United States)


    The Burg Corporation needed to get more power out of the suction system in their Vac 'N Bag grass mower/litter remover. The president submitted a problem statement to the Marshall Space Flight Center Technology Transfer Office, which devised a way to guide heavier items of trash to a point where suction was greatest, and made changes to the impeller and the exhaust port, based on rocket propulsion technology. The improved system is used by highway departments, city governments and park authorities, reducing work time by combining the tasks of grass cutting and vacuuming trash and grass clippings.

  6. Win percentage: a novel measure for assessing the suitability of machine classifiers for biological problems (United States)


    Background Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set. Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. 
Conclusions Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and application but also on the
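The win-percentage idea described in this record can be sketched as a Monte Carlo estimate: draw random feature sets, give each classifier the same set, and count how often each one beats its peers. The toy data, the two simple classifiers, and all names below are illustrative assumptions, not the authors' code or datasets.

```python
import random

random.seed(0)

# Toy dataset: 40 samples, 8 features; only feature 0 carries class signal.
def make_data(n=40, d=8):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = [random.gauss(0, 1) for _ in range(d)]
        x[0] += 2.0 * label  # feature 0 separates the classes
        data.append((x, label))
    return data

def nearest_centroid(train, test, feats):
    """Accuracy of a nearest-centroid rule restricted to the chosen features."""
    cents = {}
    for c in (0, 1):
        pts = [x for x, y in train if y == c]
        cents[c] = [sum(p[f] for p in pts) / len(pts) for f in feats]
    correct = 0
    for x, y in test:
        d = {c: sum((x[feats[i]] - cents[c][i]) ** 2 for i in range(len(feats)))
             for c in (0, 1)}
        correct += (min(d, key=d.get) == y)
    return correct / len(test)

def majority_vote(train, test, feats):
    """Baseline that ignores the features and predicts the majority class."""
    maj = round(sum(y for _, y in train) / len(train))
    return sum(y == maj for _, y in test) / len(test)

# Win percentage: probability a classifier outperforms its peer on a
# finite random sample of feature sets.
train, test = make_data(), make_data()
wins = {"centroid": 0, "majority": 0}
trials = 200
for _ in range(trials):
    feats = random.sample(range(8), 2)  # random feature pair
    a = nearest_centroid(train, test, feats)
    b = majority_vote(train, test, feats)
    if a > b:
        wins["centroid"] += 1
    elif b > a:
        wins["majority"] += 1
win_pct = {k: v / trials for k, v in wins.items()}
print(win_pct)
```

Because many sampled pairs omit the informative feature, neither classifier dominates on every draw, which is exactly the non-zero win probability the measure is meant to capture.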

  7. Experimental Plan for the Cold Demonstration (Scoping Tests) of Glass Removal Methods from a DWPF Melter

    Energy Technology Data Exchange (ETDEWEB)

    Smith, M.E.


    SRS and WVDP currently do not have the capability to size reduce, decontaminate, classify, and dispose of large, failed, highly contaminated equipment. Tanks Focus Area Task 777 was developed to address this problem. The first activity for Task 777 is to develop and demonstrate techniques suitable for removing the solid HLW glass from HLW melters. This experimental plan describes the work that will be performed for this glass removal demonstration.

  8. Mercury removal sorbents

    Energy Technology Data Exchange (ETDEWEB)

    Alptekin, Gokhan


    Sorbents and methods of using them for removing mercury from flue gases over a wide range of temperatures are disclosed. Sorbent materials of this invention comprise oxy- or hydroxy-halogens (chlorides and bromides) of manganese, copper and calcium as the active phase for Hg⁰ oxidation, dispersed on high-surface-area porous supports. In addition to powdered activated carbons (PACs), this support material can comprise commercial ceramic supports such as silica (SiO₂), alumina (Al₂O₃), zeolites and clays. The support material may also comprise oxides of various metals such as iron, manganese, and calcium. The non-carbon sorbents of the invention can be easily injected into the flue gas and recovered in the Particulate Control Device (PCD) along with the fly ash without altering the properties of the by-product fly ash, enabling its use as a cement additive. Sorbent materials of this invention effectively remove both elemental and oxidized forms of mercury from flue gases and can be used at elevated temperatures. The sorbent combines an oxidation catalyst and a sorbent in the same particle to both oxidize the mercury and then immobilize it.

  9. Renal Fibrosis mRNA Classifier: Validation in Experimental Lithium-Induced Interstitial Fibrosis in the Rat Kidney (United States)

    Marti, Hans-Peter; Leader, John; Leader, Catherine; Bedford, Jennifer


    Accurate diagnosis of fibrosis is of paramount clinical importance. A human fibrosis classifier based on metzincins and related genes (MARGS) was described previously. In this investigation, expression changes of MARGS genes were explored and evaluated to examine whether the MARGS-based algorithm has any diagnostic value in a rat model of lithium nephropathy. Male Wistar rats (n = 12) were divided into 2 groups (n = 6). One group was given a diet containing lithium (40 mmol/kg food for 7 days, followed by 60 mmol/kg food for the rest of the experimental period), while a control group (n = 6) was fed a normal diet. After six months, animals were sacrificed and the renal cortex and medulla of both kidneys removed for analysis. Gene expression changes were analysed using 24 GeneChip® Affymetrix Rat Exon 1.0 ST arrays. Statistically relevant genes (p-value < 0.05, fold change > 1.5, t-test) were further examined. Matrix metalloproteinase-2 (MMP2), CD44, and nephroblastoma overexpressed gene (NOV) were overexpressed in the medulla and cortex of lithium-fed rats compared to the control group. TGFβ2 was overrepresented 1.5-fold in the cortex of lithium-fed animals, and 1.3-fold in the medulla of the same animals. In Gene Set Enrichment Analysis (GSEA), both the medulla and cortex of lithium-fed animals showed an enrichment of the MARGS, TGFβ network, and extracellular matrix (ECM) gene sets, while the cortex expression signature was enriched in additional fibrosis-related genes and the medulla was also enriched in immune response pathways. Importantly, the MARGS-based fibrosis classifier was able to classify all samples correctly. Immunohistochemistry and qPCR confirmed the up-regulation of NOV, CD44, and TGFβ2. The MARGS classifier represents a cross-organ and cross-species classifier of fibrotic conditions and may help to design a test to diagnose and to monitor fibrosis. The results also provide evidence for a common pathway in the pathogenesis of fibrosis. PMID:28002484

  10. Classifying Intersex in DSM-5: Critical Reflections on Gender Dysphoria. (United States)

    Kraus, Cynthia


    The new diagnosis of Gender Dysphoria (GD) in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 2013) defines intersex, renamed "Disorders of Sex Development" (DSD), as a specifier of GD. With this formulation, the status of intersex departs from prior editions, especially from the DSM-IV texts that defined intersex as an exclusion criterion for Gender Identity Disorder. Conversely, GD--with or without a DSD--can apply in the same manner to DSD and non-DSD individuals; it subsumes the physical condition under the mental "disorder." This conceptualization, I suggest, is unprecedented in the history of the DSM. In my view, it is the most significant change in the revised diagnosis, and it raises the question of the suitability of psychiatric diagnosis for individuals with intersex/DSD. Unfortunately, this fundamental question was not raised during the revision process. This article examines, historically and conceptually, the different terms provided for intersex/DSD in the DSM in order to capture the significance of the DSD specifier, and the reasons why the risk of stigma and misdiagnosis, I argue, is increased in DSM-5 compared to DSM-IV. The DSM-5 formulation is paradoxically at variance with the clinical literature, with intersex/DSD and transgender being conceived as incommensurable terms in their diagnostic and treatment aspects. In this light, the removal of intersex/DSD from the DSM would seem a better way to achieve the purpose behind the revised diagnosis, which was to reduce stigma and the risk of misdiagnosis, and to provide the persons concerned with healthcare that caters to their specific needs.

  11. Pulsed electron beam precharger

    Energy Technology Data Exchange (ETDEWEB)

    Finney, W.C. (ed.); Shelton, W.N.


    Florida State University is investigating the concept of pulsed electron beams for fly ash precipitation. This report describes the results and data for three of the subtasks of this project, and preliminary work only on the remaining five subtasks. Described are the modification of the precharger for pulsed and DC energization of the anode; installation of the Q/A measurement system; and modification and installation of the pulsed power supply to provide both pulsed and DC energization of the anode. The other tasks include: measurement of the removal efficiency for monodisperse simulated fly ash particles; measurement of particle charge; optimization of the pulse energization schedule for maximum removal efficiency; practical assessment of results; and measurement of the removal efficiency for polydisperse test particles. 15 figs., 1 tab. (CK)

  12. Multivariate models to classify Tuscan virgin olive oils by zone.

    Directory of Open Access Journals (Sweden)

    Alessandri, Stefano


    Full Text Available In order to study and classify Tuscan virgin olive oils, 179 samples were collected. They were obtained from drupes harvested during the first half of November, from three different zones of the Region. The sampling was repeated for 5 years. Fatty acids, phytol, aliphatic and triterpenic alcohols, triterpenic dialcohols, sterols, squalene and tocopherols were analyzed. A subset of variables was considered. They were selected in a preceding work as the most effective and reliable from the univariate point of view. The analytical data were transformed (except for cycloartenol) to compensate for annual variations; the mean related to the East zone was subtracted from each value within each year. Univariate three-class models were calculated and further variables discarded. Then multivariate three-zone models were evaluated, including phytol (which was always selected) and all the combinations of palmitic, palmitoleic and oleic acid, tetracosanol, cycloartenol and squalene. Models including from two to seven variables were studied. The best model shows by-zone classification errors of less than 40%, by-zone within-year classification errors of less than 45%, and a global classification error equal to 30%. This model includes phytol, palmitic acid, tetracosanol and cycloartenol.


  13. Morphological changes of the smear layer after caries removal using different methods: an observation under scanning electron microscope

    Institute of Scientific and Technical Information of China (English)

    方玲; 朱艳莉


    BACKGROUND: Methods to remove the smear layer mainly include mechanical, Carisolv chemomechanical, laser and ozone methods. However, studies of the smear layer after caries removal are limited. At present, there is no research observing the smear layer after caries removal using the Er:YAG laser, Carisolv chemomechanical method and traditional dental turbine. OBJECTIVE: To observe the morphologic changes of the dentin smear layer under a scanning electron microscope after treatment with three methods: Er:YAG laser, Carisolv chemomechanical method and traditional high-speed dental turbine. METHODS: Thirty newly extracted premolars or molars with moderate caries were divided into three groups, 10 teeth in each group. The bottom surface of the tooth cavity was observed with the naked eye after caries removal with the Er:YAG laser, Carisolv chemomechanical method and traditional dental turbine, respectively. Then, the surface was observed using the scanning electron microscope (magnification ×1 000 and ×2 000). RESULTS AND CONCLUSION: The texture of the tooth cavity was hard in all three groups. After caries removal, the dentin surface of the Er:YAG laser group was rough and uneven, and showed a peak-like shape; the dentin surface of the Carisolv chemomechanical group was dark and the bottom was flat; the dentin surface of the traditional dental turbine group was smooth and bright, with an obvious cutting trace. The surface of the Er:YAG laser group did not have a smear layer, and the dentinal tubules were clearly visible. Most dentinal tubules were visible in the Carisolv chemomechanical group, and the surface was covered with a thinner smear layer. The surface of the traditional dental turbine group was covered with a thick smear layer, and the dentinal tubules were not visible. The differences in dentin smear layer cleanliness were significant between the three groups. These results indicate that the Er:YAG laser can effectively remove the smear layer, and is better than Carisolv

  14. Black hole hair removal (United States)

    Banerjee, Nabamita; Mandal, Ipsita; Sen, Ashoke


    Macroscopic entropy of an extremal black hole is expected to be determined completely by its near horizon geometry. Thus two black holes with identical near horizon geometries should have identical macroscopic entropy, and the expected equality between macroscopic and microscopic entropies will then imply that they have identical degeneracies of microstates. An apparent counterexample is provided by the 4D-5D lift relating BMPV black hole to a four dimensional black hole. The two black holes have identical near horizon geometries but different microscopic spectrum. We suggest that this discrepancy can be accounted for by black hole hair — degrees of freedom living outside the horizon and contributing to the degeneracies. We identify these degrees of freedom for both the four and the five dimensional black holes and show that after their contributions are removed from the microscopic degeneracies of the respective systems, the result for the four and five dimensional black holes match exactly.

  15. Black Hole Hair Removal

    CERN Document Server

    Banerjee, Nabamita; Sen, Ashoke


    Macroscopic entropy of an extremal black hole is expected to be determined completely by its near horizon geometry. Thus two black holes with identical near horizon geometries should have identical macroscopic entropy, and the expected equality between macroscopic and microscopic entropies will then imply that they have identical degeneracies of microstates. An apparent counterexample is provided by the 4D-5D lift relating BMPV black hole to a four dimensional black hole. The two black holes have identical near horizon geometries but different microscopic spectrum. We suggest that this discrepancy can be accounted for by black hole hair -- degrees of freedom living outside the horizon and contributing to the degeneracies. We identify these degrees of freedom for both the four and the five dimensional black holes and show that after their contributions are removed from the microscopic degeneracies of the respective systems, the result for the four and five dimensional black holes match exactly.

  16. Ingrown toenail removal. (United States)

    Zuber, Thomas J


    Ingrown toenail is a common problem resulting from various etiologies including improperly trimmed nails, hyperhidrosis, and poorly fitting shoes. Patients commonly present with pain in the affected nail but, with progression, drainage, infection, and difficulty walking may occur. Excision of the lateral nail plate combined with lateral matricectomy is thought to provide the best chance of eradication. The lateral aspect of the nail plate is removed with preservation of the remaining healthy nail plate. Electrocautery ablation is then used to destroy the exposed nail-forming matrix, creating a new lateral nail fold. Complications of the procedure include regrowth of a nail spicule secondary to incomplete matricectomy and postoperative nail bed infection. When performed correctly, the procedure produces the greatest success in the treatment of ingrown nails. Basic soft tissue surgery and electrosurgery experience are prerequisites for learning the technique.

  17. Role of extracellular exopolymers on biological phosphorus removal

    Institute of Scientific and Technical Information of China (English)

    LIU Ya-nan; XUE Gang; YU Shui-li; ZHAO Fang-bo


    Three sequencing batch reactors supplied with different carbon sources were investigated. The system supplied with glucose achieved the best enhanced biological phosphorus removal, although all three reactors were seeded from the same sludge. Through measurement of poly-β-hydroxyalkanoate (PHA) concentration, phosphorus content in sludge and extracellular exopolymers (EPS), with scanning electron microscopy (SEM) combined with energy dispersive spectrometry (EDS), it was found that the biosorption effect of EPS played an important role in phosphorus removal and that the amount of PHA at the end of the anaerobic phase was not the only key factor determining the subsequent phosphorus removal efficiency.

  18. Modeling of Carbon Monoxide Removal by Corona Plasma

    Institute of Scientific and Technical Information of China (English)

    FENG Jingwei; SUN Yabing; ZHAO Dayong; ZHENG Zheng; XU Yuewu; YANG Haifeng; ZHU Hongbiao; ZHOU Xiaoxia


    Modeling of carbon monoxide (CO) removal by a corona plasma was conducted in this study. The purification efficiency of CO was calculated theoretically and the factors affecting the removal of CO were analyzed. The results showed that the main removal mechanisms of CO were direct dissociation by generated high-energy electrons and indirect oxidation by generated hydroxyl radicals. The purification efficiency of CO was dependent on the plasma parameters, indoor air humidity and initial concentration of CO. Good consistency between the theoretical calculation and the experimental results was observed.

  19. Fall Detector Using Discrete Wavelet Decomposition And SVM Classifier

    Directory of Open Access Journals (Sweden)

    Wójtowicz Bartłomiej


    Full Text Available This paper presents the design process and the results of a novel fall detector designed and constructed at the Faculty of Electronics, Military University of Technology. High sensitivity and low false alarm rates were achieved by using four independent sensors of varying physical quantities and sophisticated methods of signal processing and data mining. The manuscript discusses the study background, hardware development, and the alternative algorithms used for sensor data processing and fusion to identify the most efficient solution, and presents the final results from testing the Android application on a smartphone. The test was performed in four 6-h sessions (two sessions with female participants at the age of 28 years, one session with a male participant aged 28 years, and one involving a man at the age of 49 years) and showed correct detection of all 40 simulated falls with only three false alarms. Our results confirmed the sensitivity of the proposed algorithm to be 100% with a nominal false alarm rate (one false alarm per 8 h).
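The detector in this record pairs discrete wavelet decomposition with an SVM classifier. As a rough illustration of the feature-extraction half only, a single-level Haar decomposition splits a signal into a low-frequency approximation and a detail band that isolates transients such as the sharp acceleration spike of a fall; the toy trace and function names below are ours, not the authors'.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail): the approximation keeps the
    low-frequency trend; the detail keeps abrupt transients.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal), 2)]
    return approx, detail

# A flat accelerometer trace with one abrupt jump between samples 6 and 7
# (a crude stand-in for a fall event).
trace = [1.0] * 7 + [5.0] * 9
approx, detail = haar_dwt(trace)
print(detail)  # the only nonzero detail coefficient marks the jump
```

In a full pipeline these detail coefficients (or statistics of them) would become the feature vector fed to the classifier; the Haar transform also preserves signal energy, which makes such features easy to normalise.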

  20. Multi-Stage Feature Selection Based Intelligent Classifier for Classification of Incipient Stage Fire in Building

    Directory of Open Access Journals (Sweden)

    Allan Melvin Andrew


    Full Text Available In this study, an early fire detection algorithm has been proposed based on a low-cost array sensing system, utilising off-the-shelf gas sensors, dust particle sensors and ambient sensors such as temperature and humidity sensors. The odour or “smellprint” emanated from various fire sources and building construction materials at the early stage is measured. For this purpose, odour profile data from five common fire sources and three common building construction materials were used to develop the classification model. Normalised feature extraction of the smellprint data was performed before being subjected to the prediction classifier. These features represent the odour signals in the time domain. The obtained features undergo the proposed multi-stage feature selection technique and, lastly, are further reduced by Principal Component Analysis (PCA), a dimension reduction technique. The hybrid PCA-PNN based approach has been applied to different datasets from the in-house developed system and the portable electronic nose unit. Experimental classification results show that the dimension reduction process performed by PCA has improved the classification accuracy and provided high reliability, regardless of ambient temperature and humidity variation, baseline sensor drift, different gas concentration levels and exposure to different heating temperature ranges.
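The PCA dimension-reduction step mentioned in this record amounts to projecting the feature vectors onto the directions of greatest variance. As a minimal sketch (not the authors' implementation), the first principal component of a toy feature matrix can be found by power iteration on the covariance matrix; the data and names below are illustrative.

```python
def first_pc(data, iters=200):
    """First principal component via power iteration on the covariance matrix.

    Pure-Python sketch of the PCA step; a real system would use a linear
    algebra library and keep several components.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix (d x d)
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # converges to the dominant eigenvector
    return v

# Toy sensor readings: two strongly correlated channels, so the first
# component should point along the diagonal.
data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]]
pc = first_pc(data)
print(pc)
```

Projecting each centered sample onto `pc` would then yield the reduced one-dimensional feature used downstream by the classifier.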

  1. Ambient Electronics (United States)

    Sekitani, Tsuyoshi; Someya, Takao


    We report the recent research progress and future prospects of flexible and printed electronics, focusing on molecular electronic material-based thin-film transistors, which are expected to usher in a new era of electronics.

  2. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin


    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin or it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In the past research works using this approach, the control template parameters and thresholds are restricted to assume only a given finite set of integers, and this is certainly unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing the soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.

  3. A Robust and Fast Computation Touchless Palm Print Recognition System Using LHEAT and the IFkNCN Classifier

    Directory of Open Access Journals (Sweden)

    Haryati Jaafar


    Full Text Available Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method are discussed. A sliding neighborhood operation with local histogram equalization followed by local adaptive thresholding, or LHEAT, approach was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, the improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%.
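The k nearest centroid neighbor (kNCN) rule underlying the IFkNCN classifier in this record selects neighbors greedily so that the centroid of the selected set stays as close as possible to the query, rather than picking the k individually closest points. A minimal, non-fuzzy sketch follows; the toy data and function names are our illustrative assumptions, without the fuzzy weighting and outlier removal of the paper's classifier.

```python
def k_ncn_neighbors(query, points, k):
    """Greedy k nearest centroid neighbors: at each step pick the point
    whose inclusion keeps the centroid of the chosen set closest to the
    query."""
    chosen = []
    remaining = list(range(len(points)))
    d = len(query)
    centroid_sum = [0.0] * d
    for step in range(1, k + 1):
        best, best_dist = None, None
        for idx in remaining:
            cand = [(centroid_sum[j] + points[idx][j]) / step for j in range(d)]
            dist = sum((cand[j] - query[j]) ** 2 for j in range(d))
            if best is None or dist < best_dist:
                best, best_dist = idx, dist
        chosen.append(best)
        remaining.remove(best)
        centroid_sum = [centroid_sum[j] + points[best][j] for j in range(d)]
    return chosen

def k_ncn_classify(query, points, labels, k=3):
    """Majority vote over the k nearest centroid neighbors."""
    votes = [labels[i] for i in k_ncn_neighbors(query, points, k)]
    return max(set(votes), key=votes.count)

# Toy 2-D data: class 0 clustered near (0, 0), class 1 near (4, 4).
pts = [(0, 0), (0, 1), (1, 0), (4, 4), (4, 5), (5, 4)]
lbl = [0, 0, 0, 1, 1, 1]
print(k_ncn_classify((0.5, 0.5), pts, lbl))
```

The centroid criterion makes the chosen neighbors surround the query rather than pile up on one side, which is what gives NCN-style rules their robustness on small training sets.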

  4. Copper removal using electrosterically stabilized nanocrystalline cellulose. (United States)

    Sheikhi, Amir; Safari, Salman; Yang, Han; van de Ven, Theo G M


    Removal of heavy metal ions such as copper using an efficient and low-cost method with low ecological footprint is a critical process in wastewater treatment, which can be achieved in a liquid phase using nanoadsorbents such as inorganic nanoparticles. Recently, attention has turned toward developing sustainable and environmentally friendly nanoadsorbents to remove heavy metal ions from aqueous media. Electrosterically stabilized nanocrystalline cellulose (ENCC), which can be prepared from wood fibers through periodate/chlorite oxidation, has been shown to have a high charge content and colloidal stability. Here, we show that ENCC scavenges copper ions by different mechanisms depending on the ion concentration. When the Cu(II) concentration is low (C0≲200 ppm), agglomerates of starlike ENCC particles appear, which are broken into individual starlike entities by shear and Brownian motion, as evidenced by photometric dispersion analysis, dynamic light scattering, and transmission electron microscopy. On the other hand, at higher copper concentrations, the aggregate morphology changes from starlike to raftlike, which is probably due to the collapse of protruding dicarboxylic cellulose (DCC) chains and ENCC charge neutralization by copper adsorption. Such raftlike structures result from head-to-head and lateral aggregation of neutralized ENCCs as confirmed by transmission electron microscopy. As opposed to starlike aggregates, the raftlike structures grow gradually and are prone to sedimentation at copper concentrations C0≳500 ppm, which eliminates a costly separation step in wastewater treatment processes. Moreover, a copper removal capacity of ∼185 mg g(-1) was achieved thanks to the highly charged DCC polyanions protruding from ENCC. These properties along with the biorenewability make ENCC a promising candidate for wastewater treatment, in which fast, facile, and low-cost removal of heavy metal ions is desired most.

  5. Analysis Electronic Service Quality through E-S-Qual Scale: The Case Study of Nowshahr Hotel

    Directory of Open Access Journals (Sweden)

    Hossein Rezaei Dolatabadi


    Full Text Available The aim of this study is to analyze electronic service quality at the Arsh Hotel in Nowshahr city using the Kano and E-S-Qual models. Given the importance of electronic hotel services and their growing adoption in the country in recent years, hotels must improve the quality of their electronic services to remain competitive, and a purely linear view of service quality is not comprehensive. An integrated E-S-Qual and Kano model is therefore used, which removes the linearity assumption. In the first step, the electronic service quality factors of the Arsh Hotel were evaluated on the basis of the E-S-Qual model to determine the hotel's current practice in providing services, customers' expectations, and their perception of electronic service quality. Considering the gap between customer expectations and the hotel's current practice in providing these services, the services were divided into two categories, weak and strong. In the second step, by integrating the E-S-Qual and Kano models, the service quality factors were classified according to the Kano model to determine the strategic importance, with respect to customer satisfaction, of each electronic service quality feature identified by the E-S-Qual model.

  6. Ensemble regularized linear discriminant analysis classifier for P300-based brain-computer interface. (United States)

    Onishi, Akinari; Natsume, Kiyohisa


    This paper demonstrates the better classification performance of an ensemble classifier using regularized linear discriminant analysis (LDA) for a P300-based brain-computer interface (BCI). An ensemble classifier with LDA is sensitive to a lack of training data because the covariance matrices are estimated imprecisely. One solution to the lack of training data is to employ a regularized LDA, so we employed the regularized LDA for the ensemble classifier of the P300-based BCI. Principal component analysis (PCA) was used for dimension reduction. As a result, the ensemble regularized LDA classifier showed significantly better classification performance than an ensemble un-regularized LDA classifier. The proposed ensemble regularized LDA classifier is therefore robust against a lack of training data.
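
    The regularization step can be sketched as ridge-style shrinkage of the pooled covariance (a minimal numpy illustration, not the BCI pipeline itself): with few samples in a high-dimensional space, the unregularized covariance estimate is near-singular, while the shrunk estimate stays well conditioned.

```python
import numpy as np

def lda_weights(X0, X1, reg=0.1):
    """Fisher LDA direction with ridge-style covariance regularization:
    w = (S + reg * I)^-1 (mu1 - mu0).  reg > 0 keeps the pooled covariance
    invertible when training samples are scarce (the P300 setting)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    S_reg = S + reg * np.eye(S.shape[0])
    return np.linalg.solve(S_reg, mu1 - mu0)

rng = np.random.default_rng(0)
d = 20
# Only 15 samples per class in 20 dimensions: far too little data for an
# unregularized covariance estimate.
X0 = rng.normal(0.0, 1.0, (15, d))
X1 = rng.normal(0.5, 1.0, (15, d))
w = lda_weights(X0, X1, reg=0.5)

# Project onto w and classify against the midpoint of the class means.
scores0, scores1 = X0 @ w, X1 @ w
threshold = (scores0.mean() + scores1.mean()) / 2
acc = ((scores0 < threshold).mean() + (scores1 > threshold).mean()) / 2
```

    In shrinkage LDA the amount of regularization is usually chosen analytically or by cross-validation rather than fixed by hand as here.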

  7. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)


    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  8. The Entire Quantile Path of a Risk-Agnostic SVM Classifier

    CERN Document Server

    Yu, Jin; Zhang, Jian


    A quantile binary classifier uses the rule: classify x as +1 if P(Y = 1|X = x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has been shown that Support Vector Machines (SVMs) in the limit are quantile classifiers with t = 1/2. In this paper, we show that by using asymmetric costs of misclassification, SVMs can be appropriately extended to recover, in the limit, the quantile binary classifier for any t. We then present a principled algorithm to solve the extended SVM classifier for all values of t simultaneously. This has two implications: First, one can recover the entire conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]. Second, we can build a risk-agnostic SVM classifier where the cost of misclassification need not be known a priori. Preliminary numerical experiments show the effectiveness of the proposed algorithm.
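
    The link between asymmetric misclassification costs and the quantile rule can be checked directly (a sketch of the underlying identity, not the paper's SVM path algorithm): the expected-cost-minimizing rule predicts +1 when P(Y = 1|x) >= c_fp / (c_fp + c_fn), so choosing costs c_fp = t and c_fn = 1 - t recovers exactly the threshold t.

```python
# Bayes rule under asymmetric misclassification costs: predict +1 when
# c_fn * p >= c_fp * (1 - p), where p = P(Y = 1 | x), i.e. when
# p >= c_fp / (c_fp + c_fn).  With c_fp = t and c_fn = 1 - t this is the
# quantile rule "predict +1 iff p >= t" from the abstract.

def quantile_threshold(c_fp, c_fn):
    return c_fp / (c_fp + c_fn)

def classify(p, t):
    return 1 if p >= quantile_threshold(t, 1.0 - t) else -1

# Symmetric costs give the familiar t = 1/2 SVM limit.
assert quantile_threshold(1.0, 1.0) == 0.5
# Asymmetric costs (0.3, 0.7) give the t = 0.3 quantile classifier.
assert abs(quantile_threshold(0.3, 0.7) - 0.3) < 1e-12
assert classify(0.35, t=0.3) == 1 and classify(0.25, t=0.3) == -1
```

    Sweeping t over [0, 1] and recording where each point's prediction flips is what recovers the conditional probability P(Y = 1|X = x).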

  9. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    Energy Technology Data Exchange (ETDEWEB)

    Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)


    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for design, building, and implementation. Advances in PSD rely on improvements to the implemented algorithm, which can come from conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
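
    The recommended ROC and GRR reporting can be sketched on synthetic PSD scores (toy Gaussian score distributions, not real detector data): sweep the threshold for the ROC curve, and place the threshold at the (1 - GRR) gamma quantile to read off the neutron detection efficiency.

```python
import numpy as np

def roc_curve(neutron_scores, gamma_scores):
    """Sweep the PSD threshold over all observed scores and return the
    (gamma acceptance, neutron detection) pairs of the ROC curve."""
    ts = np.sort(np.concatenate([neutron_scores, gamma_scores]))
    fpr = np.array([(gamma_scores >= t).mean() for t in ts])
    tpr = np.array([(neutron_scores >= t).mean() for t in ts])
    return fpr, tpr

def detection_at_grr(neutron_scores, gamma_scores, grr=0.01):
    """Neutron detection efficiency at a fixed gamma rejection rate: the
    threshold is placed so only a fraction `grr` of gammas pass it."""
    t = np.quantile(gamma_scores, 1.0 - grr)
    return (neutron_scores >= t).mean()

rng = np.random.default_rng(1)
gammas = rng.normal(0.0, 1.0, 2000)    # toy PSD scores for gamma events
neutrons = rng.normal(3.0, 1.0, 2000)  # toy PSD scores for neutron events
fpr, tpr = roc_curve(neutrons, gammas)
eff = detection_at_grr(neutrons, gammas, grr=0.01)
```

    Reporting the single operating point (GRR, efficiency) alongside the full ROC curve is what makes two PSD algorithms directly comparable.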

  10. Combining classifiers generated by multi-gene genetic programming for protein fold recognition using genetic algorithm. (United States)

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi; Mousavi, Reza


    In this study the problem of protein fold recognition, that is a classification task, is solved via a hybrid of evolutionary algorithms namely multi-gene Genetic Programming (GP) and Genetic Algorithm (GA). Our proposed method consists of two main stages and is performed on three datasets taken from the literature. Each dataset contains different feature groups and classes. In the first step, multi-gene GP is used for producing binary classifiers based on various feature groups for each class. Then, different classifiers obtained for each class are combined via weighted voting so that the weights are determined through GA. At the end of the first step, there is a separate binary classifier for each class. In the second stage, the obtained binary classifiers are combined via GA weighting in order to generate the overall classifier. The final obtained classifier is superior to the previous works found in the literature in terms of classification accuracy.
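
    The weighted-voting combination of per-class binary classifiers can be sketched as follows; the weights here are fixed by hand, standing in for the GA-tuned weights of the paper.

```python
def weighted_vote(predictions, weights, n_classes):
    """Combine class predictions from several classifiers by weighted
    voting: each classifier adds its weight to the class it predicts, and
    the class with the largest total wins.  `weights` stands in for the
    GA-optimized weights described in the abstract."""
    votes = [0.0] * n_classes
    for p, w in zip(predictions, weights):
        votes[p] += w
    return max(range(n_classes), key=votes.__getitem__)

# Three classifiers disagree (0 vs 1); the outcome follows the heavier
# combined weight, which is exactly what the GA search exploits.
assert weighted_vote([0, 1, 1], [0.7, 0.2, 0.2], n_classes=2) == 0
assert weighted_vote([0, 1, 1], [0.3, 0.4, 0.4], n_classes=2) == 1
```

    In the paper this combination is applied twice: once within each class over feature-group classifiers, and once across classes to form the overall classifier.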

  11. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Directory of Open Access Journals (Sweden)

    Yinan Zhang


    Full Text Available Diabetic retinopathy (DR) screening systems raise a cost problem. To further reduce DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high-dimensional space. Both active learning and ensemble techniques elevate the performance of KELM when a small training dataset is used. The committee proposes only the necessary manual work to the doctor, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of the labeled retinal images, while comparative classifiers are trained with 80% of the labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.
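
    A kernel ELM reduces to kernel ridge regression on one-hot targets, beta = (K + I/C)^-1 T; a compact numpy sketch on synthetic two-class data (a toy stand-in, not the screening system's classifier):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: solve beta = (K + I/C)^-1 T on
    one-hot targets T.  C controls regularization, gamma the kernel width."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]              # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        # Rows of the result are per-class scores; argmax gives the class.
        return rbf_kernel(X, self.X, self.gamma) @ self.beta

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
model = KELM(C=10.0, gamma=1.0).fit(X, y)
acc = (model.predict(X).argmax(axis=1) == y).mean()
```

    Training costs one linear solve, which is why KELM is attractive when the active-learning loop retrains the committee repeatedly.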

  12. Hardware Removal in Craniomaxillofacial Trauma (United States)

    Cahill, Thomas J.; Gandhi, Rikesh; Allori, Alexander C.; Marcus, Jeffrey R.; Powers, David; Erdmann, Detlev; Hollenbeck, Scott T.; Levinson, Howard


    Background Craniomaxillofacial (CMF) fractures are typically treated with open reduction and internal fixation. Open reduction and internal fixation can be complicated by hardware exposure or infection. The literature often does not differentiate between these 2 entities; so for this study, we have considered all hardware exposures as hardware infections. Approximately 5% of adults with CMF trauma are thought to develop hardware infections. Management consists of either removing the hardware versus leaving it in situ. The optimal approach has not been investigated. Thus, a systematic review of the literature was undertaken and a resultant evidence-based approach to the treatment and management of CMF hardware infections was devised. Materials and Methods A comprehensive search of journal articles was performed in parallel using MEDLINE, Web of Science, and ScienceDirect electronic databases. Keywords and phrases used were maxillofacial injuries; facial bones; wounds and injuries; fracture fixation, internal; wound infection; and infection. Our search yielded 529 articles. To focus on CMF fractures with hardware infections, the full text of English-language articles was reviewed to identify articles focusing on the evaluation and management of infected hardware in CMF trauma. Each article’s reference list was manually reviewed and citation analysis performed to identify articles missed by the search strategy. There were 259 articles that met the full inclusion criteria and form the basis of this systematic review. The articles were rated based on the level of evidence. There were 81 grade II articles included in the meta-analysis. Result Our meta-analysis revealed that 7503 patients were treated with hardware for CMF fractures in the 81 grade II articles. Hardware infection occurred in 510 (6.8%) of these patients. Of those infections, hardware removal occurred in 264 (51.8%) patients; hardware was left in place in 166 (32.6%) patients; and in 80 (15.6%) cases

  13. Electron cooling (United States)

    Meshkov, I.; Sidorin, A.


    A brief review of the most significant and interesting achievements in the electron cooling method during the last two years is presented. A description of the electron cooling facilities (storage rings and traps in operation or under development) is given, and the applications of the electron cooling method are considered. The following current fields of the method's development are discussed: crystalline beam formation, expansion into middle- and high-energy electron cooling (the Fermilab Recycler Electron Cooler, the BNL cooler-recuperator, cooling with a circulating electron beam, the GSI project), electron cooling in traps, antihydrogen generation, and electron cooling of positrons (the LEPTA project).

  14. Design and evaluation of neural classifiers application to skin lesion classification

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan


    Addresses the design and evaluation of neural classifiers for the problem of skin lesion classification. By using Gauss-Newton optimization for the entropic cost function in conjunction with pruning by Optimal Brain Damage and a new test error estimate, the authors show that this scheme is capable of optimizing the architecture of neural classifiers. Furthermore, error-reject tradeoff theory indicates that the resulting neural classifiers for the skin lesion classification problem are near-optimal.

  15. A self-growing Bayesian network classifier for online learning of human motion patterns


    Yung, NHC; Chen, Z


    This paper proposes a new self-growing Bayesian network classifier for online learning of human motion patterns (HMPs) in dynamically changing environments. The proposed classifier is designed to represent HMP classes based on a set of historical trajectories labeled by unsupervised clustering. It then assigns HMP class labels to current trajectories. Parameters of the proposed classifier are recalculated based on the augmented dataset of labeled trajectories and all HMP classes are according...

  16. An explanatory study on electronic commerce for reverse logistics.

    NARCIS (Netherlands)

    A.I. Kokkinaki; R. Dekker (Rommert); J.A.E.E. van Nunen (Jo); C.P. Pappis (Costas)


    In this paper we consider the role Electronic Commerce plays and can play for Reverse Logistics. After short introductions to electronic commerce and reverse logistics, we give an overview of existing internet sites for reverse logistics. These sites can be classified as electronic markets, supply of used parts and complete reverse logistic solutions.

  17. An explanatory study on electronic commerce for reverse logistics.


    Kokkinaki, A.I.; Dekker, Rommert; Nunen, Jo; Pappis, Costas


    In this paper we consider the role Electronic Commerce plays and can play for Reverse Logistics. After short introductions to electronic commerce and reverse logistics, we give an overview of existing internet sites for reverse logistics. These sites can be classified as electronic markets, supply of used parts and complete reverse logistic solutions. Finally we draw some lines to the future.

  18. Removal of coatings and surfaces on metallic, mineral and ceramic materials

    Energy Technology Data Exchange (ETDEWEB)

    Bach, F.W.; Redeker, C. [Dortmund Univ. (Germany). Inst. for Materials Engineering


    Various techniques for use in decontamination during decommissioning of nuclear facilities are presented. The methods may be classified by their physical effects, namely chemical, electrochemical, mechanical and thermal. A main focus is the dryice-laserbeam-blasting process, by which surfaces of concrete and ceramic materials can be removed. (orig.)

  19. Construction of Classifier Based on MPCA and QSA and Its Application on Classification of Pancreatic Diseases

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang


    Full Text Available A novel method is proposed to establish a classifier which can classify pancreatic images as normal or abnormal. Firstly, the brightness feature is used to construct high-order tensors; then multilinear principal component analysis (MPCA) is used to extract the eigentensors; and finally, the classifier is constructed based on a support vector machine (SVM), with the classifier parameters optimized by a quantum simulated annealing algorithm (QSA). In order to verify the effectiveness of the proposed algorithm, the normal SVM method was chosen as the comparison algorithm. The experimental results show that the proposed method can effectively extract the eigenfeatures and improve the classification accuracy of pancreatic images.
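
    The parameter-optimization step can be illustrated with a classical simulated-annealing skeleton; the quantum variant (QSA) used in the paper modifies the acceptance/tunneling schedule and is not reproduced here. The objective below is a purely illustrative stand-in for a validation-error surface over one hyperparameter.

```python
import math
import random

def simulated_annealing(objective, x0, steps=500, t0=1.0, step=0.5, seed=0):
    """Minimize `objective` over one real parameter: always accept
    improvements, accept worse moves with Boltzmann probability
    exp(-(f_new - f_old) / T) under a linearly cooling temperature T."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9      # linear cooling schedule
        cand = x + rng.gauss(0.0, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy stand-in for tuning one SVM hyperparameter: a quadratic
# "validation-error" surface with its minimum at x = 2.
best, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=0.0)
```

    In the paper's setting the objective would be the SVM's cross-validation error as a function of its kernel and penalty parameters.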

  20. Face Recognition Based on Support Vector Machine and Nearest Neighbor Classifier

    Institute of Scientific and Technical Information of China (English)

    张燕昆; 刘重庆


    Support vector machine (SVM), as a novel approach in pattern recognition, has demonstrated success in face detection and face recognition. In this paper, a face recognition approach based on the SVM classifier combined with the nearest neighbor classifier (NNC) is proposed. Principal component analysis (PCA) is used to reduce the dimension and extract features. Then the one-against-all strategy is used to train the SVM classifiers. At the testing stage, we propose an algorithm combining the SVM classifier with the NNC to improve the correct recognition rate. We conducted experiments on the Cambridge ORL face database. The results show that our approach outperforms the standard eigenface approach and some other approaches.
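
    The PCA-plus-nearest-neighbour stage can be sketched on synthetic data (random vectors standing in for face images, not the ORL experiment): project onto the leading principal components, then classify a query by its nearest training sample in that subspace.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the centered data: returns (mean, components),
    where components has one principal axis per row."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def nnc_predict(x_proj, train_proj, train_labels):
    """1-nearest-neighbour classification in the PCA feature space."""
    d = np.linalg.norm(train_proj - x_proj, axis=1)
    return int(train_labels[int(np.argmin(d))])

rng = np.random.default_rng(0)
# Two synthetic "identities" in 50-D: a base vector per class plus noise.
base = rng.normal(0, 1, (2, 50))
X = np.vstack([base[0] + 0.3 * rng.normal(size=(10, 50)),
               base[1] + 0.3 * rng.normal(size=(10, 50))])
labels = np.array([0] * 10 + [1] * 10)

mu, comps = pca_fit(X, n_components=5)
proj = (X - mu) @ comps.T
query = base[1] + 0.3 * rng.normal(size=50)   # a new image of identity 1
pred = nnc_predict((query - mu) @ comps.T, proj, labels)
```

    In the paper's hybrid, the NNC handles the test samples on which the one-against-all SVM outputs are ambiguous; here only the NNC half is shown.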

  1. Bagged ensemble of Fuzzy C-Means classifiers for nuclear transient identification

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero; Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico, E-mail: [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)


    Research highlights: > A bagged ensemble of classifiers is applied for nuclear transient identification. > Fuzzy C-Means classifiers are used as base classifiers of the ensemble. > Transients are simulated in the feedwater system of a boiling water reactor. > The ensemble is compared with a supervised, evolutionary-optimized FCM classifier. > The ensemble improves classification accuracy for datasets of large or very small size. - Abstract: This paper presents an ensemble-based scheme for nuclear transient identification. The approach adopted to construct the ensemble of classifiers is bagging; the novelty consists in using supervised fuzzy C-means (FCM) classifiers as base classifiers of the ensemble. The performance of the proposed classification scheme has been verified by comparison with a single supervised, evolutionary-optimized FCM classifier on the task of classifying artificial datasets. The results obtained indicate that in the case of datasets of large or very small size and/or complex decision boundaries, bagging ensembles can improve classification accuracy. The approach has then been applied to the identification of simulated transients in the feedwater system of a boiling water reactor (BWR).
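
    The bagging-with-majority-vote scheme can be sketched with a nearest-centroid base learner standing in for the supervised FCM classifier (a toy sketch, not the paper's implementation): each base classifier is trained on a bootstrap resample and the predictions are combined by majority vote.

```python
import numpy as np

def fit_centroids(X, y):
    """Base learner: one centroid per class (a crude stand-in for the
    paper's supervised fuzzy C-means classifier)."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_centroids(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def bagged_predict(X, y, X_test, n_bags=15, seed=0):
    """Bagging: train each base classifier on a bootstrap resample of the
    training data, then combine predictions by majority vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), len(X))
        # Guard: a bootstrap sample may miss a class entirely.
        if len(np.unique(y[idx])) < len(np.unique(y)):
            continue
        votes.append(predict_centroids(fit_centroids(X[idx], y[idx]), X_test))
    votes = np.array(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = bagged_predict(X, y, X, n_bags=15, seed=3)
acc = (pred == y).mean()
```

    The variance reduction from bagging is what helps most in the very-small-dataset regime highlighted in the abstract.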

  2. Evolving a Bayesian Classifier for ECG-based Age Classification in Medical Applications. (United States)

    Wiggins, M; Saad, A; Litt, B; Vachtsevanos, G


    OBJECTIVE: To classify patients by age based upon information extracted from their electro-cardiograms (ECGs). To develop and compare the performance of Bayesian classifiers. METHODS AND MATERIAL: We present a methodology for classifying patients according to statistical features extracted from their ECG signals using a genetically evolved Bayesian network classifier. Continuous signal feature variables are converted to a discrete symbolic form by thresholding, to lower the dimensionality of the signal. This simplifies calculation of conditional probability tables for the classifier, and makes the tables smaller. Two methods of network discovery from data were developed and compared: the first using a greedy hill-climb search and the second employed evolutionary computing using a genetic algorithm (GA). RESULTS AND CONCLUSIONS: The evolved Bayesian network performed better (86.25% AUC) than both the one developed using the greedy algorithm (65% AUC) and the naïve Bayesian classifier (84.75% AUC). The methodology for evolving the Bayesian classifier can be used to evolve Bayesian networks in general thereby identifying the dependencies among the variables of interest. Those dependencies are assumed to be non-existent by naïve Bayesian classifiers. Such a classifier can then be used for medical applications for diagnosis and prediction purposes.
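
    The thresholding-plus-naive-Bayes step can be sketched with hand-picked (hypothetical) probabilities: continuous features are reduced to binary symbols by thresholding, then combined under the class-conditional independence assumption that the evolved network relaxes.

```python
def discretize(value, threshold):
    """Continuous ECG feature -> binary symbol by thresholding."""
    return 1 if value >= threshold else 0

def naive_bayes_posterior(symbols, priors, likelihoods):
    """Naive Bayes over binary symbols: P(class | x) is proportional to
    P(class) * prod_i P(x_i | class).  `likelihoods[c][i]` is
    P(x_i = 1 | class c); features are assumed independent given the
    class, the assumption the evolved Bayesian network relaxes."""
    posts = []
    for c, prior in enumerate(priors):
        p = prior
        for i, s in enumerate(symbols):
            p *= likelihoods[c][i] if s == 1 else (1.0 - likelihoods[c][i])
        posts.append(p)
    z = sum(posts)
    return [p / z for p in posts]

# Two age classes, two thresholded features; all numbers are hypothetical.
symbols = [discretize(0.8, 0.5), discretize(0.2, 0.5)]   # -> [1, 0]
post = naive_bayes_posterior(symbols, priors=[0.5, 0.5],
                             likelihoods=[[0.9, 0.1], [0.2, 0.7]])
```

    Thresholding is what keeps the conditional probability tables small; the GA's job in the paper is to discover which feature nodes should actually be connected.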

  3. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah


    Full Text Available The classification of cancer diseases from gene data is a challenging job in biomedical data engineering. Various classifiers have been used to improve gene selection for cancer classification, but individual classifiers are not well validated, so an ensemble classifier is used for cancer gene classification, combining a neural network classifier with a random forest. The random forest is an ensembling technique in which a number of classifiers are combined through the leaf-node class assignments of the trees. In this paper we combine a neural network with a random forest ensemble classifier for the classification of cancer gene selection, for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble classifier methods, which follow an input-output paradigm of neural networks in which the members of the ensemble are selected from a set of neural network classifiers; here, the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of our proposed method we used UCI cancer disease datasets for classification. Our experimental results show better performance in comparison with random forest tree classification.

  4. Application of SVM classifier in thermographic image classification for early detection of breast cancer (United States)

    Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał


    This article presents the application of machine learning algorithms for early detection of breast cancer on the basis of thermographic images. A supervised learning model, the support vector machine (SVM), was implemented, with the Sequential Minimal Optimization (SMO) algorithm used for training the SVM classifier. The SVM classifier was included in a client-server application which makes it possible to create a training set of examinations and to apply classifiers (including SVM) for the diagnosis and early detection of breast cancer. The sensitivity and specificity of the SVM classifier were calculated based on the thermographic images from the studies. Furthermore, a heuristic method for tuning the SVM's parameters was proposed.

  5. Gaseous Electronics Tables, Atoms, and Molecules

    CERN Document Server

    Raju, Gorur Govinda


    With the constant emergence of new research and application possibilities, gaseous electronics is more important than ever in disciplines including engineering (electrical, power, mechanical, electronics, and environmental), physics, and electronics. The first resource of its kind, Gaseous Electronics: Tables, Atoms, and Molecules fulfills the author's vision of a stand-alone reference to condense 100 years of research on electron-neutral collision data into one easily searchable volume. It presents most--if not all--of the properly classified experimental results that scientists, researchers,

  6. Attack Simulation Algorithm Based on Multi-linear Classifier Fitting

    Institute of Scientific and Technical Information of China (English)

    吴玮斌; 刘功申


    In order to improve the anti-attack capability of classifiers in adversarial environments and during the training stage, this paper proposes a new attack simulation algorithm. Member classifiers are fitted to simulate and obtain the decision boundary used by the worst-case attack, and member classifiers with poor performance are removed according to a threshold setting, so that the final attack result is better than that of the mimicry attack algorithm. Experimental results show that the algorithm does not need to obtain specific information about the target classifier, and that it achieves higher security while maintaining classification accuracy.

  7. Removal of Dental Biofilms with an Ultrasonically Activated Water Stream. (United States)

    Howlin, R P; Fabbri, S; Offin, D G; Symonds, N; Kiang, K S; Knee, R J; Yoganantham, D C; Webb, J S; Birkin, P R; Leighton, T G; Stoodley, P


    Acidogenic bacteria within dental plaque biofilms are the causative agents of caries. Consequently, maintenance of a healthy oral environment with efficient biofilm removal strategies is important to limit caries, as well as halt progression to gingivitis and periodontitis. Recently, a novel cleaning device has been described using an ultrasonically activated stream (UAS) to generate a cavitation cloud of bubbles in a freely flowing water stream that has demonstrated the capacity to be effective at biofilm removal. In this study, UAS was evaluated for its ability to remove biofilms of the cariogenic pathogen Streptococcus mutans UA159, as well as Actinomyces naeslundii ATCC 12104 and Streptococcus oralis ATCC 9811, grown on machine-etched glass slides to generate a reproducible complex surface and on artificial teeth from a typodont training model. Biofilm removal was assessed both visually and microscopically using high-speed videography, confocal scanning laser microscopy (CSLM), and scanning electron microscopy (SEM). Analysis by CSLM demonstrated a statistically significant 99.9% removal of S. mutans biofilms exposed to the UAS for 10 s, relative to both untreated control biofilms and biofilms exposed to the water stream alone without ultrasonic activation. The UAS was also highly effective at removing S. mutans, A. naeslundii, and S. oralis biofilms from machine-etched glass, and S. mutans biofilms from typodont surfaces with complex topography. Consequently, UAS technology represents a potentially effective method for biofilm removal and improved oral hygiene.

  8. Carbon Nanotube Electron Sources for Air Purification Project (United States)

    National Aeronautics and Space Administration — The innovation proposed here focuses on cleansing air with high energy electrons. Bombardment by electrons has proven to be effective in removing a wide spectrum of...

  9. Ingrown toenail removal – discharge (United States)

    Onychocryptosis surgery; Onychomycosis; Unguis incarnates surgery; Ingrown toenail removal; Toenail ... PA: Elsevier Saunders; 2014:chap 51. Pollock M. Ingrown toenails. In: Pfenninger JL, Fowler GC, eds. Pfenninger and ...

  10. Region 9 Removal Sites 2012 (United States)

    U.S. Environmental Protection Agency — Point geospatial dataset representing locations of CERCLA (Superfund) Removal sites. CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act)...

  11. Investigation and in situ removal of spatter generated during laser ablation of aluminium composites (United States)

    Popescu, A. C.; Delval, C.; Shadman, S.; Leparoux, M.


    Spatter generated during laser irradiation of an aluminium alloy nanocomposite (AlMg5 reinforced with Al2O3 nanoparticles) was monitored by high speed imaging. Droplet trajectories and speeds were assessed by computerized image analysis. The effects of laser peak power and laser focusing on the plume expansion and expulsed droplet speeds were studied in air or under argon flow. It was found that the velocity of visible droplets expulsed laterally or at the end of the plume emission from the metal surface was not dependent on the plasma plume speed. The neighbouring areas of the irradiation sites were studied by optical and scanning electron microscopy. Droplets deposited on the surface were classified according to their size and counted using digital image processing software. It was observed that the number of droplets on the surface was 1.5-3 times higher when the laser beam was focused in depth as compared to beams focused on the surface, even though the populations' average diameters were comparable. Three methods were selected for removing droplets in situ, during plume expansion: an argon gas jet crossing the plasma plume, a fused silica plate collector transparent to the laser wavelength placed parallel to the irradiated surface, and a mask placed onto the aluminium composite surface. The argon gas jet was efficient only for low-power irradiation conditions, the fused silica plate failed in all tested conditions, and the mask was successful for all irradiation regimes.

  12. Risk and prognostic factors for non-specific musculoskeletal pain: A synthesis of evidence from systematic reviews classified into ICF dimensions



    A wide variety of risk factors for the occurrence and prognostic factors for persistence of non-specific musculoskeletal pain (MSP) are mentioned in the literature. A systematic review of all these factors is not available. Thus a systematic review was conducted to evaluate MSP risk factors and prognostic factors, classified according to the dimensions of the International Classification of Functioning, Disability and Health. Candidate systematic reviews were identified in electronic medical ...

  13. Adaptation in P300 brain-computer interfaces: A two-classifier cotraining approach

    DEFF Research Database (Denmark)

    Panicker, Rajesh C.; Sun, Ying; Puthusserypady, Sadasivan


    A cotraining-based approach is introduced for constructing high-performance classifiers for P300-based brain-computer interfaces (BCIs), which were trained from very little data. It uses two classifiers, Fisher's linear discriminant analysis and Bayesian linear discriminant analysis, progressively...
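
    The progressive-labeling idea can be sketched as a single-view self-training loop (a simplification: the paper uses two distinct LDA-family classifiers teaching each other, and far richer features than this toy): each round, the current classifier labels the unlabeled samples it is most confident about, and those join the labeled pool.

```python
import numpy as np

def train_mean_classifier(X, y):
    """Stand-in linear classifier: project onto the difference of class
    means, with the threshold at the overall mean (much cruder than the
    LDA variants in the paper)."""
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    b = -w @ X.mean(axis=0)
    return w, b

def self_train(Xl, yl, Xu, rounds=10, k=10):
    """Each round, pseudo-label the k unlabeled samples with the largest
    |score| (the most confident ones) and add them to the labeled pool."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        w, b = train_mean_classifier(Xl, yl)
        scores = Xu @ w + b
        conf = np.argsort(-np.abs(scores))[:k]
        Xl = np.vstack([Xl, Xu[conf]])
        yl = np.concatenate([yl, (scores[conf] > 0).astype(int)])
        Xu = np.delete(Xu, conf, axis=0)
    return train_mean_classifier(Xl, yl)

rng = np.random.default_rng(4)
X0 = rng.normal(-1.5, 1.0, (100, 2))
X1 = rng.normal(+1.5, 1.0, (100, 2))
Xl = np.vstack([X0[:3], X1[:3]])          # only 3 labeled samples per class
yl = np.array([0, 0, 0, 1, 1, 1])
Xu = np.vstack([X0[3:], X1[3:]])          # the rest are unlabeled
w, b = self_train(Xl, yl, Xu)

Xte = np.vstack([X0, X1])
yte = np.array([0] * 100 + [1] * 100)
acc = ((Xte @ w + b > 0).astype(int) == yte).mean()
```

    The two-classifier version is safer than this single-view loop because each classifier's confident mistakes can be caught by the other.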

  14. The Impact of Inappropriate Modeling of Cross-Classified Data Structures (United States)

    Meyers, Jason L.; Beretvas, S. Natasha


    Cross-classified random effects modeling (CCREM) is used to model multilevel data from nonhierarchical contexts. These models are widely discussed but infrequently used in social science research. Because little research exists assessing when it is necessary to use CCREM, 2 studies were conducted. A real data set with a cross-classified structure…

  15. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems. (United States)

    Foo, Brian; van der Schaar, Mihaela


    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is difficult because changing the filtering process at one classifier affects the feature values of data arriving at classifiers further downstream, and hence both the classification performance achieved by the ensemble and the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system; 2) we introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters among the interrelated classifiers deployed across the system; and 3) we provide distributed algorithms to reconfigure the system, and analyze them with respect to convergence, optimality, information-exchange overhead, and rate of adaptation to non-stationary data sources. We present results for different video classifier systems.
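
    The performance-versus-delay trade-off behind such a utility metric can be illustrated with a toy form: classification quality minus a queuing-delay penalty, with the delay taken from the M/M/1 mean sojourn time. This is an illustrative sketch, not the metric actually derived in the paper, and the rates and weight below are arbitrary assumptions:

```python
def classifier_utility(accuracy, input_rate, service_rate, delay_weight):
    """Toy utility for one binary filtering classifier: accuracy minus
    a penalty on mean sojourn time 1/(mu - lambda) of an M/M/1 queue."""
    if input_rate >= service_rate:
        return float("-inf")          # overloaded: delay grows unboundedly
    delay = 1.0 / (service_rate - input_rate)
    return accuracy - delay_weight * delay

# operating point A: accurate but slow; B: less accurate but faster
u_slow = classifier_utility(0.95, input_rate=9.0, service_rate=10.0, delay_weight=0.2)
u_fast = classifier_utility(0.85, input_rate=9.0, service_rate=15.0, delay_weight=0.2)
print(u_slow, u_fast)   # → 0.75 0.8166...  (the faster configuration wins)
```

Under heavy load the delay term dominates, which is why reconfiguring a cascade can favour cheaper, less accurate classifiers upstream.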

  16. RIT-CIA Case Study: Classified Research in a University Context. (United States)

    Carl, W. John, III

    A controversy at the Rochester Institute of Technology (RIT) in New York State over that institution's involvement with classified research for the Central Intelligence Agency (CIA) raised issues regarding classified research and institutional leadership. In 1991 M. Richard Rose, then president of RIT, took a 4-month sabbatical to work for the…

  17. 3 CFR - Implementation of the Executive Order, “Classified National Security Information” (United States)


    ... 3 The President 1 2010-01-01 2010-01-01 false Implementation of the Executive Order, “Classified National Security Information” Presidential Documents Other Presidential Documents Memorandum of December 29, 2009 Implementation of the Executive Order, “Classified National Security Information” Memorandum for the Heads of Executive Departments...

  18. The iPhyClassifier, an interactive online tool for phytoplasma classification and taxonomic assignment (United States)

    The iPhyClassifier is an Internet-based research tool for quick identification and classification of diverse phytoplasmas. The iPhyClassifier simulates laboratory restriction enzyme digestions and subsequent gel electrophoresis and generates virtual restriction fragment length polymorphism (RFLP) p...
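
    The core of a virtual RFLP analysis, simulated restriction digestion, can be sketched in a few lines: find every recognition site in a sequence, cut at the enzyme's offset within the site, and report fragment lengths as a virtual gel lane. This is a generic sketch, not the iPhyClassifier implementation; the short example sequence is made up, while AluI's AG^CT site is a real enzyme specificity:

```python
def digest(seq, site, cut_offset):
    """Simulate a restriction digestion of a linear sequence: cut at
    every occurrence of `site`, `cut_offset` bases into the site.
    Returns fragment lengths, i.e. one virtual RFLP gel lane."""
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i + cut_offset)
        i = seq.find(site, i + 1)
    edges = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(edges, edges[1:])]

# AluI recognizes AG^CT (blunt cut between G and C, offset 2)
frags = digest("GGAGCTAAAAGCTGG", "AGCT", 2)
print(frags)   # → [4, 7, 4]
```

Comparing such fragment-length patterns across enzymes is what lets the tool assign a phytoplasma to a 16Sr group without a wet-lab gel.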

  19. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.


    This paper introduces a real-time automatic scene classifier for content-based video retrieval. In our envisioned approach, end users such as documentalists, not image-processing experts, build classifiers interactively, simply by indicating positive examples of a scene. Classification consists of a

  20. 41 CFR 102-34.45 - How are passenger automobiles classified? (United States)


    ... MANAGEMENT Obtaining Fuel Efficient Motor Vehicles § 102-34.45 How are passenger automobiles classified... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false How are passenger automobiles classified? 102-34.45 Section 102-34.45 Public Contracts and Property Management Federal...