WorldWideScience

Sample records for washing machine method

  1. Environmental diagnosis of the washing machine motor

    DEFF Research Database (Denmark)

    Erichsen, Hanne K. Linnet

    1997-01-01

An environmental diagnosis of the washing machine focusing on the motor is performed. The goal of the diagnosis is to designate environmental focus points in the product. The LCA of the washing machine showed impact potentials from the life cycle of the product (see: LCA of a washing machine). ... up 2%, manual disassembly and recycling of metals, reuse of the motor in a new washing machine, aluminium wire instead of copper wire in the motor...

  2. Washing method of filter

    International Nuclear Information System (INIS)

    Izumidani, Masakiyo; Tanno, Kazuo.

    1978-01-01

Purpose: To enable automatic filter operation and facilitate back-washing by back-washing filters used in a BWR nuclear power plant utilizing exhaust gas from a ventilator or air conditioner. Method: Exhaust gas from an exhaust pipe of a ventilator or air conditioner is pressurized in a compressor and then introduced into a back-washing gas tank. Then, the exhaust gas, pressurized to a predetermined pressure, is blown from the inside to the outside of a filter to separate impurities collected on the filter elements and carry them to a waste tank. (Furukawa, Y.)

  3. Towards a durability test for washing-machines.

    Science.gov (United States)

    Stamminger, Rainer; Tecchio, Paolo; Ardente, Fulvio; Mathieux, Fabrice; Niestrath, Phoebe

    2018-04-01

Durability plays a key role in enhancing resource conservation and contributing to waste minimization. The washing-machine product group represents a relevant case study for the development of a durability test and as a potential trigger to systematically address durability in the design of products. As the main objective of this research, we developed a procedure to test the durability performance of washing-machines. The research method consisted of an analysis of available durability standards and procedures to test products and components, followed by an analysis of relevant references related to frequent failures. Finally, we defined the criteria and the conditions for a repeatable, relatively fast and relevant endurance test. The durability test considered the whole product tested under conditions of stress. A series of spinning cycles with fixed imbalanced loads was run on two washing-machines to observe failures and performance changes during the test. Even though no hard failures occurred, results clearly showed that not all washing-machines can sustain such a test without abrasion or performance deterioration. However, the attempt to reproduce the stress induced on a washing-machine by carrying out a high number of pure spinning cycles with fixed loads did not allow equal testing conditions: the actions of the control procedure regarding imbalanced loads differ from machine to machine. The outcomes of this research can be used as grounds to develop standardised durability tests and, hence, to contribute to the development of future product policy measures.

  4. 33 CFR 157.124 - COW tank washing machines.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false COW tank washing machines. 157... OIL IN BULK Crude Oil Washing (COW) System on Tank Vessels Design, Equipment, and Installation § 157.124 COW tank washing machines. (a) COW machines must be permanently mounted in each cargo tank. (b...

  5. Evaluation of washing machine load potential for smart grid integration

    NARCIS (Netherlands)

    Klaassen, E.A.M.; Kobus, C.B.A.; Huijkelom, M.; Frunt, J.; Slootweg, J.G.

    2013-01-01

    This paper presents the lessons learned from a smart wash pilot, conducted with 24 employees of distribution system operator Enexis, who were equipped with an energy computer, smart washing machine, photovoltaic panels and smart meter. The pilot goal was to gain experience and knowledge about the

  6. Radiotracer study of wash load movement in a drum-type fabric washing machine using a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Balt, A.P.; Brekel, L.D.M. van den; Vandecasteele, C.; Kolar, Z.

    1987-01-01

A study was made of the movement of the wash loads in a drum-type washing machine. For this purpose a ⁹⁹ᵐTc source was attached to one or two separate textile pieces and the subsequent source positions were determined by means of a gamma camera. The wash load movement pattern appears to depend on the type of textile material and its amount, as well as on the volume of water present in the washing machine.

  7. Radiotracer study of wash load movement in a drum-type fabric washing machine using a gamma camera

    International Nuclear Information System (INIS)

    Balt, A.P.; Brekel, L.D.M. van den; Vandecasteele, C.; Kolar, Z.

    1987-01-01

A study was made of the movement of the wash loads in a drum-type washing machine. For this purpose a ⁹⁹ᵐTc source was attached to one or two separate textile pieces and the subsequent source positions were determined by means of a gamma camera. The wash load movement pattern appears to depend on the type of textile material and its amount, as well as on the volume of water present in the washing machine. (author)

  8. 78 FR 28577 - Notification of Proposed Production Activity: Whirlpool Corporation Subzone 8I; (Washing Machines...

    Science.gov (United States)

    2013-05-15

    ... machine parts to the list of approved finished products and would also add imported components to the...; refrigeration parts; dishwashing machine parts; drying machine parts; water inlet valves; AC/DC fan motors; AC... Activity: Whirlpool Corporation Subzone 8I; (Washing Machines): Clyde and Green Springs, Ohio Whirlpool...

  9. Washing Habits and Machine with Intake of hot and cold Water

    DEFF Research Database (Denmark)

    Christensen, Bente Lis; Nørgaard, Jørgen

    1997-01-01

Domestic washing machines typically spend around 80% of the electricity on heating water. Most of this can be replaced by more appropriate heat sources, like district heat from combined heat and power production or a gas heating system. In recent years some washing machine manufacturers have marketed machines which can take in both hot and cold water and mix it to the wanted temperature. One such machine has been tested in daily household use over 5 months, with habits of very few hot-water washes. The result is an electricity consumption corresponding to 67 kWh per year for an average household with slightly adapted washing habits, or 17% of normal today. If the heat is supplied from combined heat and power production, as in the actual experiment, CO2 emission is reduced by 81%. With hot water from oil or gas heaters the reduction will be slightly lower, while with solar hot water it will be larger.

  10. A method for treating clayless wash fluids

    Energy Technology Data Exchange (ETDEWEB)

    Deykalo, T A; Dzhumagaliyev, T N; Skvortsov, D S

    1980-02-18

To increase the heat and salt resistance of a wash fluid, monoethanolamine-processed waste of licorice production (grist), in a volume of 5-8% by weight, is introduced into it as the disperse phase. The processing of the grist is conducted for 1-2 hours at 20-100°C and the volume of the monoethanolamine is 0.05-0.1% by weight. The properties of washing fluids treated with the grist did not deteriorate when 20% CaCl₂ was introduced into them, while complete coagulation was achieved with its introduction into washing fluids based on KMTs. Grist washing liquids do not lose their properties up to a temperature of 200°C, do not cause equipment corrosion, are inert to swelling clay rocks, and with the introduction of KMTs at a temperature above 130-140°C cause insignificant destruction of the reagent, which is accompanied by a change in the color of the solutions and a drop in the degree of polymerization and viscosity.

  11. 78 FR 54449 - Subzone 8I, Authorization of Production Activity, Whirlpool Corporation (Washing Machines); Clyde...

    Science.gov (United States)

    2013-09-04

    ... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [B-43-2013] Subzone 8I, Authorization of Production Activity, Whirlpool Corporation (Washing Machines); Clyde and Green Springs, Ohio On May 1, 2013, Whirlpool Corporation (Whirlpool) submitted a notification of proposed production activity to the Foreign...

  12. Inappropriate shock delivery by implantable cardioverter defibrillator due to electrical interference with washing machine.

    Science.gov (United States)

    Chongtham, Dhanaraj Singh; Bahl, Ajay; Kumar, Rohit Manoj; Talwar, K K

    2007-05-31

    We report a patient with hypertrophic cardiomyopathy who received an inappropriate implantable cardioverter defibrillator shock due to electrical interference from a washing machine. This electrical interference was detected as an episode of ventricular fibrillation with delivery of shock without warning symptoms.

  13. MI (2-methyl-4-isothiazolin-3-one) contained in detergents is not detectable in machine washed textiles

    DEFF Research Database (Denmark)

    Hofmann, Maja A; Giménez-Arnau, Ana; Aberer, Werner

    2018-01-01

Background: European legislation has banned the preservative methylisothiazolinone (MI) from inclusion in leave-on cosmetics. However, the risk for allergic reactions depends on exposure. The aim of this study was to determine the risk of MI in laundry detergents for household machine washing. Methods: Different formulations of laundry detergents with commercial MI levels, up to one thousand ppm, were used, and three different types of clothes were washed in a normal household machine setting one time and 10 times. The level of MI was measured by HPLC. Results: While MI could be retrieved... ...hazard as a strong contact allergen, the risk depends on exposure. Regarding the risk of exposure levels for the consumer to MI in clothes, it can be stated that the use of MI in laundry detergents is safe for the consumer if these products are used according to the instructions in the normal household...

  14. Candida and Fusarium species known as opportunistic human pathogens from customer-accessible parts of residential washing machines.

    Science.gov (United States)

    Babič, Monika Novak; Zalar, Polona; Ženko, Bernard; Schroers, Hans-Josef; Džeroski, Sašo; Gunde-Cimerman, Nina

    2015-03-01

Energy constraints have altered consumer practice regarding the use of household washing machines. Washing machines were developed that use lower washing temperatures, smaller amounts of water and biodegradable detergents. These conditions may favour the enrichment of opportunistic human pathogenic fungi. We focused on the isolation of fungi from two user-accessible parts of washing machines that often contain microbial biofilms: drawers for detergents and rubber door seals. Out of 70 residential washing machines sampled in Slovenia, 79% were positive for fungi. In total, 72 strains belonging to 12 genera and 26 species were isolated. Among these, members of the Fusarium oxysporum and Fusarium solani species complexes, Candida parapsilosis and Exophiala phaeomuriformis represented 44% of fungi detected. These species are known as opportunistic human pathogens and can cause skin, nail or eye infections even in healthy humans. A machine learning analysis revealed that the presence of detergents and softeners, followed by washing temperature, are the most critical factors for fungal colonization. Three washing machines with persisting malodour that resulted in bad-smelling laundry were analysed for the presence of fungi and bacteria. In these cases, fungi were isolated in low numbers (7.5%), while the bacteria Micrococcus luteus, Pseudomonas aeruginosa, and Sphingomonas species prevailed. Copyright © 2014 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  15. Removal of uranium from gravel using soil washing method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ilgook; Kim, Kye-Nam; Kim, Seung-Soo; Choi, Jong-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

The development of nuclear technology has led to increasing amounts of uranium-containing radioactive waste being released and disposed of at nuclear sites. Fine-grained soils with a size of less than 4 mm are normally decontaminated using soil washing and electro-kinetic technologies. However, there have been few studies on the decontamination of gravels with a size of more than 4 mm. Therefore, it is necessary to study the decontamination of gravel contaminated with radionuclides. The main objective of the present study on soil washing was to define the optimal condition for acid treatment of uranium-polluted gravel. In this study, the soil washing method was applied to remove uranium from gravel. The gravel was crushed and classified by particle size. The gravel particles were treated with sulfuric acid in a shaking incubator at 60°C and 150 rpm for 3 h. The optimal particle size of gravel for soil washing in removal of uranium was between 0.45 and 2.0 mm.

  16. Release of synthetic microplastic plastic fibres from domestic washing machines: Effects of fabric type and washing conditions.

    Science.gov (United States)

    Napper, Imogen E; Thompson, Richard C

    2016-11-15

Washing clothes made from synthetic materials has been identified as a potentially important source of microscopic fibres to the environment. This study examined the release of fibres from polyester, polyester-cotton blend and acrylic fabrics. These fabrics were laundered under various conditions of temperature, detergent and conditioner. Fibres from waste effluent were examined and the mass, abundance and fibre size compared between treatments. Average fibre size ranged between 11.9 and 17.7 μm in diameter, and 5.0 and 7.8 mm in length. Polyester-cotton fabric consistently shed significantly fewer fibres than either polyester or acrylic. However, fibre release varied according to wash treatment, with various complex interactions. We estimate over 700,000 fibres could be released from an average 6 kg wash load of acrylic fabric. As fibres have been reported in effluent from sewage treatment plants, our data indicate that fibres released by the washing of clothing could be an important source of microplastics to aquatic habitats. Copyright © 2016 Elsevier Ltd. All rights reserved.
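The per-load fibre count quoted in the record above can be reproduced with simple arithmetic: weigh the fibres filtered from the wash effluent, then divide by the mass of a single fibre modelled as a cylinder of the average reported diameter and length. A minimal sketch, where the polymer density and the input values are illustrative assumptions rather than figures from the study:

```python
import math

def estimate_fibre_count(total_mass_g, diameter_um, length_mm, density_g_cm3):
    """Estimate fibre count from collected fibre mass and average dimensions.

    A single fibre is modelled as a cylinder: m = rho * pi * r^2 * L.
    """
    radius_cm = (diameter_um / 2) * 1e-4   # um -> cm
    length_cm = length_mm * 1e-1           # mm -> cm
    single_fibre_mass_g = density_g_cm3 * math.pi * radius_cm**2 * length_cm
    return total_mass_g / single_fibre_mass_g

# Illustrative values only: 0.5 g of collected acrylic fibre
# (density ~1.18 g/cm^3) at roughly the mid-range of the reported
# dimensions (15 um diameter, 6 mm length).
count = estimate_fibre_count(0.5, 15, 6, 1.18)
print(f"~{count:,.0f} fibres")
```

With these assumed inputs the estimate lands in the hundreds of thousands of fibres per load, the same order of magnitude as the study's figure.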

  17. Electrochemical advanced oxidation processes as decentralized water treatment technologies to remediate domestic washing machine effluents.

    Science.gov (United States)

    Dos Santos, Alexsandro Jhones; Costa, Emily Cintia Tossi de Araújo; da Silva, Djalma Ribeiro; Garcia-Segura, Sergi; Martínez-Huitle, Carlos Alberto

    2018-03-01

Water scarcity is one of the major concerns worldwide. In order to secure this appreciated natural resource, management and development of water treatment technologies are mandatory. One feasible alternative is the consideration of water recycling/reuse at the household scale. Here, the treatment of actual washing machine effluent by electrochemical advanced oxidation processes was considered. Electrochemical oxidation and electro-Fenton technologies can be applied as decentralized small-scale water treatment devices. Decolorization efficiency and total organic abatement were therefore followed. The results demonstrate the promising performance of the solar photoelectro-Fenton process, where complete color and organic removal was attained after 240 min of treatment under optimum conditions by applying a current density of 66.6 mA cm⁻². Thus, electrochemical technologies emerge as promising water-sustainable approaches.
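The operating point quoted in the record above (66.6 mA cm⁻²) is a current density; the current actually drawn scales with electrode area, and the specific energy consumption follows from cell voltage, current, treatment time and treated volume. A sketch of that arithmetic, where the electrode area, cell voltage and cell volume are illustrative assumptions, not figures from the study:

```python
def applied_current_A(j_mA_cm2, area_cm2):
    """Total applied current from current density and electrode area."""
    return j_mA_cm2 * area_cm2 / 1000.0

def energy_per_m3_kwh(current_A, cell_voltage_V, minutes, volume_L):
    """Specific energy consumption in kWh per m^3 of treated effluent."""
    energy_kwh = current_A * cell_voltage_V * (minutes / 60.0) / 1000.0
    return energy_kwh / (volume_L / 1000.0)

# Illustrative: a 10 cm^2 electrode at 66.6 mA/cm^2, a 5 V cell,
# 240 min of treatment on a 0.1 L lab-scale cell.
i = applied_current_A(66.6, 10)
e = energy_per_m3_kwh(i, 5.0, 240, 0.1)
print(f"I = {i:.3f} A, specific energy ~{e:.0f} kWh/m^3")
```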

  18. Status and progress in sludge washing: A pivotal pretreatment method

    International Nuclear Information System (INIS)

    Barton, W.B.; MacLean, G.T.; Meng, C.D.; Winkler, C.M.

    1995-01-01

Separation of the bulk soluble chemical salts from the insoluble metal hydroxides and radionuclides is central to the strategy of disposing of Hanford tank waste. Sludge washing and caustic leaching have been selected as the primary methods for processing the 230 million L (61,000,000 gal) of Hanford tank waste. These processes are very similar to those selected for processing waste at the West Valley Site in New York and the Savannah River Site in South Carolina. The purpose of sludge washing is to dissolve and remove the soluble salts in the waste. Leaching of the insoluble solids with caustic will be used to dissolve aluminum hydroxide and chromium hydroxide, and convert insoluble bismuth phosphate to soluble phosphate. The waste will be separated into a high-level solids fraction and a liquid fraction that can be disposed of as low-level waste after cesium removal. The washing and leaching operations involve batchwise mixing, settling, and decanting within the existing underground storage tanks.

  19. 'If an Eye Is Washed Properly, It Means It Would See Clearly': A Mixed Methods Study of Face Washing Knowledge, Attitudes, and Behaviors in Rural Ethiopia.

    Directory of Open Access Journals (Sweden)

    Kristen Aiemjoy

    2016-10-01

Face cleanliness is a core component of the SAFE (Surgery, Antibiotics, Facial cleanliness, and Environmental improvements) strategy for trachoma control. Understanding knowledge, attitudes, and behaviors related to face washing may be helpful for designing effective interventions for improving facial cleanliness. In April 2014, a mixed methods study including focus groups and a quantitative cross-sectional study was conducted in the East Gojjam zone of the Amhara region of Ethiopia. Participants were asked about face washing practices, motivations for face washing, use of soap (which may reduce bacterial load), and fly control strategies. Overall, both knowledge and reported practice of face washing were high. Participants reported they knew that washing their own face and their children's faces daily was important for hygiene and infection control. Although participants reported high knowledge of the importance of soap for face washing, quantitative data revealed strong variation by community in the use of soap for face washing, ranging from 4.4% to 82.2% of households reporting using soap for face washing. Cost and forgetfulness were cited as barriers to the use of soap for face washing. Keeping flies from landing on children was a commonly cited motivator for regular face washing, as was trachoma prevention. Interventions aiming to improve facial cleanliness for trachoma prevention should focus on habit formation (to address forgetfulness) and address barriers to the use of soap, such as reducing cost. Interventions that focus solely on improving knowledge may not be effective for changing face-washing behaviors.

  20. Machine learning methods for planning

    CERN Document Server

    Minton, Steven

    1993-01-01

Machine Learning Methods for Planning provides information pertinent to learning methods for planning and scheduling. This book covers a wide variety of learning methods and learning architectures, including analogical, case-based, decision-tree, explanation-based, and reinforcement learning. Organized into 15 chapters, this book begins with an overview of planning and scheduling and describes some representative learning systems that have been developed for these tasks. This text then describes a learning apprentice for calendar management. Other chapters consider the problem of temporal credit assignment.

  1. Tracking by Machine Learning Methods

    CERN Document Server

    Jofrehei, Arash

    2015-01-01

Current track-reconstruction methods start with two points and then, for each layer, loop through all possible hits to find proper hits to add to that track. Another idea would be to use the large number of already reconstructed events and/or simulated data and train a machine on this data to find tracks given hit pixels. Training time could be long, but real-time tracking is really fast. Simulation might not be as realistic as real data, but tracking has been done for it with 100 percent efficiency, while by using real data we would probably be limited to the current efficiency.

  2. Prevalence of β-lactamase genes in domestic washing machines and dishwashers and the impact of laundering processes on antibiotic-resistant bacteria.

    Science.gov (United States)

    Rehberg, L; Frontzek, A; Melhus, Å; Bockmühl, D P

    2017-12-01

To investigate the prevalence of β-lactamase genes in domestic washing machines and dishwashers, and the decontamination efficacy of laundering. For the first investigation, swab samples from washing machines (n = 29) and dishwashers (n = 24) were analysed by real-time quantitative PCR to detect genes encoding β-lactamases. To test the impact of laundering on resistant bacteria, cotton test swatches were artificially contaminated with susceptible and resistant strains of Pseudomonas aeruginosa, Klebsiella pneumoniae and Staphylococcus aureus in a second investigation. They were washed in a domestic washing machine with or without activated oxygen bleach (AOB)-containing detergent at 20-50°C. β-Lactamase genes (most commonly of the AmpC and OXA type) were detected in 79% of the washing machines and in 96% of the dishwashers, and Pseudomonadaceae dominated the microbiota. The level of bacterial reduction after laundering was ≥80% for all Ps. aeruginosa and Kl. pneumoniae strains, while it was only 37-61% for the methicillin-resistant Staph. aureus outbreak strain. In general, the reduction tended to be higher for susceptible bacteria than for the resistant outbreak strains, especially for Staph. aureus. β-Lactamase genes seem to be frequently present in domestic appliances and may pose a potential risk for cross-contamination and horizontal transfer of genes encoding resistance against clinically important β-lactams. In general, higher temperatures and the use of AOB can improve the reduction of antibiotic-resistant bacteria, including Staph. aureus, which appears to be less susceptible to the decontamination effect of laundering. Data on the presence of antibiotic-resistant bacteria in the domestic environment are limited. This study suggests that β-lactamase genes in washing machines and dishwashers are frequent, and that antibiotic-resistant strains are generally more resistant to the used washing conditions. © 2017 The Society for
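The reduction levels in the record above (≥80%, 37-61%) are percentage reductions in viable counts on the test swatches; hygiene-efficacy studies commonly express the same data as log10 reduction factors. A small sketch showing both metrics — the CFU counts are invented for illustration, not data from the study:

```python
import math

def percent_reduction(cfu_before, cfu_after):
    """Percentage reduction in viable counts after laundering."""
    return 100.0 * (cfu_before - cfu_after) / cfu_before

def log10_reduction(cfu_before, cfu_after):
    """Log10 reduction factor, the usual hygiene-efficacy metric."""
    return math.log10(cfu_before / cfu_after)

# Illustrative counts only (CFU per swatch before/after washing):
before, after = 1_000_000, 150_000
print(f"{percent_reduction(before, after):.0f}% reduction "
      f"({log10_reduction(before, after):.2f} log10)")
```

Note that an 85% reduction is still below a single log10 step, which is why percentage figures in this range can look reassuring while remaining modest in disinfection terms.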

  3. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

The article considers how to achieve machining accuracy on CNC machines by applying innovative methods in the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also offers the visual clarity inherent in both topological models and structural matrices, as well as the robustness of linear algebra in the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have enabled a considerable reduction in the amplitude of relative motion, the exclusion of some resonance zones within the spindle speed range of 0-6000 min⁻¹, and improved machining accuracy.

  4. Mathematical method to build an empirical model for inhaled anesthetic agent wash-in

    Directory of Open Access Journals (Sweden)

    Grouls René EJ

    2011-06-01

Background: The wide range of fresh gas flow - vaporizer setting (FGF - FD) combinations used by different anesthesiologists during the wash-in period of inhaled anesthetics indicates that the selection of FGF and FD is based on habit and personal experience. An empirical model could rationalize FGF - FD selection during wash-in. Methods: During model derivation, 50 ASA PS I-II patients received desflurane in O2 with an ADU® anesthesia machine with a random combination of a fixed FGF - FD setting. The resulting course of the end-expired desflurane concentration (FA) was modeled with Excel Solver, with patient age, height, and weight as covariates; NONMEM was used to check for parsimony. The resulting equation was solved for FD, and prospectively tested by having the formula calculate the FD to be used by the anesthesiologist after randomly selecting an FGF, a target FA (FAt), and a specified time interval (1-5 min after turning on the vaporizer) after which FAt had to be reached. The following targets were tested: desflurane FAt 3.5% after 3.5 min (n = 40), 5% after 5 min (n = 37), and 6% after 4.5 min (n = 37). Results: Solving the equation derived during model development for FD yields FD = -(e^(-FGF*-0.23 + FGF*0.24) * (e^(FGF*-0.23)*FAt*Ht*0.1 - e^(FGF*-0.23)*FGF*2.55 + 40.46 - e^(FGF*-0.23)*40.46 + e^(FGF*-0.23 + Time/-4.08)*40.46 - e^(Time/-4.08)*40.46)) / ((-1 + e^(FGF*0.24)) * (-1 + e^(Time/-4.08)) * 39.29). Only height (Ht) could be retained as a significant covariate. Median performance error and median absolute performance error were -2.9 and 7.0% in the 3.5% after 3.5 min group, -3.4 and 11.4% in the 5% after 5 min group, and -16.2 and 16.2% in the 6% after 4.5 min group, respectively. Conclusions: An empirical model can be used to predict the FGF - FD combinations that attain a target end-expired anesthetic agent concentration with clinically acceptable accuracy within the first 5 min of the start of administration. The sequences are easily calculated in an Excel file and simple to
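The bias and inaccuracy figures quoted in the record above (median performance error, median absolute performance error) follow the usual definitions from pharmacokinetic model evaluation: a per-measurement performance error of 100·(measured − target)/target, summarized by its median (bias) and the median of its absolute values (inaccuracy). A sketch under that assumption; the concentrations below are illustrative, not study data:

```python
import statistics

def performance_errors(measured, target):
    """Per-measurement performance errors: PE_i = 100*(measured_i - target)/target."""
    return [100.0 * (m - target) / target for m in measured]

def mdpe_mdape(measured, target):
    """Median performance error (bias) and median absolute
    performance error (inaccuracy), both in percent."""
    pes = performance_errors(measured, target)
    return statistics.median(pes), statistics.median(abs(pe) for pe in pes)

# Illustrative end-expired concentrations around a 3.5% target:
fa_measured = [3.3, 3.4, 3.6, 3.2, 3.7]
mdpe, mdape = mdpe_mdape(fa_measured, 3.5)
print(f"MDPE = {mdpe:.1f}%, MDAPE = {mdape:.1f}%")
```

A negative MDPE, as in the study's groups, means the measured concentration tended to undershoot the target.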

  5. Loads of green washing—can behavioural economics increase willingness-to-pay for efficient washing machines in the UK?

    International Nuclear Information System (INIS)

    Bull, Joe

    2012-01-01

    The EU energy efficiency labelling scheme has successfully increased demand for efficient appliances by providing a prominent letter rating to consumers during their product search. Behavioural economics (BE) suggests the adoption of efficient appliances could be accelerated further by: (i) monetising efficiency, (ii) directly communicating the link between efficiency and carbon emissions, (iii) communicating the long-term (e.g., operational life) financial and environmental impact of product choices, and, (iv) framing the information as avoided losses (‘extra’) rather than gains (‘savings’). This study tested these hypotheses via an online, stated preference survey of 465 UK residents using actual market pricing and efficiency data for washing machines. The provision of information about either running costs or running emissions did encourage selection of more expensive and efficient products (strongly correlated variables in the choice set and actual market) (p<.001, r=.30–.41) with no statistically significant difference between their impact. Providing lifetime running cost information also nudged respondents towards more expensive, efficient products than annual information (p=.006–.027, r=.15–.18). Finally, loss-framed running emissions information encouraged selection of more expensive, efficient products than gain-framed information (p<.05, r=.13–.16). The EU label is an impracticable means of communication, but digital communication channels (e.g., price comparison websites) may offer potential for inexpensive localisation and personalisation of the information. The results suggest that successful communication could benefit the private sector whilst making a contribution to reducing carbon emissions. - Highlights: ► 465 UK residents made a hypothetical washing machine purchase via an online survey. ► Respondents were exposed to either running cost or running emissions information. ► Running cost information increased willingness
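The annual-versus-lifetime framing tested in the record above reduces to simple arithmetic: annual energy use × electricity price × expected operational life, with the loss frame presenting the difference as an "extra" cost of the less efficient choice. A sketch with illustrative assumptions; the consumption figures, price and lifetime below are not from the study:

```python
def running_cost(annual_kwh, price_per_kwh, years=1):
    """Running cost of an appliance over a given horizon."""
    return annual_kwh * price_per_kwh * years

# Illustrative figures only: two machines, UK-style pricing,
# and an assumed 11-year operational life.
efficient_kwh, inefficient_kwh = 150, 250   # kWh/year
price, lifetime = 0.30, 11                  # GBP/kWh, years

annual_gap = (running_cost(inefficient_kwh, price)
              - running_cost(efficient_kwh, price))
lifetime_gap = (running_cost(inefficient_kwh, price, lifetime)
                - running_cost(efficient_kwh, price, lifetime))
# Loss framing: "the less efficient machine costs you an EXTRA ..."
print(f"extra ~GBP {annual_gap:.0f}/year, "
      f"~GBP {lifetime_gap:.0f} over {lifetime} years")
```

The lifetime figure is an order of magnitude larger than the annual one, which is the nudge the study found moved respondents toward more efficient products.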

  6. Study of electroflotation method for treatment of wastewater from washing soil contaminated by heavy metals

    OpenAIRE

    de Oliveira da Mota, Izabel; de Castro, José Adilson; de Góes Casqueira, Rui; de Oliveira Junior, Angelo Gomes

    2015-01-01

The electroflotation method (EFM) for the treatment of synthetic solutions simulating wastewater from washing soil contaminated by drilling fluids from oil wells was investigated in this paper. Experiments were carried out to examine the effects of the operating conditions on the removal of lead, barium and zinc from solutions containing 15 mg dm−3 of each metal, representing a typical concentration of the wastewater generated by soil washing in this treatment. The experimental results showed that it...

  7. Electrical machining method of insulating ceramics

    International Nuclear Information System (INIS)

    Fukuzawa, Y.; Mohri, N.; Tani, T.

    1999-01-01

This paper describes a new electrical discharge machining method for insulating ceramics using an assisting electrode with either a sinking electrical discharge machine or a wire electrical discharge machine. In this method, a metal sheet or mesh is attached to the ceramic surface as an assisting material for discharge generation around the insulator surface. When machining proceeds from the attached material into the workpiece, a cracked carbon layer is formed on the workpiece surface. As this layer is electrically conductive, electrical discharge occurs in the working oil between the tool electrode and the surface of the workpiece. The carbon is formed from the working oil during this electrical discharge. Even after the material is machined, electrical discharge occurs in the gap region between the tool electrode and the ceramic because an electrically conductive layer is generated continuously. Insulating ceramics can thus be machined by the electrical discharge machining method using the above-mentioned surface modification phenomenon. In this paper the authors present a machined example demonstrating that the proposed method is suitable for machining complex shapes on insulating ceramics. Copyright (1999) AD-TECH - International Foundation for the Advancement of Technology Ltd

  8. Finite Element Method in Machining Processes

    CERN Document Server

    Markopoulos, Angelos P

    2013-01-01

    Finite Element Method in Machining Processes provides a concise study on the way the Finite Element Method (FEM) is used in the case of manufacturing processes, primarily in machining. The basics of this kind of modeling are detailed to create a reference that will provide guidelines for those who start to study this method now, but also for scientists already involved in FEM and want to expand their research. A discussion on FEM, formulations and techniques currently in use is followed up by machining case studies. Orthogonal cutting, oblique cutting, 3D simulations for turning and milling, grinding, and state-of-the-art topics such as high speed machining and micromachining are explained with relevant examples. This is all supported by a literature review and a reference list for further study. As FEM is a key method for researchers in the manufacturing and especially in the machining sector, Finite Element Method in Machining Processes is a key reference for students studying manufacturing processes but al...

  9. Preservation Method and Phosphate Buffered Saline Washing Affect the Acute Myeloid Leukemia Proteome

    Directory of Open Access Journals (Sweden)

    Rebecca Wangen

    2018-01-01

    Full Text Available Acute myeloid leukemia (AML) primary cells can be isolated from peripheral blood, suspended in media containing bovine serum and cryoprotectant, and stored in liquid nitrogen before being processed for proteomic analysis by mass spectrometry (MS). The presence of bovine serum and human blood proteins in AML samples can hamper the identification of proteins and thereby reduce the proteome coverage of the study. Herein, we have established the effect of phosphate buffered saline (PBS) washing on AML patient samples stored in media. Although PBS washes effectively removed serum and blood contaminants, the saline wash resulted in cell bursting and considerable loss of protein material. We also compared different methods of preserving the AML proteome from THP-1 and Molm-13 cell lines before MS analysis: (1) storage in media containing bovine serum and dimethyl sulfoxide (DMSO); (2) storage as dried cell pellets; and (3) storage as cell lysates in 4% sodium dodecyl sulfate (SDS). MS analysis of the differently preserved AML cell samples shows that preservation with DMSO produces a high number of fragile cells that burst during freezing and thawing. Our studies encourage the use of alternative preservation methods for future MS analysis of the AML proteome.

  10. Soil washing

    International Nuclear Information System (INIS)

    Neuman, R.S.; Diel, B.N.; Halpern, Y.

    1992-01-01

    Disposal of soils or sludges contaminated with organic and inorganic compounds is a major problem for environmental remedial activities, hazardous waste generators, and the disposal industry. This paper reports that many of these wastes can be effectively treated utilizing soil washing technology. CWM has been developing soil washing technology over the past few years, with extensive work being conducted on the bench scale. These studies have demonstrated consistently high removal efficiencies (95-99%) for a wide variety of PCB and petroleum hydrocarbon contaminated waste. Recently, a comprehensive study examining the removal of both organic and inorganic contaminants from two different types of surrogate soil matrices was completed. In addition to establishing the range of contaminants that can be removed from soil, a method for surfactant/water separation was evaluated. For example, using a thermal phase separation method, approximately 90% of the surfactant could be recovered from the water.

  11. Study of electroflotation method for treatment of wastewater from washing soil contaminated by heavy metals

    Directory of Open Access Journals (Sweden)

    Izabel de Oliveira da Mota

    2015-04-01

    Full Text Available The electroflotation method (EFM) for treatment of synthetic solutions simulating wastewater from washing soil contaminated by drilling fluids from oil wells was investigated in this paper. Experiments were carried out to examine the effects of the operating conditions on the removal of lead, barium and zinc from solutions containing 15 mg dm−3 of each metal, representative of the wastewater generated by soil washing in this treatment. The experimental results showed that it is possible to remove these heavy metals by electrocoagulation/electroflotation (ECF), attaining 97% removal using stainless steel mesh electrodes with a power consumption of 14 kWh m−3. The optimal treatment conditions were sodium dodecyl sulfate (SDS) in a molar ratio of 3:1, current density around 350 A m−2, ionic strength 3.2 × 10−3 M, pH = 10.0 and 20 min of ECF. This study indicates that the proposed method is adequate to simultaneously treat the common heavy metals found in drilling fluids from oil wells.
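The reported specific energy consumption (kWh per m³ of treated water) follows directly from cell voltage, current, treatment time and treated volume. As a hedged illustration, the cell voltage, electrode area and treated volume below are assumed values, not figures from the study; they merely show how a value like 14 kWh m−3 arises from the formula E = U·I·t/V:

```python
def specific_energy_kwh_per_m3(voltage_v, current_a, time_min, volume_l):
    """Specific energy E = U * I * t / V, expressed in kWh per m^3 of treated water."""
    energy_kwh = voltage_v * current_a * (time_min / 60.0) / 1000.0
    volume_m3 = volume_l / 1000.0
    return energy_kwh / volume_m3

# Hypothetical cell: 12 V, 3.5 A (350 A/m^2 over an assumed 0.01 m^2 electrode),
# 20 min of ECF, 1 L of solution treated
print(round(specific_energy_kwh_per_m3(12, 3.5, 20, 1.0), 1))
```

With these assumed values the formula happens to give 14 kWh m−3; the actual cell voltage in the study may differ.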

  12. Comparison of Different Methods of Denim Stone Washing by Pumice Stone, Acid Cellulases and Neutral Cellulases

    Directory of Open Access Journals (Sweden)

    M. Montazer

    2007-10-01

    Full Text Available Denim is a casual garment normally used by young people and extremely influential in shaping the fashion industry. Among various garments, denim garments are particularly subject to innovation. This work compares different methods of stone washing using pumice stones, acid cellulases and neutral cellulases, alone or in combination. The effects of the different processing conditions on the garment are compared and reported. Color differences of the samples were probed by reflective colorimetry on the front side, the back side and the white pocket of the garment. The abrasion resistance, tensile strength and crease recovery angle of the samples are also reported. XRD spectra were used to calculate the degree of crystallinity of selected samples, and the fiber surfaces of some treated samples were observed by SEM. The results indicate that treating denim with pumice stone at a weight equal to that of the garment causes only small color differences. The addition of cellulases to the washing, however, accelerates color fading. Lower staining was also observed on the white pocket when the garment was treated with cellulases; the neutral cellulases increase the garment fading and decrease the staining on the white pocket. It is also observed that pumice stone combined with cellulases damages the fabric surface, although the damage is minimal.
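Colorimetric color differences of the kind measured above are commonly summarized with the CIE76 ΔE*ab metric, the Euclidean distance between two CIELAB readings. A minimal sketch; the L*a*b* triples below are purely illustrative, not measurements from the study:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings: untreated vs. stone-washed denim front side
untreated = (45.0, 2.0, -18.0)
washed = (52.0, 1.0, -14.0)
print(round(delta_e_ab(untreated, washed), 2))
```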

  13. Parallelization of the ROOT Machine Learning Methods

    CERN Document Server

    Vakilipourtakalou, Pourya

    2016-01-01

    Today, computation is an inseparable part of scientific research, especially in particle physics, where classification problems such as discriminating signals from backgrounds originating from particle collisions are common. Monte Carlo simulations can be used to generate a known data set of signals and backgrounds based on theoretical physics. The aim of machine learning is to train algorithms on a known data set and then apply the trained algorithms to unknown data sets. The most common framework for data analysis in particle physics is ROOT, to which a Toolkit for Multivariate Data Analysis (TMVA) has been added in order to provide machine learning methods. The major focus of this report is the parallelization of some TMVA methods, especially Cross-Validation and BDT.
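Cross-validation, one of the parallelized TMVA methods mentioned, partitions a data set into k folds and repeatedly trains on k−1 folds while testing on the remaining one. A minimal stdlib-only sketch of the fold generation, independent of ROOT/TMVA:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    # Distribute any remainder across the first folds so sizes differ by at most 1
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(k_fold_splits(10, 5))
print(len(folds), folds[0][1])
```

Each fold's train/test pair is independent of the others, which is what makes this method a natural target for parallelization.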

  14. Methods of control the machining process

    Directory of Open Access Journals (Sweden)

    Yu.V. Petrakov

    2017-12-01

    Full Text Available This paper presents control methods differentiated by when the information they use becomes available: a priori, a posteriori and current. When a priori information is used, the cutting mode is determined by simulating the process of cutting the allowance, with the shapes of the workpiece and the part represented as wireframes. Control based on current information calls for an adaptive control system and modernization of the CNC machine, whose input is computed using established optimization software. For control based on a posteriori information, a method is proposed for correcting the shape-generating trajectory on the second pass, using measurements of the workpiece surface formed by the first pass. Programs were developed that automatically produce the corrected file for machining.

  15. Machine Learning Methods for Production Cases Analysis

    Science.gov (United States)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of the internal production network data were used for training and testing the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall and accuracy.
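The standard statistical metrics named above are simple functions of the confusion-matrix counts. A minimal sketch with illustrative counts (not values from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)          # fraction of positive predictions that are correct
    recall = tp / (tp + fn)             # fraction of actual positives that are found
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

p, r, a = classification_metrics(tp=40, fp=10, tn=45, fn=5)
print(p, r, a)
```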

  16. Finding protein sites using machine learning methods

    Directory of Open Access Journals (Sweden)

    Jaime Leonardo Bobadilla Molina

    2003-07-01

    Full Text Available The increasing number of protein three-dimensional (3D) structures determined by x-ray and NMR technologies, as well as structures predicted by computational methods, results in the need for automated methods to provide initial annotations. We have developed a new method for recognizing sites in three-dimensional protein structures. Our method is based on a previously reported algorithm for creating descriptions of protein microenvironments using physical and chemical properties at multiple levels of detail. The recognition method takes three inputs: (1) a set of control sites that share some structural or functional role; (2) a set of control nonsites that lack this role; and (3) a single query site. A support vector machine classifier is built using feature vectors in which each component represents a property in a given volume. Validation against an independent test set shows that this recognition approach has high sensitivity and specificity. We also describe the results of scanning four calcium binding proteins (with the calcium removed) using a three-dimensional grid of probe points at 1.25 angstrom spacing. The system finds the sites in the proteins, giving points at or near the binding sites. Our results show that property-based descriptions along with support vector machines can be used for recognizing protein sites in unannotated structures.
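The microenvironment descriptors described above can be pictured as counts of property labels within concentric volumes around a probe point, concatenated into one feature vector per site. A simplified stdlib-only sketch; the shell radii and property labels are illustrative assumptions, not the actual descriptors used in the paper:

```python
import math

def microenvironment_features(atoms, center, radii=(2.5, 5.0, 7.5)):
    """Build a feature vector of per-property atom counts in concentric shells.

    `atoms` is a list of (x, y, z, property_label) tuples; the returned vector
    has one count per (shell, property) pair, mimicking property descriptions
    at multiple levels of detail around the probe point `center`.
    """
    props = sorted({a[3] for a in atoms})
    vec = [0] * (len(radii) * len(props))
    for x, y, z, p in atoms:
        d = math.dist((x, y, z), center)
        for i, r in enumerate(radii):
            if d <= r:  # assign the atom to the innermost shell that contains it
                vec[i * len(props) + props.index(p)] += 1
                break
    return vec

atoms = [(1, 0, 0, "charged"), (3, 0, 0, "hydrophobic"), (6, 0, 0, "charged")]
print(microenvironment_features(atoms, (0, 0, 0)))
```

Vectors built this way for sites and nonsites would then be fed to an SVM classifier, as the abstract describes.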

  17. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installing the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very small, the measurement method should be non-contact. By integrating an image re-construction method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to re-construct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
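The error identification step described above amounts to matching each theoretical cutting point to the corresponding point on the reconstructed actual contour and recording the offset used to correct the NC program. A simplified 2D sketch, in which plain nearest-point matching stands in for the paper's moving matching window, and the coordinates are illustrative:

```python
import math

def contour_errors(theoretical, actual):
    """For each theoretical cutting point, find the nearest actual contour point
    and return the (dx, dy) machining error used to correct the NC program."""
    errors = []
    for tx, ty in theoretical:
        ax, ay = min(actual, key=lambda p: math.hypot(p[0] - tx, p[1] - ty))
        errors.append((ax - tx, ay - ty))
    return errors

theo = [(0.0, 0.0), (1.0, 0.0)]
act = [(0.02, 0.01), (1.03, -0.01)]
print(contour_errors(theo, act))
```

Subtracting each error from the programmed coordinates yields the error-corrected NC trajectory.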

  18. Effect of centrifugation and washing on adipose graft viability: a new method to improve graft efficiency.

    Science.gov (United States)

    Hoareau, Laurence; Bencharif, Karima; Girard, Anne-Claire; Gence, Lydie; Delarue, Pierre; Hulard, Olivier; Festy, Franck; Roche, Regis

    2013-05-01

    Adipose tissue grafting is a promising method in the field of surgical filling. We studied the effect of centrifugation on fat grafts, and we propose an optimised protocol for improving adipose tissue viability. Adipose tissue was subjected to different centrifugations, and the volumes of interstitial liquid and oil released were measured to choose the optimal condition. Tissue from this condition was then compared to tissue obtained by two traditional techniques, strong centrifugation (commonly 3 min at 3000 rpm/900 g) and decantation, by injection into immunodeficient mice. The cytokine interleukin-6 (IL-6) and the chemokine monocyte chemotactic protein-1 (MCP-1) were assayed 24 h post-injection, and after 1 month of grafting the state of the lipografts was evaluated through macroscopic and histological analysis, with measurement of the oil gap area. Strong centrifugation (900 g, 1800 g) is deleterious for adipose tissue because it leads to up to threefold more adipocyte death compared to low centrifugation (100 g, 400 g). In addition, mice injected with strongly centrifuged and non-centrifuged adipose tissue have higher levels of blood IL-6 and MCP-1 compared to those grafted with softly centrifuged fat. Moreover, extensive lipid vacuoles were detectable on histological sections of the non-centrifuged lipografts, whereas lipografts from soft centrifugation contain a higher amount of connective tissue containing collagen fibres. It is necessary to wash and centrifuge adipose tissue before reinjection in order to remove infiltration liquid and associated toxic molecules, which in the long term are deleterious for the graft. However, strong centrifugation is not recommended since it very quickly leads to greater adipocyte death. Thus, soft centrifugation (400 g/1 min), preceded by washing, seems to be the most appropriate protocol for the reinjection of adipose tissue. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons.

  19. Machine learning methods for metabolic pathway prediction

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2010-01-01

    Full Text Available Abstract Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations.
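The F-measure quoted above is the harmonic mean of precision and recall, so a predictor whose precision and recall are both near 0.786 yields F ≈ 0.786:

```python
def f_measure(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Balanced precision and recall give an F-measure equal to that common value
print(round(f_measure(0.786, 0.786), 3))
```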

  20. Machine learning methods for metabolic pathway prediction

    Science.gov (United States)

    2010-01-01

    Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations. PMID:20064214

  1. Comparison of three different fat graft preparation methods: gravity separation, centrifugation, and simultaneous washing with filtration in a closed system.

    Science.gov (United States)

    Zhu, Min; Cohen, Steven R; Hicok, Kevin C; Shanahan, Rob K; Strem, Brian M; Yu, Johnson C; Arm, Douglas M; Fraser, John K

    2013-04-01

    Successful long-term volume retention of an autologous fat graft is problematic. The presence of contaminating cells, tumescent fluid, and free lipid in the graft contributes to disparate outcomes. Better preparation methods for the fat graft before transplantation may significantly improve results. Subcutaneous fat from 22 donors was divided and processed using various graft preparation methods: (1) no manipulation control, (2) gravity separation, (3) Coleman centrifugation, and (4) simultaneous washing with filtration using a commercially available system (Puregraft; Cytori Therapeutics, Inc., San Diego, Calif.). Fat grafts from various preparation methods were examined for free lipid, aqueous liquid, viable tissue, and blood cell content. Adipose tissue viability was determined by measuring glycerol release after agonist induction of lipolysis. All test graft preparation methods exhibited significantly less aqueous fluid and blood cell content compared with the control. Grafts prepared by washing with filtration exhibited significantly reduced blood cell and free lipid content, with significantly greater adipose tissue viability than other methods. Washing with filtration within a closed system produces a fat graft with higher tissue viability and lower presence of contaminants compared with grafts prepared by alternate methods.

  2. Methods and systems for micro machines

    Energy Technology Data Exchange (ETDEWEB)

    Stalford, Harold L.

    2018-03-06

    A micro machine may be sized within or below the micrometer domain. The micro machine may include a micro actuator and a micro shaft coupled to the micro actuator. The micro shaft is operable to be driven by the micro actuator. A tool is coupled to the micro shaft and is operable to perform work in response to at least motion of the micro shaft.

  3. A Method for Design of Modular Reconfigurable Machine Tools

    Directory of Open Access Journals (Sweden)

    Zhengyi Xu

    2017-02-01

    Full Text Available Presented in this paper is a method for the design of modular reconfigurable machine tools (MRMTs). An MRMT is capable of using a minimal number of modules, through reconfiguration, to perform the required machining tasks for a family of parts. The proposed method consists of three steps: module identification, module determination, and layout synthesis. In the first step, module components are collected from a family of general-purpose machines to establish a module library. In the second step, for a given family of parts to be machined, a set of needed modules is selected from the module library to construct the desired reconfigurable machine tool. In the third step, the final machine layout is decided through evaluation against a number of performance indices. Based on this method, a software package has been developed that can design an MRMT for a given part family.

  4. Recovery of kaolinite from tailings of Zonouz kaolin-washing plant by flotation-flocculation method

    OpenAIRE

    Kianoush Barani; Masoud Kalantari

    2018-01-01

    The traditional processing of kaolin is achieved by dispersion of the mined ore and classification by multistage hydrocyclone plants. The inefficiencies inherent to cyclones produce a middling product that is commonly disposed back into the quarry. In this research, the recovery of kaolinite from tailings of the Zonouz kaolin-washing plant, located in Iran, was investigated by flotation and flotation-flocculation. Flotation experiments show that the flotation of kaolinite from the tailings i...

  5. Methods and apparatus for controlling rotary machines

    Science.gov (United States)

    Bagepalli, Bharat Sampathkumaran [Niskayuna, NY; Jansen, Patrick Lee [Scotia, NY; Barnes, Gary R [Delanson, NY; Fric, Thomas Frank [Greer, SC; Lyons, James Patrick Francis [Niskayuna, NY; Pierce, Kirk Gee [Simpsonville, SC; Holley, William Edwin [Greer, SC; Barbu, Corneliu [Guilderland, NY

    2009-09-01

    A control system for a rotary machine is provided. The rotary machine has at least one rotating member and at least one substantially stationary member positioned such that a clearance gap is defined between a portion of the rotating member and a portion of the substantially stationary member. The control system includes at least one clearance gap dimension measurement apparatus and at least one clearance gap adjustment assembly. The adjustment assembly is coupled in electronic data communication with the measurement apparatus. The control system is configured to process a clearance gap dimension signal and modulate the clearance gap dimension.
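The gap modulation described in the patent abstract can be pictured as a feedback loop: the measurement apparatus reports the clearance gap dimension and the adjustment assembly applies a correction toward the target. A minimal proportional-control sketch; the gain and gap values are illustrative, not taken from the patent:

```python
def adjust_clearance_gap(measured_gap, target_gap, gain=0.5):
    """One step of a proportional controller: return the correction that
    drives the measured clearance gap toward its target dimension."""
    return gain * (target_gap - measured_gap)

# Illustrative loop: a gap of 1.8 mm converges toward the 1.0 mm target
gap = 1.8
for _ in range(5):
    gap += adjust_clearance_gap(gap, target_gap=1.0)
print(round(gap, 3))
```

Each iteration halves the remaining error, so the gap settles close to the target after a few cycles.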

  6. Application of machine learning methods in bioinformatics

    Science.gov (United States)

    Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen

    2018-05-01

    With the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data [1]. Bioinformatics is an interdisciplinary field covering the acquisition, management, analysis, interpretation and application of biological information; it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets [2]. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.

  7. LCA of a washing machine

    DEFF Research Database (Denmark)

    Erichsen, Hanne K. Linnet

    1997-01-01

    Task 2.1 aims to generate information on the present situation for electrical motors through life cycle assessment (LCA). When assessing the environmental burdens throughout the life cycle of the motor, it is necessary to look at the life cycle of the product in which the motor is used...

  8. Comparative characterization of the quaternary deposits of Northwest Peninsular Malaysia by wash boring and CPTU site investigation methods

    Science.gov (United States)

    Selamat, Mohamad R.; Rosli, Ros N.; Ramli, Muhd H.; Azmi, Mastura; Kumaravelu, Prakash; Govindasamy, Darvintharen

    2017-10-01

    A site investigation by the wash boring method was carried out on the grounds of the Engineering Campus, Universiti Sains Malaysia, in Northwest Peninsular Malaysia. The subsurface soils of the region are known to comprise quaternary deposits including the prominent marine clays of the Gula formation. The conventional Standard Penetration Test, or SPT, was carried out at every 1.5 m of depth. The 18 disturbed samples collected were tested for grain size distribution, Atterberg limits, and specific gravity, and categorized according to the Unified Soil Classification System. The results indicate low SPT numbers, or SPTN, i.e. SPTN ≤ 8, for depths no deeper than 12 m. Samples from shallower than 12 m were mostly classified as low plasticity clay, or CL, representing the soft marine clay, while the underlying materials were mostly sands, namely poorly graded sand, or SP, silty sand, or SM, clayey sand, or SC, and well graded sand, or SW. Another site investigation by the CPTU method was also carried out 5 m from the wash boring position, penetrating the entire soft clay stratum and thus giving a second characterization of the marine clay in addition to the one given by the first method. The CPTU results generally gave SPTN ≤ 2 for depths less than 8 m, 2 ≤ SPTN ≤ 4 for depths between 8 and 12 m, and SPTN = 23 at 12.5 m, where the CPTU test terminated. The CPTU method classified the soft materials as sensitive fine grains, clays, and clayey silts to silty clays. Thus the CPTU results did not agree very well with the conventional wash boring SPT results in terms of the SPTN obtained. Nevertheless, both methods equally identified the prominent presence of the soft marine clay stratum in the top 12.5 m and the underlying strong sandy bed with higher bearing capacity values.

  9. Machine Learning Methods to Predict Diabetes Complications.

    Science.gov (United States)

    Dagliati, Arianna; Marini, Simone; Sacchi, Lucia; Cogni, Giulia; Teliti, Marsida; Tibollo, Valentina; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo

    2018-03-01

    One of the areas where Artificial Intelligence is having the greatest impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. The pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After dealing with missing data by means of random forest (RF) and applying suitable strategies to handle class imbalance, we used logistic regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy at different time scenarios: 3, 5, and 7 years from the first visit to the Hospital Center for Diabetes (not from diagnosis). The variables considered are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. The final models, tailored to each complication, provided an accuracy of up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models that are easy to translate to clinical practice.
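Logistic regression, the final classifier used in the pipeline above, models the probability of complication onset as a sigmoid of a weighted sum of the risk variables. A minimal stdlib-only sketch trained by gradient descent on toy data; the single "risk score" feature is hypothetical, not one of the MOSAIC variables:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression (weights + bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid gives P(complication)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify as positive when the predicted probability reaches 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Toy data: onset becomes likely as the hypothetical risk score grows
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])
```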

  10. Studying depression using imaging and machine learning methods

    Directory of Open Access Journals (Sweden)

    Meenal J. Patel

    2016-01-01

    Full Text Available Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.

  11. Studying depression using imaging and machine learning methods.

    Science.gov (United States)

    Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J

    2016-01-01

    Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.

  12. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  13. In silico machine learning methods in drug development.

    Science.gov (United States)

    Dobchev, Dimitar A; Pillai, Girinath G; Karelson, Mati

    2014-01-01

    Machine learning (ML) computational methods for predicting compounds with pharmacological activity and specific pharmacodynamic and ADMET (absorption, distribution, metabolism, excretion and toxicity) properties are being increasingly applied in drug discovery and evaluation. Recently, machine learning techniques such as artificial neural networks, support vector machines and genetic programming have been explored for predicting inhibitors, antagonists, blockers, agonists, activators and substrates of proteins related to specific therapeutic targets. These methods are particularly useful for screening compound libraries with diverse chemical structures and "noisy" high-dimensional data to complement QSAR methods, and, where no receptor 3D structure is available, to complement structure-based methods. A variety of studies have demonstrated the potential of machine learning methods for predicting compounds as potential drug candidates. The present review gives an overview of the strategies and current progress in using machine learning methods for drug design, and of the potential of the respective model development tools. We also review a number of applications of machine learning algorithms to common classes of diseases.

  14. Ensemble Machine Learning Methods and Applications

    CERN Document Server

    Ma, Yunqian

    2012-01-01

    It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...

  15. MSD Recombination Method in Statistical Machine Translation

    Science.gov (United States)

    Gros, Jerneja Žganec

    2008-11-01

    Freely available tools and language resources were used to build the VoiceTRAN statistical machine translation (SMT) system. Various configuration variations of the system are presented and evaluated. The VoiceTRAN SMT system outperformed the baseline conventional rule-based MT system in all English-Slovenian in-domain test setups. To further increase the generalization capability of the translation model for lower-coverage out-of-domain test sentences, an "MSD-recombination" approach was proposed. This approach not only allows a better exploitation of conventional translation models, but also performs well in the more demanding translation direction; that is, into a highly inflectional language. Using this approach in the out-of-domain setup of the English-Slovenian JRC-ACQUIS task, we have achieved significant improvements in translation quality.

  16. A Review of Design Optimization Methods for Electrical Machines

    Directory of Open Access Journals (Sweden)

    Gang Lei

    2017-11-01

    Full Text Available Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physically and technologically but also environmentally. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model based and multi-level optimization methods. In addition, two promising and challenging topics in both academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design for six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including a smart design optimization method for future intelligent design and production of electrical machines.

  17. Microbiological evaluation of ready-to-eat iceberg lettuce during shelf-life and effectiveness of household washing methods

    Directory of Open Access Journals (Sweden)

    Daniela Bencardino

    2018-04-01

    Full Text Available The aim of this study was to assess the microbiological quality of ready-to-eat (RTE) iceberg lettuce. Our investigation was based on the consumption tendency of university students, considered a target market for this product. A total of 78 RTE samples were collected from chain supermarkets and analysed for the enumeration of aerobic mesophilic count (AMC) and Escherichia coli and for the detection of Salmonella spp. and Listeria monocytogenes. All samples were negative for the presence of pathogens. The mean value of AMC at the beginning, in the middle and after the expiration date was 6.88, 8.51 and 8.72 log CFU g-1, respectively. The same investigation was performed on 12 samples of fresh iceberg lettuce. No pathogens were found, and the mean value of AMC was lower than in the RTE category (5.73 log CFU g-1; P<0.05). The effectiveness of 5 washing methods was determined on 15 samples of both fresh and RTE iceberg lettuce. Samples were washed for 15’ and 30’ in tap water (500 mL), tap water with NaCl (4 g/500 mL), tap water with bicarbonate (8 g/500 mL), tap water with vinegar (10 mL/500 mL) and tap water with a chlorine-based disinfectant (10 mL/500 mL). A significant bacterial load reduction was recorded for vinegar and disinfectant after 30’ and 15’, respectively. Overall, these results showed that RTE iceberg lettuce is more contaminated than the fresh product. Also, consumption within the first few days after packaging, and washing with disinfectants, reduce health risks for consumers.

  18. Method of treating the waste liquid of a washing containing a radioactive substance

    International Nuclear Information System (INIS)

    Sawaguchi, Yusuke; Tsuyuki, Takashi; Kaneko, Masato; Sato, Yasuhiko; Yamaguchi, Takashi.

    1975-01-01

    Object: To separate waste liquid resulting from washing and which contains a radioactive substance and surface active agent into high purity water and a solid waste substance containing a small quantity of surface active agent. Structure: To waste liquid from a waste liquid tank is added a pH adjusting agent for adjusting the pH to 5.5, and the resultant liquid is sent to an agglomeration reaction tank, in which an inorganic agglomerating agent is added to the waste liquid to cause a major proportion of the radioactive substance and surface active agent to form flocks produced through agglomeration. Then, the waste liquid is sent from the agglomeration reaction tank to a froth separation tank, to which air is supplied through a perforated plate to cause frothing. The over-flowing liquid is de-frothed, and then the insoluble matter is separated as sludge, followed by hydroextraction and drying for solidification. The treated liquid extracted from a froth separation tank is sent to an agglomerating agent recovery tank for separation of the agglomeration agent, and then the residual surface active agent is removed by adsorption in an active carbon adsorption tower, followed by concentration by evaporation in an evaporating can. The concentrated liquid is extracted and then solidified with cement or asphalt. (Kamimura, M.)

  19. Recovery of kaolinite from tailings of Zonouz kaolin-washing plant by flotation-flocculation method

    Directory of Open Access Journals (Sweden)

    Kianoush Barani

    2018-04-01

    Full Text Available The traditional processing of kaolin is achieved by dispersion of the mined ore and classification by multistage hydrocyclone plants. The inefficiencies inherent to cyclones produce a middling product that is commonly disposed back into the quarry. In this research, recovery of kaolinite from tailings of the Zonouz kaolin washing plant, located in Iran, was investigated by flotation and flotation-flocculation. Flotation experiments show that the flotation of kaolinite from the tailings is better in an acidic than in an alkaline medium containing cationic collectors. Flotation under acidic conditions, however, causes problems such as equipment corrosion at industrial scale. As a result, the cationic flotation of kaolinite was enhanced by addition of polyacrylamide as a flocculant. The results showed that flocculation by polyacrylamide improved flotation of kaolinite within a range of pH. With 300 g/t dodecylamine, 500 g/t aluminum chloride, 50 g/t pine oil (frother) and 15 g/t polyacrylamide, at pH = 7 and without de-sliming, a product with 37.19% Al2O3, 54.19% SiO2 and 34.43% mass recovery was achieved. Keywords: Kaolinite, Flotation, Flocculation, Cetylpyridinium chloride, Dodecylamine, Aluminum chloride, Polyacrylamide

  20. Efficient forced vibration reanalysis method for rotating electric machines

    Science.gov (United States)

    Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo

    2015-01-01

    Rotating electric machines are subject to forced vibration by magnetic force excitation with wide-band frequency spectra that depend on the operating conditions. Therefore, when designing electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for electric machines. The method enables efficient re-analysis of the vibration response of electric machines at various operating conditions without the need to re-compute the harmonic response by finite element analyses. Theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced vibration response of an interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by conventional harmonic response analysis by FEA. The proposed method is then applied to a spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to spin-up test conditions, and the measured dominant frequency peaks in the frequency response are well captured by the proposed approach.

  1. Shaft Boring Machine: A method of mechanized vertical shaft excavation

    International Nuclear Information System (INIS)

    Goodell, T.M.

    1991-01-01

    The Shaft Boring Machine (SBM) is a vertical application of proven rock boring technology. The machine applies a rotating cutter wheel with disk cutters for shaft excavation. The wheel is thrust against the rock by hydraulic cylinders and slews about the shaft bottom as it rotates. Cuttings are removed by a clam shell device similar to conventional shaft mucking, and the muck is hoisted by buckets. The entire machine moves down (and up) the shaft through the use of a system of grippers thrust against the shaft wall. These grippers and their associated cylinders also provide the means to maintain verticality and stability of the machine. The machine applies the same principles as tunnel boring machines but in a vertical mode. Other shaft construction activities such as rock bolting, utility installation and shaft concrete lining can be accomplished concurrently with shaft boring. The method is comparable in cost to conventional sinking to a depth of about 460 meters (1500 feet), beyond which the SBM has a clear cost advantage. The SBM has a further advantage in productivity in that it can excavate significantly faster than drill and blast methods.

  2. Manufacturing Methods for Cutting, Machining and Drilling Composites. Volume 1. Composites Machining Handbook

    Science.gov (United States)

    1978-08-01

    [Scanned-page residue from the handbook: a drill-point geometry drawing (118°±2° point angle, optional .0005 in./in. back taper, lip and land-width tolerances), borescope and dye-penetrant inspection requirements, and a four-phase program outline: Phase I Cutting, Phase II Drilling, Phase III Machining, Phase IV Nondestructive Evaluation.]

  3. Machine cost analysis using the traditional machine-rate method and ChargeOut!

    Science.gov (United States)

    E. M. (Ted) Bilek

    2009-01-01

    Forestry operations require ever more use of expensive capital equipment. Mechanization is frequently necessary to perform cost-effective and safe operations. Increased capital should mean more sophisticated capital costing methodologies. However the machine rate method, which is the costing methodology most frequently used, dates back to 1942. CHARGEOUT!, a recently...

  4. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  5. Quantifying the effect of hand wash duration, soap use, ground beef debris, and drying methods on the removal of Enterobacter aerogenes on hands.

    Science.gov (United States)

    Jensen, Dane A; Danyluk, Michelle D; Harris, Linda J; Schaffner, Donald W

    2015-04-01

    Hand washing is recognized as a crucial step in preventing foodborne disease transmission by mitigating cross-contamination among hands, surfaces, and foods. This research was undertaken to establish the importance of several key factors (soap, soil, time, and drying method) in reducing microorganisms during hand washing. A nonpathogenic nalidixic acid-resistant Enterobacter aerogenes surrogate for Salmonella was used to assess the efficacy of using soap or no soap for 5 or 20 s on hands with or without ground beef debris and drying with paper towel or air. Each experiment consisted of 20 replicates, each from a different individual with ∼6 log CFU/ml E. aerogenes on their hands. A reduction of 1.0 ± 0.4 and 1.7 ± 0.8 log CFU of E. aerogenes was observed for a 5-s wash with no soap and a 20-s wash with soap, respectively. When there was no debris on the hands, there was no significant difference between washing with and without soap for 20 s (P > 0.05). Likewise, there was no significant difference in the reductions achieved when washing without soap, whether or not debris was on the hands (P > 0.05). A significantly greater reduction (P < 0.05) was achieved with soap when there was ground beef debris on the hands. The greatest difference (1.1 log CFU greater average reduction) in effectiveness occurred when ground beef debris was on the hands and a 20-s wash with water was compared with a 20-s wash with soap. Significantly greater transfer to the drying towel (P < 0.05; >4.0 log CFU per towel) occurred when hands were highly contaminated. Our results support future quantitative microbial risk assessments needed to effectively manage risks of foodborne illness in which food workers' hands are a primary cause.
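The reported reductions are differences of log10 CFU counts; the arithmetic can be sketched as follows (the 6.0, 1.0 and 1.7 figures are the means reported in the abstract; the helper function and variable names are illustrative only):

```python
import math

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction between pre- and post-wash counts (CFU/ml)."""
    return math.log10(cfu_before / cfu_after)

# ~6 log CFU/ml on hands before washing, as in the study design.
before = 10 ** 6.0

# Post-wash counts implied by the reported mean reductions.
after_5s_no_soap = before / 10 ** 1.0   # 1.0-log reduction
after_20s_soap   = before / 10 ** 1.7   # 1.7-log reduction

print(round(log_reduction(before, after_5s_no_soap), 1))  # 1.0
print(round(log_reduction(before, after_20s_soap), 1))    # 1.7
```

Note that a 1.7-log reduction corresponds to removing about 98% of the organisms, which is why hand washing mitigates rather than eliminates cross-contamination.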

  6. A defect-driven diagnostic method for machine tool spindles.

    Science.gov (United States)

    Vogl, Gregory W; Donmez, M Alkan

    2015-01-01

    Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition.

  7. Radioisotope method potentialities in machine reliability and durability enhancement

    International Nuclear Information System (INIS)

    Postnikov, V.I.

    1975-01-01

    The development of a surface activation method is reviewed with regard to wear of machine parts. Examples demonstrating the highly promising aspects and practical application of the method are cited. The use of high-sensitivity instruments and variation of activation depth from 10 µm to 0.5 mm allows the investigations to be performed at a sensitivity of 0.05 µm and linear wear of machine parts to be estimated. Standard diagrams are presented for measuring the wear of different machine parts by means of surface activation. Investigations performed at several Soviet technological institutes afford a set of dependences which characterize the distribution of radioactive isotopes in depth under different conditions of activation of diverse metals and alloys and permit studying the wear of any metal.

  8. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
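As a hedged illustration of the M-step in this alternation (parameter MLE given state estimates), consider a toy scalar system x[k+1] = a·x[k] + w, for which the MLE of a given the states reduces to ordinary least squares. The real method estimates the states with an EKF on a full synchronous machine model; none of that detail is reproduced here:

```python
import random

random.seed(0)

# Toy state dynamics x[k+1] = a*x[k] + w, standing in for the
# (much richer) synchronous machine model in the paper.
a_true = 0.8
x = [1.0]
for _ in range(500):
    x.append(a_true * x[-1] + random.gauss(0, 0.05))

def m_step(states):
    """MLE of 'a' given state estimates: ordinary least squares."""
    num = sum(states[k] * states[k + 1] for k in range(len(states) - 1))
    den = sum(s * s for s in states[:-1])
    return num / den

a_hat = m_step(x)
print(round(a_hat, 2))  # close to a_true = 0.8
```

In the full EM method this M-step alternates with an E-step (EKF/smoother state estimation) until the parameter estimates stop changing.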

  9. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...

  10. Evaluation of man-machine systems - methods and problems

    International Nuclear Information System (INIS)

    1985-01-01

    The symposium gives a survey of the methods of evaluation which permit as quantitative an assessment as possible of the collaboration between men and machines. This complex of problems is of great current significance in many areas of application. The systems to be evaluated are aircraft, land vehicles and watercraft as well as process control systems. (orig./GL) [de]

  11. Design improvement of automated gallon washing machine to minimize musculoskeletal disorders (MSDs) in CV Barokah Abadi using ergonomic function deployment (EFD) approach

    Science.gov (United States)

    Fakhriza, Z.; Rahayu, M.; Iqbal, M.

    2017-12-01

    In the production activity of Bottled Drinking Water (AMDK) at CV Barokah Abadi there is a gallon washing station. The work station involves three stages of activity: washing and rinsing the outside of the gallon, spraying the inside of the gallon, and rubbing the inside of the gallon, which is done in a separate place. Distribution of Nordic Body Map (NBM) questionnaires yielded employee complaint data at the gallon washing station: workers complained of pain in the right upper arm, right forearm and right wrist (88% each) and of pain in the waist and the right hand (81% each). An ergonomic gallon washer is one way to minimize the risk of MSDs. The design begins with an ergonomic evaluation of the existing conditions and the concept of the initial design of the gallon washer. The evaluation informs design improvements through Ergonomic Function Deployment (EFD), so that the product concept conforms to the ECSHE principles (Effective, Comfortable, Safe, Healthy and Efficient). The improved design can minimize the risk of MSDs, as seen from the worker’s posture while using the ergonomic washer.

  12. Machine learning methods without tears: a primer for ecologists.

    Science.gov (United States)

    Olden, Julian D; Lawler, Joshua J; Poff, N LeRoy

    2008-06-01

    Machine learning methods, a family of statistical techniques with origins in the field of artificial intelligence, are recognized as holding great promise for the advancement of understanding and prediction about ecological phenomena. These modeling techniques are flexible enough to handle complex problems with multiple interacting elements and typically outcompete traditional approaches (e.g., generalized linear models), making them ideal for modeling ecological systems. Despite their inherent advantages, a review of the literature reveals only a modest use of these approaches in ecology as compared to other disciplines. One potential explanation for this lack of interest is that machine learning techniques do not fall neatly into the class of statistical modeling approaches with which most ecologists are familiar. In this paper, we provide an introduction to three machine learning approaches that can be broadly used by ecologists: classification and regression trees, artificial neural networks, and evolutionary computation. For each approach, we provide a brief background to the methodology, give examples of its application in ecology, describe model development and implementation, discuss strengths and weaknesses, explore the availability of statistical software, and provide an illustrative example. Although the ecological application of machine learning approaches has increased, there remains considerable skepticism with respect to the role of these techniques in ecology. Our review encourages a greater understanding of machine learning approaches and promotes their future application and utilization, while also providing a basis from which ecologists can make informed decisions about whether to select or avoid these approaches in their future modeling endeavors.

  13. Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics

    Directory of Open Access Journals (Sweden)

    Vladimir S. Kublanov

    2017-01-01

    Full Text Available The paper presents an analysis of the accuracy of machine learning approaches applied to cardiac activity data. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from arterial hypertension of II-III degree. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with radial basis, decision trees, and naive Bayes classifier. Moreover, different methods of feature extraction were analyzed: statistical, spectral, wavelet, and multifractal. In all, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of searching for an uncorrelated feature set achieved higher accuracy than a feature set based on principal components.
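For intuition on the discriminant-analysis classifier: with equal priors and a shared variance, LDA in one dimension reduces to a nearest-class-mean rule, with the decision boundary at the midpoint of the class means. A minimal sketch (the HRV feature values below are invented, not from the study):

```python
def lda_1d(class0, class1):
    """Equal-prior, shared-variance LDA in 1-D: classify to the nearer
    class mean (decision boundary at the midpoint of the means)."""
    m0 = sum(class0) / len(class0)
    m1 = sum(class1) / len(class1)
    return lambda x: 0 if abs(x - m0) < abs(x - m1) else 1

# Hypothetical 1-D HRV feature: healthy vs. hypertensive groups.
healthy = [62.0, 58.0, 65.0, 60.0]
patients = [41.0, 38.0, 45.0, 40.0]
classify = lda_1d(healthy, patients)

print(classify(59.0))  # 0 (healthy side of the boundary)
print(classify(43.0))  # 1 (patient side)
```

The full multivariate version additionally weights each feature direction by the pooled within-class covariance, which is where the 53-feature comparison in the study comes in.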

  14. Rapid washing of filter paper discs in a solid-phase radioimmunoassay with a constant flow washing device

    International Nuclear Information System (INIS)

    Kemeny, D.M.; West, F.B.

    1982-01-01

    A machine has been developed for the rapid washing of the cellulose filter paper discs that are used in a number of radioimmunoassays. The machine is simple in design, easy to use, and capable of washing 96 filter paper discs simultaneously. Its efficiency is demonstrated by a RAST assay for measuring IgE antibodies to venom. The time taken to wash the discs was reduced 3-fold without loss of sensitivity or reproducibility. (Auth.)

  15. New Cogging Torque Reduction Methods for Permanent Magnet Machine

    Science.gov (United States)

    Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.

    2017-08-01

    Permanent magnet motors (PMs), especially the permanent magnet synchronous motor (PMSM), are expanding into industrial application systems and are widely used in various applications. The key features of this machine include high power and torque density, extended speed range, high efficiency, better dynamic performance and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes new methods for cogging torque reduction. Based on the simulation, combining skewing with the radial pole pairing method and skewing with the axial pole pairing method reduces the cogging torque effect by up to 71.86% and 65.69%, respectively.

  16. Studying depression using imaging and machine learning methods

    OpenAIRE

    Patel, Meenal J.; Khalaf, Alexander; Aizenstein, Howard J.

    2015-01-01

    Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presen...

  17. Nasal Wash Treatment

    Science.gov (United States)

    ... The Centers for Disease Control (CDC) guidelines for preparing water used in a nasal wash are listed below. Many ...

  18. Unsupervised process monitoring and fault diagnosis with machine learning methods

    CERN Document Server

    Aldrich, Chris

    2013-01-01

    This unique text/reference describes in detail the latest advances in unsupervised process monitoring and fault diagnosis with machine learning methods. Abundant case studies throughout the text demonstrate the efficacy of each method in real-world settings. The broad coverage examines such cutting-edge topics as the use of information theory to enhance unsupervised learning in tree-based methods, the extension of kernel methods to multiple kernel learning for feature extraction from data, and the incremental training of multilayer perceptrons to construct deep architectures for enhanced data

  19. Machine Learning and Data Mining Methods in Diabetes Research.

    Science.gov (United States)

    Kavakiotis, Ioannis; Tsave, Olga; Salifoglou, Athanasios; Maglaveras, Nicos; Vlahavas, Ioannis; Chouvarda, Ioanna

    2017-01-01

    The remarkable advances in biotechnology and health sciences have led to a significant production of data, such as high-throughput genetic data and clinical information, generated from large Electronic Health Records (EHRs). To this end, application of machine learning and data mining methods in biosciences is presently, more than ever before, vital and indispensable in efforts to transform intelligently all available information into valuable knowledge. Diabetes mellitus (DM) is defined as a group of metabolic disorders exerting significant pressure on human health worldwide. Extensive research in all aspects of diabetes (diagnosis, etiopathophysiology, therapy, etc.) has led to the generation of huge amounts of data. The aim of the present study is to conduct a systematic review of the applications of machine learning, data mining techniques and tools in the field of diabetes research with respect to a) Prediction and Diagnosis, b) Diabetic Complications, c) Genetic Background and Environment, and d) Health Care and Management, with the first category appearing to be the most popular. A wide range of machine learning algorithms were employed. In general, 85% of those used were characterized by supervised learning approaches and 15% by unsupervised ones, more specifically association rules. Support vector machines (SVM) emerge as the most successful and widely used algorithm. Concerning the type of data, clinical datasets were mainly used. The applications in the selected articles demonstrate the usefulness of extracting valuable knowledge, leading to new hypotheses targeting deeper understanding and further investigation in DM.

  20. Machine learning methods for clinical forms analysis in mental health.

    Science.gov (United States)

    Strauss, John; Peguero, Arturo Martinez; Hirst, Graeme

    2013-01-01

    In preparation for a clinical information system implementation, the Centre for Addiction and Mental Health (CAMH) Clinical Information Transformation project completed multiple preparation steps. An automated process was desired to supplement the onerous task of manual analysis of clinical forms. We used natural language processing (NLP) and machine learning (ML) methods for a series of 266 separate clinical forms. For the investigation, documents were represented by feature vectors. We used four ML algorithms for our examination of the forms: cluster analysis, k-nearest neighbours (kNN), decision trees and support vector machines (SVM). Parameters for each algorithm were optimized. SVM had the best performance with a precision of 64.6%. Though we did not find any method sufficiently accurate for practical use, to our knowledge this approach to forms has not been used previously in mental health.

  1. A Photometric Machine-Learning Method to Infer Stellar Metallicity

    Science.gov (United States)

    Miller, Adam A.

    2015-01-01

    Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few x 10(exp 6) targets; photometric surveys, on the other hand, have detected > 10(exp 9) stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g' ...), the scatter of the machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.

  2. Housing Value Forecasting Based on Machine Learning Methods

    OpenAIRE

    Mu, Jingyi; Wu, Fang; Zhang, Aihua

    2014-01-01

    In the era of big data, many urgent issues in all walks of life can be solved via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether developing...

  3. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and high effectiveness in evaluating the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth of the amount of data in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In this study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’—Extreme Entropy Machine and Extremely Randomized Trees—for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but were also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.
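The defining trick of the 'extremely randomized' family is the split rule: instead of searching for the error-minimizing threshold at each tree node, a threshold is drawn at random, trading per-split optimality for speed and additional de-correlation between trees. A stdlib sketch of that single difference (not the papers' implementations; the data are toy values):

```python
import random

def best_split(xs, ys):
    """Classic tree split: exhaustively pick the threshold that
    minimizes the total squared error of the two child means."""
    best_t, best_err = None, float("inf")
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        err = sum((y - sum(left) / len(left)) ** 2 for y in left)
        err += sum((y - sum(right) / len(right)) ** 2 for y in right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def random_split(xs):
    """Extremely randomized split: draw the threshold uniformly at
    random over the feature's range -- no search at all."""
    return random.uniform(min(xs), max(xs))

# Toy activity data: inactive compounds cluster low, actives high.
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(best_split(xs, ys))  # 3.0 -- separates the two clusters exactly
```

An ensemble then averages many trees grown with `random_split`; the randomness costs accuracy per tree but is recovered by the ensemble, at a fraction of the search cost.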

  4. Housing Value Forecasting Based on Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Jingyi Mu

    2014-01-01

    Full Text Available In the era of big data, many urgent issues in all walks of life can be tackled with big data techniques. Compared with the Internet, economy, industry, and aerospace fields, big data has seen relatively little application in architecture. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether or not to develop real estate in the corresponding regions. In this paper, support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast the home values, and the algorithms are compared according to the predicted results. Although the data set exhibits serious nonlinearity, the experiments show that the SVM and LSSVM methods are superior to PLS in dealing with this nonlinearity. Because SVM solves a quadratic programming problem, the global optimal solution can be found and the best forecasting effect achieved. The paper also compares the computational efficiency of the algorithms according to their computing times.

  5. Data Mining and Machine Learning Methods for Dementia Research.

    Science.gov (United States)

    Li, Rui

    2018-01-01

    Patient data in clinical research often includes large amounts of structured information, such as neuroimaging data, neuropsychological test results, and demographic variables. Given these various sources of information, computerized methods can be a great help to clinicians in discovering hidden patterns in the data. Such methods often employ data mining and machine learning algorithms, lending themselves to use as computer-aided diagnosis (CAD) tools that assist clinicians in making diagnostic decisions. In this chapter, we review state-of-the-art methods used in dementia research and briefly introduce some recently proposed algorithms.

  6. Simplified Hybrid-Secondary Uncluttered Machine And Method

    Science.gov (United States)

    Hsu, John S [Oak Ridge, TN

    2005-05-10

    An electric machine (40, 40') has a stator (43), a rotor (46), and a primary air gap (48), with secondary coils (47c, 47d) separated from the rotor (46) by a secondary air gap (49) so as to induce a slip current in the secondary coils (47c, 47d). The rotor (46, 76) has magnetic brushes (A, B, C, D) or wires (80) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments. A method of providing a slip energy controller is also disclosed.

  7. Hybrid-secondary uncluttered permanent magnet machine and method

    Science.gov (United States)

    Hsu, John S.

    2005-12-20

    An electric machine (40) has a stator (43), a permanent magnet rotor (38) with permanent magnets (39) and a magnetic coupling uncluttered rotor (46) for inducing a slip energy current in secondary coils (47). A dc flux can be produced in the uncluttered rotor when the secondary coils are fed with dc currents. The magnetic coupling uncluttered rotor (46) has magnetic brushes (A, B, C, D) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments and is applicable to the hybrid electric vehicle. A method of providing a slip energy controller is also disclosed.

  8. Effect of washing process on the magnetic properties of Nd-Fe-B nanoparticles prepared by reduction-diffusion method

    Science.gov (United States)

    Wang, Y.; Ahn, J.; Kim, D.; Ren, W. J.; Liu, W.; Zhang, Z. D.; Choi, C. J.

    2017-10-01

    Nd-Fe-B nanoparticles with a particle size below 50 nm and excellent magnetic properties were obtained via a novel route which makes use of both spray drying and reduction-diffusion processes. Uniform Nd-Fe-B particles were formed by the optimization of Ca amount as a reducing agent and additional washing by milling in ethanol media. Especially, we implemented a two-step washing process which contributed to the excellent magnetic properties with high remanence and coercivity. After the removal of CaO by novel washing process, the maximum energy product (BH)max of the particles showed 22.1 MGOe. This value is superior to those reported in reduction-diffusion process. We used Henkel plot to assume the mechanism of magnetic interactions of the Nd-Fe-B nanoparticles.

  9. 141Ce as a tracer to find the efficient washing method for hair in the analysis of rare earth elements

    International Nuclear Information System (INIS)

    Tong Shilu

    1989-01-01

    Various procedures were compared with regard to the removal of 141 Ce exogenously contaminating, as well as endogenously incorporated in, hair. Extraction efficiency of cerium with hot EDTA was the greatest among the four washing procedures. However, no procedure can effectively remove all exogenous cerium without also reducing the endogenous cerium. Among the four procedures evaluated, washing with 0.1 mol/l EDTA (50 deg C, 10 minutes) is recommended before the analysis of rare earth elements in hair because of its efficiency in removing exogenous cerium while causing less loss of endogenous cerium.

  10. MACHINE LEARNING METHODS IN DIGITAL AGRICULTURE: ALGORITHMS AND CASES

    Directory of Open Access Journals (Sweden)

    Aleksandr Vasilyevich Koshkarov

    2018-05-01

    Full Text Available Ensuring food security is a major challenge in many countries. With a growing global population, the issues of improving the efficiency of agriculture have become most relevant. Farmers are looking for new ways to increase yields, and governments of different countries are developing new programs to support agriculture. This contributes to a more active implementation of digital technologies in agriculture, helping farmers to make better decisions, increase yields and take care of the environment. The central point is the collection and analysis of data. In agriculture, data can be collected from different sources and may contain useful patterns that identify potential problems or opportunities. Data should be analyzed using machine learning algorithms to extract useful insights. Such methods of precision farming allow the farmer to monitor individual parts of the field, optimize the consumption of water and chemicals, and identify problems quickly. Purpose: to give an overview of the machine learning algorithms used for data analysis in agriculture. Methodology: an overview of the relevant literature; a survey of farmers. Results: relevant machine learning algorithms for the analysis of agricultural data were identified at various levels: soil analysis (soil assessment, soil classification, soil fertility prediction), weather forecasting (simulation of climate change, temperature and precipitation prediction), and analysis of vegetation (weed identification, vegetation classification, plant disease identification, crop forecasting). Practical implications: agriculture, crop production.

  11. Machine Learning-Empowered Biometric Methods for Biomedicine Applications

    Directory of Open Access Journals (Sweden)

    Qingxue Zhang

    2017-07-01

    Full Text Available Nowadays, pervasive computing technologies are paving a promising way for advanced smart health applications. However, a key impediment faced by the wide deployment of these assistive smart devices is the increasing privacy and security issue, such as how to protect access to sensitive patient data in the health record. Focusing on this challenge, biometrics are attracting intense attention in terms of effective user identification to enable confidential health applications. In this paper, we take special interest in two bio-potential-based biometric modalities, electrocardiogram (ECG) and electroencephalogram (EEG), considering that they are both unique to individuals and more reliable than token-based (identity card) and knowledge-based (username/password) methods. After extracting effective features in multiple domains from ECG/EEG signals, several advanced machine learning algorithms are introduced to perform the user identification task, including Neural Network, K-nearest Neighbor, Bagging, Random Forest and AdaBoost. Experimental results on two public ECG and EEG datasets show that ECG is a more robust biometric modality compared to EEG, leveraging a higher signal-to-noise ratio and more distinguishable morphological patterns. Among the different machine learning classifiers, the random forest greatly outperforms the others, with an identification rate as high as 98%. This study demonstrates that a properly selected biometric modality empowered by an effective machine learner has great potential to enable confidential biomedicine applications in the era of smart digital health.
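A hedged sketch of the identification setup the abstract describes: each user is a class and a random forest is trained on feature vectors. The features here are synthetic stand-ins for the paper's multi-domain ECG/EEG features, so the numbers are illustrative only.

```python
# Biometric user identification as multi-class classification: synthetic
# per-user "feature templates" plus noise replace real ECG/EEG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users, per_user, n_feat = 10, 60, 20
centers = rng.normal(size=(n_users, n_feat))          # user-specific templates
X = np.repeat(centers, per_user, axis=0) \
    + 0.5 * rng.normal(size=(n_users * per_user, n_feat))
y = np.repeat(np.arange(n_users), per_user)           # user IDs as labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"identification rate: {clf.score(X_te, y_te):.2%}")
```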

  12. Man-machine communication in reactor control using AI methods

    International Nuclear Information System (INIS)

    Klebau, J.; Lindner, A.; Fiedler, U.

    1987-01-01

    In recent years, interest in process control has especially focused on problems of man-machine communication, owing to its great importance for process performance and user acceptance. Advanced computerized operator aids, e.g. in nuclear power plants, are being developed, as are their man-machine interfaces. At the Central Institute for Nuclear Research in Rossendorf, a computerized operator support system for nuclear power plants is being designed as part of a decentralized process automation system. A similar but simpler system, the Hierarchical Informational System (HIS) at the Rossendorf Research Reactor, works with a menu-based, computer-controlled man-machine interface. In the special case of the disturbance analysis program SAAP-2, which is included in the HIS, the limits of menu techniques are obvious. It therefore seems necessary, and with extended hardware and software possible, to realize a user-controlled natural-language interface using Artificial Intelligence (AI) methods. The draft of such a system is described. It should be able to learn, during a teaching phase, all phrases and their meanings. The system will work on the basis of a self-organizing, associative data structure, which is used to recognize the large number of words employed in language analysis. Error recognition and, where possible, correction are done by means of a distance function on the word set. Language analysis should be carried out with a simplified, word-class-controlled functional analysis. With this interface it is expected that experience in intelligent man-machine communication will be gained to enhance operational safety in the future. (author)

  13. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as for the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future, and has thus led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. The method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties using the EnergyPlus software application. The estimations and predictions obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
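The Extreme Learning Machine itself is simple enough to sketch in a few lines: a random, untrained hidden layer followed by a least-squares solve for the output weights. Below, the thickness/K-value inputs and energy targets are simulated stand-ins, not the paper's EnergyPlus data.

```python
# Minimal ELM regressor: random hidden weights, tanh activations, output
# weights solved by linear least squares (the "extreme learning" step).
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # random hidden features
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(0)
X = rng.uniform(size=(180, 2))                 # [thickness, K-value] stand-ins
y = 5.0 - 3.0 * X[:, 0] + 2.0 * X[:, 1] ** 2   # invented "energy use" response
model = ELM().fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
print(f"held-out RMSE: {rmse:.4f}")
```

Because only the output layer is trained, fitting reduces to one least-squares solve, which is why ELMs train orders of magnitude faster than backpropagated networks.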

  14. DNS Tunneling Detection Method Based on Multilabel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Ahmed Almusawi

    2018-01-01

    Full Text Available DNS tunneling is a method used by malicious users who intend to bypass the firewall to send or receive commands and data. This has a significant impact on revealing or releasing classified information. Several researchers have examined the use of machine learning for detecting DNS tunneling. However, these studies have treated DNS tunneling as a binary classification problem where the class label is either legitimate or tunnel. In fact, there are different types of DNS tunneling, such as FTP-DNS tunneling, HTTP-DNS tunneling, HTTPS-DNS tunneling, and POP3-DNS tunneling. Therefore, there is a vital demand not only to detect DNS tunneling but also to classify the type of tunnel. This study proposes a multilabel support vector machine to detect and classify DNS tunneling. The proposed method has been evaluated using a benchmark dataset that contains numerous DNS queries, and is compared with a multilabel Bayesian classifier based on the number of correctly classified DNS tunneling instances. Experimental results demonstrate the efficacy of the proposed SVM classification method, which obtains an f-measure of 0.80.
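A toy version of the classification task the abstract poses: one "legitimate" class plus four tunnel types, an SVM classifier, and the F-measure as the score. The dataset and features are synthetic assumptions, not the paper's benchmark.

```python
# Five-way SVM classification standing in for legitimate vs. FTP-/HTTP-/
# HTTPS-/POP3-DNS tunneling, scored with the macro F-measure.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# synthetic query features in place of real DNS traffic statistics
X, y = make_classification(n_samples=2000, n_features=20, n_informative=12,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
f_measure = f1_score(y_te, clf.predict(X_te), average="macro")
print(f"macro F-measure: {f_measure:.2f}")
```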

  15. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
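The likelihood "assignment test" the abstract contrasts with machine learning classifiers can be sketched directly: assign an individual to the population whose allele frequencies make its multilocus genotype most likely. The frequencies and genotype below are invented, loci are assumed independent, and Hardy-Weinberg proportions are assumed.

```python
# Likelihood-based assignment test: pick the population maximizing the
# log-likelihood of the individual's genotype under Hardy-Weinberg proportions.
import math

# invented frequency of allele "A" at 5 loci, for two candidate populations
freqs = {"popA": [0.9, 0.8, 0.7, 0.9, 0.6],
         "popB": [0.2, 0.3, 0.4, 0.1, 0.5]}
genotype = [2, 2, 1, 2, 1]   # copies of allele "A" at each diploid locus

def log_likelihood(p_list, geno):
    ll = 0.0
    for p, g in zip(p_list, geno):
        # Hardy-Weinberg genotype probabilities: AA = p^2, Aa = 2p(1-p), aa = (1-p)^2
        prob = {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) ** 2}[g]
        ll += math.log(prob)
    return ll

best = max(freqs, key=lambda pop: log_likelihood(freqs[pop], genotype))
print("assigned to:", best)  # popA
```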

  16. A Photometric Machine-Learning Method to Infer Stellar Metallicity

    Science.gov (United States)

    Miller, Adam A.

    2015-01-01

    Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few × 10^6 targets; photometric surveys, on the other hand, have detected > 10^9 stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g' ≤ 18 mag), with 4500 K ≤ Teff ≤ 7000 K, corresponding to those with the most reliable SSPP estimates, I find that the model predicts [Fe/H] values with a root-mean-squared error (RMSE) of approx. 0.27 dex. The RMSE from this machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.
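The regression setup described here can be sketched with a random-forest regressor mapping colors to [Fe/H]. The colors, the color-metallicity relation, and the noise level below are all invented for illustration; the real work trains on SSPP-labeled SDSS photometry.

```python
# Photometric metallicity as supervised regression: mock SDSS-like colors
# (u-g, g-r, r-i, i-z) -> [Fe/H], evaluated by RMSE in dex.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
colors = rng.normal(size=(2000, 4))             # mock photometric colors
feh = (-1.0 + 0.8 * colors[:, 0] - 0.5 * colors[:, 1] ** 2
       + 0.1 * rng.normal(size=2000))           # invented color-[Fe/H] relation

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(colors[:1500], feh[:1500])
rmse = np.sqrt(np.mean((model.predict(colors[1500:]) - feh[1500:]) ** 2))
print(f"test RMSE: {rmse:.2f} dex")
```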

  17. Machine Learning Methods for Attack Detection in the Smart Grid.

    Science.gov (United States)

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.

  18. A method of numerically controlled machine part programming

    Science.gov (United States)

    1970-01-01

    Computer program is designed for automatically programmed tools. Preprocessor computes desired tool path and postprocessor computes actual commands causing machine tool to follow specific path. It is used on a Cincinnati ATC-430 numerically controlled machine tool.

  19. A Photometric Machine-Learning Method to Infer Stellar Metallicity

    Science.gov (United States)

    Miller, Adam A.

    2015-01-01

    Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few × 10^6 targets; photometric surveys, on the other hand, have detected > 10^9 stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g' ≤ 18 mag), with 4500 K ≤ Teff ≤ 7000 K, corresponding to those with the most reliable SSPP estimates, I find that the model predicts [Fe/H] values with a root-mean-squared error (RMSE) of approx. 0.27 dex. The RMSE from this machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.

  20. Kernel methods for interpretable machine learning of order parameters

    Science.gov (United States)

    Ponte, Pedro; Melko, Roger G.

    2017-11-01

    Machine learning is capable of discriminating phases of matter, and finding associated phase transitions, directly from large data sets of raw state configurations. In the context of condensed matter physics, most progress in the field of supervised learning has come from employing neural networks as classifiers. Although very powerful, such algorithms suffer from a lack of interpretability, which is usually desired in scientific applications in order to associate learned features with physical phenomena. In this paper, we explore support vector machines (SVMs), which are a class of supervised kernel methods that provide interpretable decision functions. We find that SVMs can learn the mathematical form of physical discriminators, such as order parameters and Hamiltonian constraints, for a set of two-dimensional spin models: the ferromagnetic Ising model, a conserved-order-parameter Ising model, and the Ising gauge theory. The ability of SVMs to provide interpretable classification highlights their potential for automating feature detection in both synthetic and experimental data sets for condensed matter and other many-body systems.
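A simplified sketch of the interpretability point made here: a linear SVM trained to separate ordered from disordered spin configurations recovers near-uniform weights, i.e. its decision function is essentially the magnetization. To keep a linear kernel sufficient, the ordered samples below are restricted to the positive-magnetization branch (the paper's full Z2-symmetric setting needs a quadratic kernel); configurations are crudely simulated, not Monte Carlo samples.

```python
# Linear SVM learning the magnetization order parameter from raw spin
# configurations (symmetry-broken toy data, not real Ising samples).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, n_spins = 400, 100
disordered = rng.choice([-1, 1], size=(n, n_spins))        # magnetization m ~ 0
ordered = np.where(rng.random((n, n_spins)) < 0.9, 1, -1)  # m ~ +0.8

X = np.vstack([disordered, ordered]).astype(float)
y = np.array([0] * n + [1] * n)                            # 0 = disordered, 1 = ordered

clf = LinearSVC(C=0.1, max_iter=10000).fit(X, y)
w = clf.coef_.ravel()
# near-uniform positive weights -> decision function ~ sum_i s_i (magnetization)
print(f"accuracy: {clf.score(X, y):.2f}, mean weight: {w.mean():.4f}")
```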

  1. An RTT-Aware Virtual Machine Placement Method

    Directory of Open Access Journals (Sweden)

    Li Quan

    2017-12-01

    Full Text Available Virtualization is a key technology for mobile cloud computing (MCC), and the virtual machine (VM) is a core component of virtualization. A VM provides a relatively independent running environment for different applications. The VM placement problem therefore focuses on how to place VMs on optimal physical machines, ensuring efficient use of resources, quality of service, etc. Most previous work focuses on energy consumption, network traffic between VMs and so on, and rarely considers the delay for end users' requests. In contrast, this paper considers the latency between requests and VMs for the scenario of optimal VM placement in MCC. In order to minimize the average RTT for all requests, the round-trip time (RTT) is first used as the metric for the latency of requests. Based on our proposed RTT metric, an RTT-aware VM placement algorithm is then proposed to minimize the average RTT. Furthermore, the case in which one of the core switches does not work is considered, and a VM rescheduling algorithm is proposed to keep the average RTT low and reduce its fluctuation. Finally, in a simulation study, our algorithm shows its advantage over existing methods, including random placement, a traffic-aware VM placement algorithm and a remaining-utilization-aware algorithm.
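The core idea can be illustrated with a greedy sketch: assign each VM to the physical machine with spare capacity that minimizes its requests' average RTT. The RTT matrix and capacities below are made up, and the paper's rescheduling on switch failure is omitted; this is not the authors' algorithm, just the flavor of an RTT-aware placement.

```python
# Greedy RTT-aware placement over a hypothetical RTT matrix.
def place_vms(rtt, capacity):
    """rtt[v][p]: average request RTT if VM v runs on physical machine p."""
    placement, load = {}, [0] * len(capacity)
    for v, row in enumerate(rtt):
        # among machines with spare capacity, pick the one with the lowest RTT
        p = min((p for p in range(len(capacity)) if load[p] < capacity[p]),
                key=lambda p: row[p])
        placement[v] = p
        load[p] += 1
    return placement

rtt = [[10, 40, 30],   # VM 0
       [50, 15, 25],   # VM 1
       [20, 35, 12],   # VM 2
       [11, 60, 70]]   # VM 3
placement = place_vms(rtt, capacity=[2, 2, 2])
avg_rtt = sum(rtt[v][p] for v, p in placement.items()) / len(placement)
print(placement, f"average RTT: {avg_rtt:.1f}")  # average RTT: 12.0
```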

  2. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  3. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

    Full Text Available This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed-memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application independent), and the low overhead of the resulting C++ code (over hand-crafted C code), support our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.

  4. Method to Increase the Coupling Force in a Construction Machine

    Directory of Open Access Journals (Sweden)

    Tsipurskij Il’ja

    2017-01-01

    Full Text Available This paper discusses a possible method to increase the coupling tractive force of track-wheel locomotion in construction machines. Sufficient coupling tractive force allows organizing translational displacement of the machine under above-medium load modes during operation of overburden chain excavators, tower cranes and gantry cranes outdoors. A mechanism to convert rotary motion into rectilinear motion is examined using the example of a gear and rail, with kinematic calculations quoted. Analysis of the “force couple” system is proposed to identify free traction forces. Factors that influence the machine’s working movements are established. Equations to calculate tractive forces in track-wheel locomotion are described. A laboratory complex is presented where students of mechanical engineering gain practical skills in mastering the soil-excavation production process and the influence of the coupling tractive force during the machine’s operation. As a practical recommendation, the paper describes a device made of a balancing lever, drive cogwheel and tractive chain to implement the required tractive force of the trolley in coupling; this solution’s efficiency is demonstrated for experimental work on hard soils with a high coefficient of difficulty.

  5. Late Washing efficiency

    International Nuclear Information System (INIS)

    Morrissey, M.F.

    1992-01-01

    Interim Waste Technology has demonstrated the Late Washing concept on the Experimental Laboratory Filter (ELF) at TNX. In two tests, washing reduced the [NO₂⁻] from 0.08 M to approximately 0.01 M on slurries with 2-year equivalent radiation exposures and 9.5 wt. % solids. For both washes, the [NO₂⁻] decreased at rates near theoretical for a constant-volume stirred vessel, indicating approximately 100% washing efficiency. Permeate flux was greater than 0.05 gpm/ft² for both washes at a transmembrane pressure of 50 psi and a flow velocity of 9 ft/sec.
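The "near-theoretical" rate refers to ideal dilution washing of a constant-volume stirred vessel, C = C0·exp(−Vw/V), where Vw is the cumulative wash volume and V the slurry volume. A quick check of the reported endpoint (the interpretation of "rate" as this exponential law is an assumption consistent with the abstract, not a detail it states):

```python
# Wash volumes needed to dilute nitrite from 0.08 M to 0.01 M at 100%
# efficiency in a constant-volume stirred vessel: Vw/V = ln(C0/C).
import math

c0, c = 0.08, 0.01           # mol/L nitrite before and after washing
volumes = math.log(c0 / c)   # cumulative wash volumes, ideal mixing
print(f"{volumes:.2f} wash volumes to go from {c0} M to {c} M")  # ~2.08
```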

  6. A particle finite element method for machining simulations

    Science.gov (United States)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

  7. Advanced methods in NDE using machine learning approaches

    Science.gov (United States)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new algorithms or leverage existing ones that learn from training data and give accurate predictions, or that find patterns, particularly in new and unseen similar data, fits Non-Destructive Evaluation perfectly. The advantages of ML in NDE are obvious in such tasks as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a wide variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Afterwards, the sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB): algorithms are still trained on an ordinary PC, but the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can then be done by the machine. By integrating the test data of many components along the value chain further optimization including lifetime and durability

  8. Employing Machine-Learning Methods to Study Young Stellar Objects

    Science.gov (United States)

    Moore, Nicholas

    2018-01-01

    Vast amounts of data exist in the astronomical data archives, and yet a large number of sources remain unclassified. We developed a multi-wavelength pipeline to classify infrared sources. The pipeline uses supervised machine learning methods to classify objects into the appropriate categories. The program is fed data that is already classified to train it, and is then applied to unknown catalogues. The primary use for such a pipeline is the rapid classification and cataloging of data that would take a much longer time to classify otherwise. While our primary goal is to study young stellar objects (YSOs), the applications extend beyond the scope of this project. We present preliminary results from our analysis and discuss future applications.

  9. BEBP: An Poisoning Method Against Machine Learning Based IDSs

    OpenAIRE

    Li, Pan; Liu, Qiang; Zhao, Wentao; Wang, Dongxu; Wang, Siqi

    2018-01-01

    In the big data era, machine learning is one of the fundamental techniques in intrusion detection systems (IDSs). However, practical IDSs generally update their decision module by feeding in new data and then retraining the learning models periodically. Hence, attacks that compromise the data used for training or testing the classifiers significantly challenge the detection capability of machine learning-based IDSs. Poisoning attack, which is one of the most recognized security threats towards machine learning...

  10. Statistical and Machine Learning forecasting methods: Concerns and ways forward.

    Science.gov (United States)

    Makridakis, Spyros; Spiliotis, Evangelos; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
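
    One of the accuracy measures used in the M-competitions is the symmetric MAPE. A minimal implementation, together with a naive last-value benchmark on made-up series values, might look like this:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE as used in the M3 competition (in percent)."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual)
                           / (np.abs(actual) + np.abs(forecast)))

# Naive forecast (repeat the last observed value) as the simplest
# statistical benchmark; the series values are illustrative only.
series = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])
train, test = series[:-4], series[-4:]
naive = np.repeat(train[-1], 4)
print(f"naive sMAPE: {smape(test, naive):.2f}%")
```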

  12. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  13. The research progress of perforating gun inner wall blind hole machining method

    Science.gov (United States)

    Wang, Zhe; Shen, Hongbing

    2018-04-01

    Blind-hole machining has long been a technical challenge in the oil, electronics, aviation and other industries. This paper introduces different methods for blind-hole machining, focusing on methods for machining blind holes in the inner wall of perforating guns. The advantages and disadvantages of the different methods are discussed, and the development trends of blind-hole machining are outlined.

  14. A review for detecting gene-gene interactions using machine learning methods in genetic epidemiology.

    Science.gov (United States)

    Koo, Ching Lee; Liew, Mei Jing; Mohamad, Mohd Saberi; Salleh, Abdul Hakim Mohamed

    2013-01-01

    Recently, the greatest statistical and computational challenge in genetic epidemiology has been to identify and characterize the genes that interact with other genes and with environmental factors to affect complex multifactorial diseases. These gene-gene interactions are also denoted as epistasis, a phenomenon that cannot be handled by traditional statistical methods due to the high dimensionality of the data and the occurrence of multiple polymorphisms. Hence, several machine learning methods have been used to solve such problems by identifying susceptibility genes in common multifactorial diseases: neural networks (NNs), support vector machines (SVMs), and random forests (RFs). This paper gives an overview of these machine learning methods, describing the methodology of each and its application in detecting gene-gene and gene-environment interactions. Lastly, the paper discusses each machine learning method and presents its strengths and weaknesses in detecting gene-gene interactions in complex human disease.
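
    As an illustration of why tree ensembles are popular for this problem, a random forest can pick up a purely epistatic (XOR-style) interaction between two SNPs that has no marginal effect. The SNP coding and the penetrance rule below are invented for the sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
# Ten SNPs coded as allele counts 0/1/2; disease status is driven by an
# XOR-style interaction between SNP0 and SNP1 (a classic epistasis pattern
# invisible to single-locus tests).
X = rng.integers(0, 3, size=(n, 10))
interaction = (X[:, 0] % 2) ^ (X[:, 1] % 2)
y = (interaction == 1).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# The two interacting SNPs should dominate the importance ranking.
print("feature importances:", np.round(rf.feature_importances_, 3))
```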

  15. Computerization of Hungarian reforestation manual with machine learning methods

    Science.gov (United States)

    Czimber, Kornél; Gálos, Borbála; Mátyás, Csaba; Bidló, András; Gribovszki, Zoltán

    2017-04-01

    Hungarian forests are highly sensitive to the changing climate, especially to the available amount of precipitation. Over the past two decades, drought damage has repeatedly been observed for tree species at the lower xeric limit of their distribution. From year to year these affected forest stands become more difficult to reforest with the same native species, because these species are not able to adapt to the increasing probability of droughts. The climate-related parameter set of the Hungarian forest stand database needs updating: air humidity, formerly used to define the forest climate zones, is no longer measured, and its value based on climate model outputs is highly uncertain. The aim was to develop a novel, computerized and objective method to describe the species-specific climate conditions essential for the survival, growth and optimal production of the forest ecosystems. The method is expected to project the spatial distribution of species until 2100 on the basis of regional climate model simulations. Until now, Hungarian forest managers have been using a carefully edited spreadsheet for reforestation purposes; applying binding regulations, this spreadsheet prescribes the stand-forming and admixed tree species and their expected growth rate for each forest site type. We present a new machine learning based method to replace this spreadsheet. We considered various methods, such as maximum likelihood, Bayesian networks and fuzzy logic. The method calculates distributions and sets up a classification, which can be validated and modified by experts if necessary. Projected climate change makes it necessary to include in this system an additional climate zone that does not exist in our region now, as well as new options for potential tree species. In addition to, or instead of, the existing parameters, the influence of further limiting parameters (climatic extremes, soil water retention) is also investigated. Results will be

  16. Machine Learning Methods for Prediction of CDK-Inhibitors

    Science.gov (United States)

    Ramana, Jayashree; Gupta, Dinesh

    2010-01-01

    Progression through the cell cycle involves the coordinated activities of a suite of cyclin/cyclin-dependent kinase (CDK) complexes. The activities of the complexes are regulated by CDK inhibitors (CDKIs). Apart from their role as cell cycle regulators, CDKIs are involved in apoptosis, transcriptional regulation, cell fate determination, cell migration and cytoskeletal dynamics. As the complexes perform crucial and diverse functions, they are important drug targets for tumour and stem cell therapeutic interventions. However, CDKIs are represented by proteins with considerable sequence heterogeneity and may fail to be identified by simple similarity search methods. In this work we have evaluated and developed machine learning methods for the identification of CDKIs. We used different compositional features and evolutionary information in the form of PSSMs, from CDKIs and non-CDKIs, for generating SVM and ANN classifiers. In the first stage, both the ANN and SVM models were evaluated using Leave-One-Out Cross-Validation, and in the second stage these were tested on independent data sets. The PSSM-based SVM model emerged as the best classifier in both stages and is publicly available through a user-friendly web interface at http://bioinfo.icgeb.res.in/cdkipred. PMID:20967128
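
    The general recipe of this record (compositional features plus an SVM evaluated with Leave-One-Out Cross-Validation) can be sketched as follows. The sequences are toy stand-ins, not real CDKIs, and no PSSM step is included:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dim amino-acid composition vector (fractions summing to 1)."""
    return np.array([seq.count(a) for a in AMINO], float) / len(seq)

rng = np.random.default_rng(2)
# Toy stand-ins: "positive" sequences enriched in K/R/S/T, negatives uniform.
pos = ["".join(rng.choice(list("KRKRST"), 30)) for _ in range(15)]
neg = ["".join(rng.choice(list(AMINO), 30)) for _ in range(15)]
X = np.array([composition(s) for s in pos + neg])
y = np.array([1] * 15 + [0] * 15)

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2f}")
```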

  17. Machine-learning methods in the classification of water bodies

    Directory of Open Access Journals (Sweden)

    Sołtysiak Marek

    2016-06-01

    Full Text Available Amphibian species have been considered useful ecological indicators: they are used as indicators of environmental contamination, ecosystem health and habitat quality. Amphibian species are sensitive to changes in the aquatic environment and may therefore form the basis for the classification of water bodies. Water bodies with a large number of amphibian species are especially valuable, even if they are located in urban areas. The automation of the classification process allows for a faster evaluation of the presence of amphibian species in the water bodies. Three machine-learning methods (artificial neural networks, decision trees and the k-nearest neighbours algorithm) have been used to classify water bodies in Chorzów, one of 19 cities in the Upper Silesia Agglomeration. In this case, classification is a supervised data mining method consisting of several stages, such as building the model, the testing phase and the prediction. Seven natural and anthropogenic features of water bodies (e.g. the type of water body, aquatic plants, the purpose (destination) of the water body, the position of the water body in relation to any possible buildings, the condition of the water body, the degree of littering, the shore type and fishing activities) have been taken into account in the classification. The data set used in this study contained information about 71 different water bodies and the 9 amphibian species living in them. The results showed that the best average classification accuracy was obtained with the multilayer perceptron neural network.

  18. Recent Advances in Conotoxin Classification by Using Machine Learning Methods.

    Science.gov (United States)

    Dao, Fu-Ying; Yang, Hui; Su, Zhen-Dong; Yang, Wuritu; Wu, Yun; Hui, Ding; Chen, Wei; Tang, Hua; Lin, Hao

    2017-06-25

    Conotoxins are disulfide-rich small peptides that target ion channels and neuronal receptors. Conotoxins have been demonstrated to be potent pharmaceuticals in the treatment of a series of diseases, such as Alzheimer's disease, Parkinson's disease, and epilepsy. In addition, conotoxins are ideal molecular templates for the development of new drug lead compounds and play important roles in neurobiological research as well. Thus, the accurate identification of conotoxin types will provide key clues for biological research and clinical medicine. Generally, conotoxin types are confirmed when their sequence, structure, and function are experimentally validated. However, it is time-consuming and costly to acquire structure and function information through biochemical experiments. Therefore, it is important to develop computational tools for efficiently and effectively recognizing conotoxin types based on sequence information. In this work, we reviewed the current progress in the computational identification of conotoxins in the following aspects: (i) construction of benchmark datasets; (ii) strategies for extracting sequence features; (iii) feature selection techniques; (iv) machine learning methods for classifying conotoxins; (v) the results obtained by these methods and the published tools; and (vi) future perspectives on conotoxin classification. The paper provides a basis for the in-depth study of conotoxins and for drug therapy research.

  19. Study of technical and economic methods for improving anthracite washing. Estudio de las posibilidades tecnico-economicas de la mejora del lavado de carbones de antracita

    Energy Technology Data Exchange (ETDEWEB)

    1988-10-01

    This project was conceived and financed by Ocicarbon in order to help improve methods for washing indigenous anthracite with the aim of maximising its economic and energy benefits. The project has been carried out jointly by the National Institute for Coal and its subsidiaries, Francisco Pintado Fe, and ENADIMSA (Adaro Mining Research). The article summarises the programme of work and the results obtained. The project was divided into three phases: selection; technical analysis of the plant; and a basic engineering and feasibility study. 4 tabs.

  20. Method and apparatus for characterizing and enhancing the functional performance of machine tools

    Science.gov (United States)

    Barkman, William E; Babelay, Jr., Edwin F; Smith, Kevin Scott; Assaid, Thomas S; McFarland, Justin T; Tursky, David A; Woody, Bethany; Adams, David

    2013-04-30

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include workpiece surface finish, and the ability to generate chips of the desired length.

  1. Fault Diagnosis of Batch Reactor Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Sujatha Subramanian

    2014-01-01

    Full Text Available Fault diagnosis of a batch reactor provides early detection of faults and minimizes the risk of thermal runaway. It offers superior performance, helps to improve safety and consistency, and has become ever more vital in this technical era. In this paper, a support vector machine (SVM) is used to estimate the heat release (Qr) of the batch reactor under both normal and faulty conditions. The signature of the residual, obtained from the difference between the nominal and the estimated faulty Qr values, characterizes the different natures of the faults occurring in the batch reactor. Appropriate statistical and geometric features are extracted from the residual signature, and the total number of features is reduced using an SVM attribute selection filter and principal component analysis (PCA). Artificial neural network (ANN) classifiers such as the multilayer perceptron (MLP), radial basis function (RBF), and Bayes net are used to classify the different types of faults from the reduced features. The comparative study shows that the proposed method for fault diagnosis, with a limited number of features extracted from only one estimated parameter (Qr), is more efficient and faster for diagnosing the typical faults.
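
    The residual-signature idea (statistical features computed on the difference between the nominal and the estimated Qr) can be sketched as below. The nominal profile, the fault model and the chosen feature set are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def residual_features(residual):
    """Statistical signature features of a Qr residual window (illustrative set)."""
    r = np.asarray(residual, float)
    return {
        "mean": r.mean(),
        "std": r.std(ddof=1),
        "rms": np.sqrt(np.mean(r ** 2)),
        "skewness": float(stats.skew(r)),
        "kurtosis": float(stats.kurtosis(r)),
        "peak": np.max(np.abs(r)),
    }

rng = np.random.default_rng(3)
nominal = np.sin(np.linspace(0, 10, 500))                 # nominal heat release Qr
faulty = nominal + 0.5 + 0.1 * rng.standard_normal(500)   # sensor-bias fault
feats = residual_features(nominal - faulty)
print({k: round(v, 3) for k, v in feats.items()})
```

These feature vectors would then be reduced (e.g. with PCA) and fed to a classifier, as the abstract describes.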

  2. A method of size inspection for fruit with machine vision

    Science.gov (United States)

    Rao, Xiuqin; Ying, Yibin

    2005-11-01

    A real-time machine vision system for fruit quality inspection was developed, consisting of rollers, an encoder, a lighting chamber, a TMS-7DSP CCD camera (PULNIX Inc.), a computer (P4 1.8G, 128M) and a set of grading controllers. The image was binarized and the edge was detected with a line-scan based digital image description. The minimum enclosing rectangle (MER) was first applied to detect the size of the fruit, but it failed, because the points measured by the MER differ from those measured with a vernier caliper. An improved method, called a software vernier caliper, was therefore developed. A line is drawn between the weight centre O of the fruit and a point A on the edge, and its second intersection with the edge, noted B, is calculated. A point C between A and B is selected, and a point D on the other side of the edge is searched for such that CD is perpendicular to AB; by moving point C between A and B, the maximum length of CD is recorded as an extremum value. By moving point A from the start to the halfway point of the edge, a series of such extremum values is obtained. 80 navel oranges were tested, and the maximum diameter error was less than 1 mm.
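
    A simplified version of the "software vernier caliper" idea can be sketched as follows: for each chord direction through the centroid, it measures the maximum perpendicular extent of the edge. This is only an approximation of the search procedure described in the abstract:

```python
import numpy as np

def caliper_diameter(edge_pts):
    """Simplified 'software vernier caliper': for each direction defined by a
    boundary point A and the centroid O, measure the maximum extent of the
    edge perpendicular to AO and keep the largest value (a sketch of the
    paper's idea, not its exact search procedure)."""
    pts = np.asarray(edge_pts, float)
    O = pts.mean(axis=0)                  # centroid stands in for the weight centre
    best = 0.0
    for A in pts:
        d = A - O
        n = np.linalg.norm(d)
        if n == 0:
            continue
        perp = np.array([-d[1], d[0]]) / n   # unit normal to chord direction AO
        proj = pts @ perp
        best = max(best, proj.max() - proj.min())
    return best

# Circle of radius 50 px: the diameter should come out near 100.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[50 * np.cos(theta), 50 * np.sin(theta)]
print(round(caliper_diameter(circle), 1))
```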

  3. Sparse Machine Learning Methods for Understanding Large Text Corpora

    Data.gov (United States)

    National Aeronautics and Space Administration — Sparse machine learning has recently emerged as powerful tool to obtain models of high-dimensional data with high degree of interpretability, at low computational...

  4. Micro transport machine and methods for using same

    Science.gov (United States)

    Stalford, Harold

    2015-10-13

    A micro transport machine may include a substrate and a movable device comprising a drive component responsive to a wireless power source. The movable device is operable to move between a plurality of disparate areas on the substrate.

  5. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides

    Directory of Open Access Journals (Sweden)

    Stanislawski Jerzy

    2013-01-01

    Full Text Available Abstract Background Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, like Alzheimer disease. The number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of aminoacids, which transform the structure when exposed. A few hundreds of such peptides have been experimentally found. Experimental testing of all possible aminoacid combinations is currently not feasible. Instead, they can be predicted by computational methods. 3D profile is a physicochemical-based method that has generated the most numerous dataset - ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. Results We generated a new dataset of hexapeptides, using more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%. The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were Alternating Decision Tree and Multilayer Perceptron. Both methods obtained area under ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved a good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile to 0.5 CPU-hours (simplified 3D profile to seconds (machine learning. Conclusions We showed that the simplified profile generation method does not introduce an error with regard to the original method, while

  6. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides.

    Science.gov (United States)

    Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd

    2013-01-17

    Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, like Alzheimer disease. The number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of aminoacids, which transform the structure when exposed. A few hundreds of such peptides have been experimentally found. Experimental testing of all possible aminoacid combinations is currently not feasible. Instead, they can be predicted by computational methods. 3D profile is a physicochemical-based method that has generated the most numerous dataset - ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides, using more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were Alternating Decision Tree and Multilayer Perceptron. Both methods obtained area under ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved a good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency. Our new dataset
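
    A toy version of the machine-learning stage can be sketched as follows. The hydropathy encoding and the threshold used to create the labels are invented stand-ins for the energy-based ZipperDB classification:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Kyte-Doolittle hydropathy values, used here as a simple per-residue encoding.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def encode(hexapeptide):
    """6-dim vector: hydropathy of each position of the hexapeptide."""
    return [KD[a] for a in hexapeptide]

rng = np.random.default_rng(4)
aa = list(KD)
# Toy labels: call a hexapeptide "amyloidogenic" if its mean hydropathy is
# high -- a stand-in for the real energy-based labels, NOT the actual rule.
seqs = ["".join(rng.choice(aa, 6)) for _ in range(400)]
X = np.array([encode(s) for s in seqs])
y = (X.mean(axis=1) > 0.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {tree.score(X_te, y_te):.2f}")
```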

  7. Integrating Heuristic and Machine-Learning Methods for Efficient Virtual Machine Allocation in Data Centers

    OpenAIRE

    Pahlevan, Ali; Qu, Xiaoyu; Zapater Sancho, Marina; Atienza Alonso, David

    2017-01-01

    Modern cloud data centers (DCs) need to tackle efficiently the increasing demand for computing resources and address the energy efficiency challenge. Therefore, it is essential to develop resource provisioning policies that are aware of virtual machine (VM) characteristics, such as CPU utilization and data communication, and applicable in dynamic scenarios. Traditional approaches fall short in terms of flexibility and applicability for large-scale DC scenarios. In this paper we propose a heur...

  8. Rotating electrical machines part 2 : methods for determining losses and efficiency of rotating electrical machinery from tests (excl. machines for traction vehicles)

    CERN Document Server

    International Electrotechnical Commission. Geneva

    1972-01-01

    Applies to d.c. machines and to a.c. synchronous and induction machines. The principles can be applied to other types of machines such as rotary converters, a.c. commutator motors and single-phase induction motors for which other methods of determining losses are used.

  9. Predicting Solar Activity Using Machine-Learning Methods

    Science.gov (United States)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.

  10. Deep learning versus traditional machine learning methods for aggregated energy demand prediction

    NARCIS (Netherlands)

    Paterakis, N.G.; Mocanu, E.; Gibescu, M.; Stappers, B.; van Alst, W.

    2018-01-01

    In this paper the more advanced, in comparison with traditional machine learning approaches, deep learning methods are explored with the purpose of accurately predicting the aggregated energy consumption. Despite the fact that a wide range of machine learning methods have been applied to

  11. Evaluation of machining effect for the residual stress of SA508 by hole drilling method

    International Nuclear Information System (INIS)

    Lee, Jeong Kun; Lee, Kyoung Soo; Song, Ki O; Kim, Young Shin

    2009-01-01

    Residual stresses on the surface of a material can be beneficial or undesirable depending on their direction, compressive or tensile; for fatigue in particular, their effect on material strength is not negligible. These residual stresses develop during manufacturing processes involving material deformation, heat treatment and machining. The object of this paper is to verify the effect of the machining processes most commonly used for SA508. Three different kinds of machining were examined: milling, grinding and wire cutting. To measure the residual stress, the hole-drilling method and the indentation method were used.

  12. Evaluation of Machine Learning Methods for LHC Optics Measurements and Corrections Software

    CERN Document Server

    AUTHOR|(CDS)2206853; Henning, Peter

    The field of artificial intelligence is driven by the goal of providing machines with human-like intelligence. However, modern science currently faces problems of such high complexity that they cannot be solved by humans on the same timescale as by machines, so there is a demand for the automation of complex tasks. Identifying the category of tasks that can be performed by machines in the domain of optics measurements and corrections at the Large Hadron Collider (LHC) is one of the central research subjects of this thesis. Applications of machine learning methods and concepts of artificial intelligence can be found in various industrial and scientific branches. In High Energy Physics these concepts are mostly used in the offline analysis of experimental data and to perform regression tasks. In Accelerator Physics the machine learning approach has not yet found wide application, so potential tasks for machine learning solutions can be specified in this domain. The appropriate methods and their suitability for...

  13. APPLICATION OF THE PERFORMANCE SELECTION INDEX METHOD FOR SOLVING MACHINING MCDM PROBLEMS

    Directory of Open Access Journals (Sweden)

    Dušan Petković

    2017-04-01

    Full Text Available The complex nature of machining processes requires the use of different methods and techniques for process optimization. Over the past few years a number of optimization methods have been proposed for solving continuous machining optimization problems. In the manufacturing environment, engineers also face a number of discrete machining optimization problems. In order to help decision makers solve this type of optimization problem, a number of multi-criteria decision making (MCDM) methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, the performance selection index (PSI) method, for solving machining MCDM problems. The main motivation for using the PSI method is that, unlike in other MCDM methods, it is not necessary to determine criteria weights. The applicability and effectiveness of the PSI method are demonstrated by solving two case studies dealing with the machinability of materials and the selection of the most suitable cutting fluid for a given machining application. The obtained rankings correlate well with those derived by past researchers using other MCDM methods, which validates the usefulness of this method for solving machining MCDM problems.

  14. Assessing and comparison of different machine learning methods in parent-offspring trios for genotype imputation.

    Science.gov (United States)

    Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi

    2016-06-21

    Genotype imputation is an important tool for the prediction of unknown genotypes, both for unrelated individuals and for parent-offspring trios. Several imputation methods are available; they can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were evaluated using real and simulated datasets to impute the untyped SNPs in parent-offspring trios. The tests show that imputation of parent-offspring trios can be accurate. The Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: the ELM was always the fastest algorithm, whereas with increasing sample size the RBF requires a long imputation time. The tested methods can be an alternative for the imputation of untyped SNPs at low missing rates of data; however, it is recommended that other machine learning methods also be evaluated for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
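
    The imputation setting can be reduced to a toy sketch: predict an untyped SNP from typed SNPs using one of the compared learners (an SVM here). The linkage-disequilibrium structure and the genotype coding below are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n, m = 500, 8
# Synthetic genotypes coded as allele counts 0/1/2. The "untyped" SNP is in
# linkage disequilibrium with the first two typed SNPs -- an invented LD
# pattern, since imputation only works where such correlation exists.
typed_snps = rng.integers(0, 3, size=(n, m))
untyped_snp = (typed_snps[:, 0] + typed_snps[:, 1] >= 3).astype(int) * 2

# Train on reference individuals, impute the SNP for the remaining ones.
ref_X, new_X = typed_snps[:400], typed_snps[400:]
ref_y, new_y = untyped_snp[:400], untyped_snp[400:]
svm = SVC(kernel="linear").fit(ref_X, ref_y)
accuracy = (svm.predict(new_X) == new_y).mean()
print(f"imputation accuracy: {accuracy:.2f}")
```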

  15. An Improved Optimization Method for the Relevance Voxel Machine

    DEFF Research Database (Denmark)

    Ganz, Melanie; Sabuncu, M. R.; Van Leemput, Koen

    2013-01-01

    In this paper, we will re-visit the Relevance Voxel Machine (RVoxM), a recently developed sparse Bayesian framework used for predicting biological markers, e.g., presence of disease, from high-dimensional image data, e.g., brain MRI volumes. The proposed improvement, called IRVoxM, mitigates the ...

  16. Sensor Data Air Pollution Prediction by Machine Learning Methods

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    submitted 25. 1. (2018) ISSN 1530-437X R&D Projects: GA ČR GA15-18108S Grant - others:GA MŠk(CZ) LM2015042 Institutional support: RVO:67985807 Keywords : machine learning * sensors * air pollution * deep neural networks * regularization networks Subject RIV: IN - Informatics, Computer Science Impact factor: 2.512, year: 2016

  17. Classification of carcinogenic and mutagenic properties using machine learning method

    DEFF Research Database (Denmark)

    Moorthy, N. S.Hari Narayana; Kumar, Surendra; Poongavanam, Vasanthanathan

    2017-01-01

    Accurate calculation of the carcinogenicity of chemicals has become a serious challenge for health assessment authorities around the globe, not only because of the increased cost of experiments but also because of the various ethical issues involved in using animal models. In this study, we provide machine learning...

  18. Modelling Methods of Magnetohydrodynamic Phenomena Occurring in a Channel of the Device Used to Wash Out the Spent Automotive Catalyst by a Liquid Metal

    Directory of Open Access Journals (Sweden)

    Fornalczyk A.

    2016-06-01

    Full Text Available The recovery of precious metals is necessary for environmental and economic reasons. Spent catalysts from the automotive industry containing precious metals are very attractive recyclable material, as the devices have to be periodically renovated and eventually replaced. This paper presents a method of removing platinum from spent catalytic converters that applies lead as a collector metal in a device that washes out the metal using a magnetohydrodynamic stirrer. The article includes a description of the methods used for modelling the magnetohydrodynamic phenomena (coupled analysis of the electromagnetic, temperature and flow fields) occurring in this particular device. The paper describes the general phenomena and the ways of coupling the various physical fields for this type of calculation. The basic computational techniques are presented, with a discussion of their advantages and disadvantages.

  19. Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods

    Science.gov (United States)

    Araya, S. N.; Ghezzehei, T. A.

    2017-12-01

    Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, and pedotransfer functions (PTFs) are often used to estimate it instead. Despite much progress over the years, generic PTFs that estimate hydraulic conductivity generally perform poorly. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils (the USKSAT and Florida Soil Characterization databases). We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, percent organic carbon, coefficient of uniformity, and values derived from water retention characteristics. Model performance was consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine-learning-based PTFs for estimating Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
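    To make the PTF idea concrete, here is a minimal numpy sketch using k-nearest neighbors (one of the four algorithms compared above) to regress log10 Ks from basic soil properties; the data are synthetic and the RMSE/mean-error metrics mirror those reported in the abstract. This is an illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic soil database: percent sand, percent clay, bulk density.
n = 500
sand = rng.uniform(5, 95, n)
clay = rng.uniform(2, 60, n)
bd = rng.uniform(1.1, 1.7, n)
# Toy "true" relationship: coarser, less dense soils conduct faster (log10 Ks in cm/h).
log_ks = 0.03 * sand - 0.04 * clay - 1.5 * (bd - 1.4) + rng.normal(0, 0.3, n)

X = np.column_stack([sand, clay, bd])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise features
train, test = np.arange(400), np.arange(400, 500)

def knn_predict(Xtr, ytr, Xte, k=10):
    """Predict each test point as the mean of its k nearest training neighbours."""
    out = np.empty(len(Xte))
    for i, x in enumerate(Xte):
        idx = np.argsort(((Xtr - x) ** 2).sum(axis=1))[:k]
        out[i] = ytr[idx].mean()
    return out

pred = knn_predict(X[train], log_ks[train], X[test])
rmse = np.sqrt(np.mean((pred - log_ks[test]) ** 2))
bias = np.mean(pred - log_ks[test])
print(f"RMSE (log10 Ks): {rmse:.2f}, mean error: {bias:+.3f}")
```

    A gradient boosted model, which performed best in the study, would simply replace `knn_predict` while the evaluation on a log scale of Ks stays the same.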

  20. Different protein-protein interface patterns predicted by different machine learning methods.

    Science.gov (United States)

    Wang, Wei; Yang, Yongxiao; Yin, Jianxin; Gong, Xinqi

    2017-11-22

    Different types of protein-protein interactions produce different protein-protein interface patterns, and different machine learning methods are suited to different types of data. Does it follow that different machine learning methods are better at predicting different interface patterns? Here, four machine learning methods were employed to predict protein-protein interface residue pairs across different interface patterns. The performance of the methods differed across protein types, which suggests that different machine learning methods do tend to predict different protein-protein interface patterns. We used ANOVA and variable selection to support this result. Our proposed methods, which take advantage of the different single methods, also achieved good prediction results compared with the single methods. Beyond the prediction of protein-protein interactions, this idea can be extended to other research areas such as protein structure prediction and design.

  1. Phase Modulation Method for Control Systems of Rotary Machine Parameters

    Directory of Open Access Journals (Sweden)

    V. V. Sychev

    2014-01-01

    Full Text Available Traditionally, vibration-based diagnostics occupies the main place in the large complex of technical means for controlling rotary machine operation. It allows us to monitor the onset of extreme limit states of an operating construction and its elements. However, vibration-based diagnostics cannot provide differentiated information about the condition of particular units, the type of a fault, or the point of its occurrence. From practical experience in optoelectronic sensor development, methods are known for phase-coding information about the behavior of the investigated object. They overcome the above-mentioned disadvantage of vibration-based diagnostics through modulation of the radiation reflected from the object. This phase modulation is performed with image analyzers in which the modulating raster (alternating transparent and non-transparent sectors) is designed so that the carrier frequency of the oscillations is absent (suppressed) in the frequency spectrum, and all useful information is carried in the side frequencies. Carrier-frequency suppression occurs over two complete turns of the modulating raster. Each time during this process the oscillations undergo a 180° phase shift (hop) relative to the initial oscillation at the boundary of each turn. This leads to a substantial increase in the signal-to-noise ratio and makes high-accuracy diagnostics possible. The principle of pseudo-inversion is used in measurements to suppress the adverse effect of various factors in a dynamic control system. Under this principle the outgoing and returned beams travel practically the same path with a small spatial shift; this shift occurs because the outgoing beam reflects from a base surface and the returned beam from the measured surface of the object. The measurements therefore become insensitive to all other system errors except the relative position of the «model-object» system. The main advantages of such measurements are the following:- system steadiness to error

  2. Machine learning methods in predicting the student academic motivation

    Directory of Open Access Journals (Sweden)

    Ivana Đurđević Babić

    2017-01-01

    Full Text Available Academic motivation is closely related to academic performance. For educators, it is equally important to detect students with a lack of academic motivation early as it is to detect those with a high level of academic motivation. In endeavouring to develop a classification model for predicting student academic motivation based on their behaviour in learning management system (LMS) courses, this paper intends to establish links between the predicted student academic motivation and their behaviour in the LMS course. Students from all years at the Faculty of Education in Osijek participated in this research. Three machine learning classifiers (neural networks, decision trees, and support vector machines) were used. To establish whether a significant difference in the performance of the models exists, a t-test of the difference in proportions was used. Although all classifiers were successful, the neural network model proved the most successful in detecting student academic motivation based on behaviour in the LMS course.

  3. Survey of Machine Learning Methods for Database Security

    Science.gov (United States)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.

  4. A method and machine for forming pleated and bellow tubes

    International Nuclear Information System (INIS)

    Banks, J.W.

    1975-01-01

    In a machine, the rollers outside the rough tube are rigidly supported to ensure the accurate forming of each turn of the pleated tube, the latter being position-indexed independently of the already formed turns. An inner roller is supported by a device for adjusting and indexing its position on a carriage. The tubes thus obtained are suitable, in particular, for forming expansion sealing joints for power generators or nuclear reactors [fr]

  5. Alternative Antimicrobial Commercial Egg Washing Procedures.

    Science.gov (United States)

    Hudson, Lauren K; Harrison, Mark A; Berrang, Mark E; Jones, Deana R

    2016-07-01

    Commercial table eggs are washed prior to packaging. Standard wash procedures use an alkaline pH and warm water. If a cool water method could be developed that would still provide a microbiologically safe egg, the industry could save the energy costs associated with water heating. Four wash procedures were evaluated for Salmonella reduction: pH 11 at 48.9°C (industry standard), pH 11 at ambient temperature (∼20°C), pH 6 at 48.9°C, and pH 6 at ambient temperature. Alkaline washes contained potassium hydroxide-based detergent, while pH 6 washes contained approximately 200 ppm of chlorine and a proprietary chlorine stabilizer (T-128). When eggs were inoculated by immersion in a cell suspension of Salmonella Enteritidis and Salmonella Typhimurium, all treatments resulted in a slight and similar reduction of Salmonella numbers (approximately 0.77 log CFU/ml of shell emulsion reduction). When eggs were inoculated by droplet on the shell surface, Salmonella counts were reduced by approximately 5 log CFU when washed with chlorine plus the chlorine stabilizer at both temperatures and with the alkaline wash at the high temperature. The reductions in Salmonella by these treatments were not significantly (P > 0.05) different from each other but were significantly (P < 0.05) greater than that achieved by the remaining treatment. A cool water chlorine wash thus performed comparably to the standard pH 11 warm water wash and may be a viable option to reduce cost, increase shelf life, and slow pathogen growth in and on shell eggs.

  6. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    Science.gov (United States)

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
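    The core ligand-based screening step (rank a library by structure-derived features relative to known actives) can be sketched in a few lines of numpy. This is a toy illustration, not from the review: the 128-bit "fingerprints", the shared-motif construction, and the library sizes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary "fingerprints" (hypothetical 128-bit structural keys) for a library.
n_bits = 128
actives = (rng.random((50, n_bits)) < 0.35).astype(int)
actives[:, :16] = 1                                        # shared substructure bits
inactives = (rng.random((950, n_bits)) < 0.35).astype(int)
library = np.vstack([actives, inactives])
labels = np.array([1] * 50 + [0] * 950)

def tanimoto(a, B):
    """Tanimoto similarity between one fingerprint and a matrix of fingerprints."""
    inter = (B & a).sum(axis=1)
    union = (B | a).sum(axis=1)
    return inter / np.maximum(union, 1)

# Screen: rank the library by similarity to one known active (leave the query out).
query = library[0]
sims = tanimoto(query, library)
order = np.argsort(-sims)[1:]                # drop the query itself
top100_hits = labels[order[:100]].sum()
print(f"actives recovered in top 100 of the ranked library: {top100_hits}/49")
```

    A trained classifier (SVM, random forest, etc.) would replace the single-query similarity with a learned decision function over the same fingerprint features.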

  7. Counter current decantation washing of HLW sludge

    International Nuclear Information System (INIS)

    Brooke, J.N.; Peterson, R.A.

    1997-01-01

    The Savannah River Site (SRS) has 51 High Level Waste (HLW) tanks with typical dimensions 25.9 meters (85 feet) diameter and 10 meters (33 feet) high. Nearly 114 million liters (30 M gallons) of HLW is stored in these tanks in the form of insoluble solids called sludge, crystallized salt called salt cake, and salt solutions. This waste is being converted to waste forms stable for long term storage. In one of the processes, soluble salts are washed from HLW sludge in preparation for vitrification. At present, sludge is batch washed in a waste tank with one or no reuse of the wash water. Sodium hydroxide and sodium nitrite are added to the wash water for tank corrosion protection; the large volumes of spent wash water are recycled to the evaporator system; additional salt cake is produced; and sodium carbonate is formed in the washed sludge during storage by reaction with CO2 from the air. High costs and operational concerns with the current washing process prompt DOE and WSRC to seek an improved washing method. A new method should take full advantage of the physical/chemical properties of sludge, experience from other technical disciplines, processing rate requirements, inherent process safety, and use of proven processes and equipment. Counter current solids washing is a common process in the minerals processing and chemical industries. Washing circuits can be designed using thickeners, filters or centrifuges. Recognizing the special needs of nuclear work and the low processing rates required, a Counter Current Decantation (CCD) circuit is proposed using small thickeners and fluidic pumps

  8. Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.

    Science.gov (United States)

    Cuperlovic-Culf, Miroslava

    2018-01-11

    Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.

  9. Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling

    Science.gov (United States)

    Cuperlovic-Culf, Miroslava

    2018-01-01

    Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649

  10. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods to define churn prediction models and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
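    The forward-selection step named above can be sketched with numpy alone. Everything here is illustrative: the fitness-club features (visits, tenure, age, a noise column), the churn model, and the simple nearest-centroid classifier standing in for the thesis's supervised learners are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy churn data: visits/month, months since signup, age, pure-noise feature.
n = 600
visits = rng.poisson(8, n).astype(float)
tenure = rng.uniform(1, 36, n)
age = rng.uniform(18, 65, n)
noise = rng.normal(size=n)
# Members who visit rarely and are early in their contract churn more often.
p = 1.0 / (1.0 + np.exp(0.5 * visits + 0.08 * tenure - 5))
churn = (rng.random(n) < p).astype(int)

X = np.column_stack([visits, tenure, age, noise])
X = (X - X.mean(axis=0)) / X.std(axis=0)
tr, te = np.arange(450), np.arange(450, 600)

def centroid_acc(cols):
    """Holdout accuracy of a nearest-centroid classifier on the chosen columns."""
    mu0 = X[tr][churn[tr] == 0][:, cols].mean(axis=0)
    mu1 = X[tr][churn[tr] == 1][:, cols].mean(axis=0)
    d0 = ((X[te][:, cols] - mu0) ** 2).sum(axis=1)
    d1 = ((X[te][:, cols] - mu1) ** 2).sum(axis=1)
    return np.mean((d1 < d0).astype(int) == churn[te])

# Forward selection: greedily add the feature that improves holdout accuracy most.
selected, remaining, best = [], [0, 1, 2, 3], 0.0
while remaining:
    acc, j = max((centroid_acc(selected + [j]), j) for j in remaining)
    if acc <= best:
        break
    best = acc
    selected.append(j)
    remaining.remove(j)

print("selected feature indices:", selected, "holdout accuracy:", round(best, 3))
```

    Swapping `centroid_acc` for a cross-validated SVM score gives the feature-selection loop the thesis describes.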

  11. Machine Learning Methods for Identifying Composition of Uranium Deposits in Kazakhstan

    Directory of Open Access Journals (Sweden)

    Kuchin Yan

    2017-12-01

    Full Text Available The paper explores geophysical methods of well surveys and their role in the development of Kazakhstan's uranium deposit mining efforts. An analysis is made of existing methods for interpreting geophysical data using machine learning in petroleum geophysics. The requirements for, and possible applications of, machine learning methods with regard to the uranium deposits of Kazakhstan are formulated in the paper.

  12. Research on intelligent machine self-perception method based on LSTM

    Science.gov (United States)

    Wang, Qiang; Cheng, Tao

    2018-05-01

    In this paper, we exploit the advantages of LSTM in extracting features from high-dimensional, complex nonlinear data and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of any length. Since the multi-physical-domain signals of processing machines have a temporal ordering, with contextual relationships between successive states, using this deep learning method to realize the self-perception of intelligent processing machines offers strong versatility and adaptability. The experimental results show that the method proposed in this paper clearly improves sensing accuracy under the various working conditions of the intelligent machine, and that the algorithm can well support an intelligent processing machine in realizing self-perception.
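    To show the memory mechanism the abstract relies on, here is a single LSTM cell unrolled over a multi-sensor sequence, written in plain numpy. The sensor channels (spindle current, vibration, temperature), layer sizes, and random weights are hypothetical; a real system would train these weights.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, Wx, Wh, b):
    """Single-layer LSTM forward pass over a (T, d_in) sequence.

    Wx: (4*d_h, d_in), Wh: (4*d_h, d_h), b: (4*d_h,).
    Gate order in the stacked weights: input, forget, cell candidate, output.
    """
    d_h = Wh.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)              # cell state: the LSTM's long-term memory
    outputs = []
    for x_t in x_seq:
        z = Wx @ x_t + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g          # forget old memory, write the new candidate
        h = o * np.tanh(c)         # expose a gated view of the memory
        outputs.append(h)
    return np.array(outputs)

# Hypothetical machine-state sequence: 50 time steps of 3 sensor signals.
T, d_in, d_h = 50, 3, 8
x_seq = rng.normal(size=(T, d_in))
Wx = rng.normal(scale=0.3, size=(4 * d_h, d_in))
Wh = rng.normal(scale=0.3, size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)

hs = lstm_forward(x_seq, Wx, Wh, b)
print("hidden-state sequence shape:", hs.shape)
```

    Because `h` and `c` carry forward across time steps, the hidden state at step t depends on the whole signal history, which is what lets the model capture the state-to-state context the paper describes.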

  13. Chatter suppression methods of a robot machine for ITER vacuum vessel assembly and maintenance

    International Nuclear Information System (INIS)

    Wu, Huapeng; Wang, Yongbo; Li, Ming; Al-Saedi, Mazin; Handroos, Heikki

    2014-01-01

    Highlights: •A redundant 10-DOF serial-parallel hybrid robot for ITER assembly and maintenance is presented. •A dynamic model of the robot is developed. •A feedback and feedforward controller is presented to suppress machining vibration of the robot. -- Abstract: In the process of assembly and maintenance of the ITER vacuum vessel (ITER VV), various machining tasks including threading, milling, welding-defect cutting and flexible hose boring are required to be performed from inside the ITER VV by on-site machining tools. A robot machine is a promising option for these tasks, but severe chatter (machine vibration) can occur in the machining process. The chatter vibration deteriorates the robot accuracy and surface quality, and can even cause damage to the end-effector tools and the robot structure itself. This paper introduces two vibration control methods, one passive and one active. For passive vibration control, a parallel mechanism is presented to increase the stiffness of the robot machine; for active vibration control, a hybrid control method combining a feedforward controller and a nonlinear feedback controller is introduced for chatter suppression. A dynamic model and the chatter vibration phenomena of a hybrid robot are demonstrated. Simulation results are given based on the proposed hybrid robot machine, which is developed for ITER VV assembly and maintenance

  14. Chatter suppression methods of a robot machine for ITER vacuum vessel assembly and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Huapeng; Wang, Yongbo, E-mail: yongbo.wang@lut.fi; Li, Ming; Al-Saedi, Mazin; Handroos, Heikki

    2014-10-15

    Highlights: •A redundant 10-DOF serial-parallel hybrid robot for ITER assembly and maintenance is presented. •A dynamic model of the robot is developed. •A feedback and feedforward controller is presented to suppress machining vibration of the robot. -- Abstract: In the process of assembly and maintenance of the ITER vacuum vessel (ITER VV), various machining tasks including threading, milling, welding-defect cutting and flexible hose boring are required to be performed from inside the ITER VV by on-site machining tools. A robot machine is a promising option for these tasks, but severe chatter (machine vibration) can occur in the machining process. The chatter vibration deteriorates the robot accuracy and surface quality, and can even cause damage to the end-effector tools and the robot structure itself. This paper introduces two vibration control methods, one passive and one active. For passive vibration control, a parallel mechanism is presented to increase the stiffness of the robot machine; for active vibration control, a hybrid control method combining a feedforward controller and a nonlinear feedback controller is introduced for chatter suppression. A dynamic model and the chatter vibration phenomena of a hybrid robot are demonstrated. Simulation results are given based on the proposed hybrid robot machine, which is developed for ITER VV assembly and maintenance.

  15. Comparison of the laboratory standard washing using CIPAC washing agent and the domestic washing on three recommended types of long-lasting insecticidal mosquito nets.

    Science.gov (United States)

    Ouattara, Jean Pierre Nabléni; Louwagie, Johanna; Pigeon, Olivier; Spanoghe, Pieter

    2013-01-01

    One of the best ways to prevent malaria is the use of insecticide-treated bed nets. Manufacturers pursue easier, safer and more efficient nets. Hence, many studies on the efficacy and wash resistance using World Health Organization standards have been reported. The commonly used detergent is "Savon de Marseille", because it closely resembles actually used soaps. At the 54(th) Collaborative International Pesticides Analytical Council (CIPAC) Technical Meeting in 2010, it was suggested to replace it by a standardized "CIPAC washing agent". The aim of this study was to investigate the difference between a laboratory hand washing simulation using the CIPAC washing agent (method-1) and a domestic washing (method-2) on different bed nets, as well as the effect of the drying process on the release of active ingredient. Interceptor®, Permanet®2.0 and Netprotect® nets were used in three treatments, each repeated 20 times. The first treatment included method-1 washing and indoor drying. The second treatment included method-2 washing and indoor drying. The third treatment used method-2 washing and UV-drying. The residual insecticide contents were determined using gas chromatography. The washing procedure and the number of washes have a significant effect on the release of active ingredient. Statistically, the two washing methods have the same effect on removing the active ingredient from the Interceptor® and Permanet®2.0 net, but a significantly different influence on the Netprotect® nets. The drying process has no significant effect on the insecticide. Both washing procedures affected the amount of insecticide remaining on nets independently of the impregnation technology. The active ingredient decreases with the number of washing cycles following an exponential or logarithmic model for coated nets. The laboratory hand washing simulation had more impact on the decrease of active ingredient content of the Netprotect® nets. All net types seemed to be effectively

  16. Comparison of the laboratory standard washing using CIPAC washing agent and the domestic washing on three recommended types of long-lasting insecticidal mosquito nets.

    Directory of Open Access Journals (Sweden)

    Jean Pierre Nabléni Ouattara

    Full Text Available One of the best ways to prevent malaria is the use of insecticide-treated bed nets. Manufacturers pursue easier, safer and more efficient nets. Hence, many studies on the efficacy and wash resistance using World Health Organization standards have been reported. The commonly used detergent is "Savon de Marseille", because it closely resembles actually used soaps. At the 54(th) Collaborative International Pesticides Analytical Council (CIPAC) Technical Meeting in 2010, it was suggested to replace it by a standardized "CIPAC washing agent". The aim of this study was to investigate the difference between a laboratory hand washing simulation using the CIPAC washing agent (method-1) and a domestic washing (method-2) on different bed nets, as well as the effect of the drying process on the release of active ingredient. Interceptor®, Permanet®2.0 and Netprotect® nets were used in three treatments, each repeated 20 times. The first treatment included method-1 washing and indoor drying. The second treatment included method-2 washing and indoor drying. The third treatment used method-2 washing and UV-drying. The residual insecticide contents were determined using gas chromatography. The washing procedure and the number of washes have a significant effect on the release of active ingredient. Statistically, the two washing methods have the same effect on removing the active ingredient from the Interceptor® and Permanet®2.0 nets, but a significantly different influence on the Netprotect® nets. The drying process has no significant effect on the insecticide. Both washing procedures affected the amount of insecticide remaining on nets independently of the impregnation technology. The active ingredient decreases with the number of washing cycles following an exponential or logarithmic model for coated nets. The laboratory hand washing simulation had more impact on the decrease of active ingredient content of the Netprotect® nets. All net types seemed to be

  17. Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method

    Directory of Open Access Journals (Sweden)

    Muneki Yasuda

    2018-04-01

    Full Text Available Machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and the Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author's previous paper. In the first part of this paper, we compare the method with maximum pseudo-likelihood estimation (MPLE) using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.
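    For orientation, here is a numpy sketch of the MPLE baseline the paper compares against: maximum pseudo-likelihood learning of the couplings of a small fully visible Boltzmann machine (±1 spins, no biases) from Gibbs samples. The model size, coupling scale, and learning schedule are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_spins = 6
# Ground-truth symmetric couplings of a small fully visible Boltzmann machine.
J_true = rng.normal(scale=0.4, size=(n_spins, n_spins))
J_true = (J_true + J_true.T) / 2
np.fill_diagonal(J_true, 0)

def gibbs_sample(J, n_samples, burn=200):
    """Draw spin configurations from P(s) ∝ exp(sum_{i<j} J_ij s_i s_j)."""
    s = rng.choice([-1, 1], size=J.shape[0])
    out = []
    for it in range(burn + n_samples):
        for i in range(J.shape[0]):
            p = sigmoid(2 * J[i] @ s)   # P(s_i = +1 | rest); J[i, i] = 0
            s[i] = 1 if rng.random() < p else -1
        if it >= burn:
            out.append(s.copy())
    return np.array(out)

data = gibbs_sample(J_true, 2000)

# MPLE: gradient ascent on the sum of conditional log-likelihoods log P(s_i | s_-i).
J = np.zeros_like(J_true)
lr = 0.05
for _ in range(300):
    H = data @ J.T                        # local fields h_i for every sample
    G = data * sigmoid(-2 * data * H)     # per-spin residual term
    grad = 2 * G.T @ data / len(data)
    grad = (grad + grad.T) / 2            # keep the couplings symmetric
    np.fill_diagonal(grad, 0)
    J += lr * grad

err = np.abs(J - J_true).max()
print(f"max coupling error after MPLE: {err:.2f}")
```

    The 1-SMCI method studied in the paper replaces these conditional-likelihood gradients with spatial Monte Carlo integration estimates of the model expectations, which is what yields its accuracy advantage over MPLE.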

  18. Centrifugal washing and recovery as an improved method for obtaining lignin precipitated from South African kraft mill black liquor

    CSIR Research Space (South Africa)

    Namane, M

    2015-10-01

    Full Text Available This study describes centrifugal recovery as an improved method for collection of lignin isolated from black liquor obtained from a South African kraft mill. Precipitation of lignin was achieved by utilising 6 M sulphuric acid. Recovery...

  19. Machine Selection in A Dairy Product Company with Entropy and SAW Method Integration

    Directory of Open Access Journals (Sweden)

    Aşkın Özdağoğlu

    2017-07-01

    Full Text Available Machine selection is an important and difficult process for firms, and its results may generate more problems than anticipated. In order to find the best alternative, managers should define the requirements of the factory and determine the necessary criteria. On the other hand, the decision-making criteria for choosing the right equipment may vary according to the type of manufacturing facility, market requirements, and consumer-assigned criteria. This study aims to find the best machine alternative among three machine offerings according to twelve evaluation criteria by integrating the entropy method with the SAW method.
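    The entropy-SAW integration can be shown end to end on a small decision matrix. The three alternatives and four criteria below (capacity, price, energy use, service quality) are invented for illustration; the study itself uses twelve criteria.

```python
import numpy as np

# Decision matrix: 3 machine alternatives (rows) scored on 4 criteria (columns).
# Hypothetical criteria: capacity (benefit), price (cost), energy use (cost),
# service quality (benefit).
X = np.array([
    [120.0, 85000.0, 14.0, 7.0],
    [100.0, 70000.0, 11.0, 8.0],
    [140.0, 99000.0, 16.0, 6.0],
])
benefit = np.array([True, False, False, True])

# --- Entropy method: objective criterion weights from the data itself ---
P = X / X.sum(axis=0)                      # normalise each criterion column
k = 1.0 / np.log(len(X))
E = -k * (P * np.log(P)).sum(axis=0)       # entropy per criterion
d = 1.0 - E                                # degree of diversification
w = d / d.sum()                            # entropy weights, sum to 1

# --- SAW: simple additive weighting on linearly normalised scores ---
R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
scores = (R * w).sum(axis=1)
ranking = np.argsort(-scores)
print("entropy weights:", np.round(w, 3))
print("SAW scores:", np.round(scores, 3), "-> best machine:", ranking[0] + 1)
```

    Criteria whose values barely differ across alternatives get low entropy weights, so SAW's ranking is driven by the criteria that actually discriminate between the machines.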

  20. Machine Learning Method Applied in Readout System of Superheated Droplet Detector

    Science.gov (United States)

    Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco

    2017-07-01

    Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Exploiting this characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep-learning neural network and support vector machine algorithms were applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.

  1. Quantitative Evaluation of Heavy Duty Machine Tools Remanufacturing Based on Modified Catastrophe Progression Method

    Science.gov (United States)

    shunhe, Li; jianhua, Rao; lin, Gui; weimin, Zhang; degang, Liu

    2017-11-01

    The result of a remanufacturing evaluation is the basis for judging whether a heavy duty machine tool can be remanufactured at the end-of-life stage of the machine tool's lifecycle management. The objectivity and accuracy of the evaluation are the key to the evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy duty machine tool remanufacturing, and the results are modified by the comprehensive adjustment method, which brings the evaluation results into line with conventional human judgment. The catastrophe progression method is used to establish a quantitative evaluation model for heavy duty machine tools and to evaluate the remanufacturing of a retired TK6916 CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the results are objective.

  2. Probabilistics since WASH-1400

    International Nuclear Information System (INIS)

    Whitehead, N.E.

    1980-01-01

    Literature since the issuing of the WASH-1400 reactor safety study shows that although the methodology has been attacked, it stands up to criticism well. Contrary to the aim of the study, which was to give a realistic rather than a conservative risk estimate, there are many conservatisms in it. The strongly attacked treatment of common mode failure involving the square bounding model is shown here to be very likely to give correct results, and its applications in WASH-1400 do not often give results different from using the mean instead of the median. The Three Mile Island accident is not such as to change the conclusions of WASH-1400 regarding core melt probabilities.

  3. Method of Relative Magnitudes for Calculating Magnetic Fluxes in Electrical Machine

    Directory of Open Access Journals (Sweden)

    Oleg A.

    2018-03-01

    Introduction: The article presents the results of a study of a model of an asynchronous electric motor carried out by the author within the framework of the Priorities Research Program “Research and development in the priority areas of development of Russia’s scientific and technical complex for 2014–2020”. Materials and Methods: A model of an idealized asynchronous machine (with sinusoidal distribution of magnetic induction in the air gap) is used in vector control systems. It is impossible to create windings for this machine. The basis of the new calculation approach was the Conductivity of Teeth Contours Method, developed at the Electrical Machines Chair of the Moscow Power Engineering Institute (MPEI). Unlike that method, the author used not absolute values but relative magnitudes of magnetic fluxes. This solution fundamentally improved the method’s capabilities. The relative magnitudes of the magnetic fluxes of the teeth contours do not require additional consideration of the exact structure of the magnetic field of a tooth and its adjacent slots: these structures are identical for all the teeth of the machine and differ only in magnitude. The purpose of the calculations was not the traditional harmonic analysis of the magnetic induction distribution in the air gap, but a refinement of the equations of the electric machine model. Vector control researchers have used only the cos(θ) function as the value of the mutual magnetic coupling coefficient between the windings. Results: The author has developed a way to take into account the design of the windings of a real machine by using an imaginary measuring winding with the same winding design as a real phase winding. The imaginary winding can be placed in the position of any machine winding. The calculation of the relative magnetic fluxes of this winding helped to estimate the real values of the magnetic coupling coefficients between the windings, and to find the correction functions for the model of an idealized machine.

  4. Failure analysis for ultrasound machines in a radiology department after implementation of predictive maintenance method

    Directory of Open Access Journals (Sweden)

    Greg Chu

    2018-01-01

    Objective: The objective of the study was to perform quantitative failure and fault analysis of the diagnostic ultrasound (US) scanners in a radiology department after the implementation of the predictive maintenance (PdM) method; to study the reduction trend of machine failure; to understand machine operating parameters affecting failure; and to further optimize the method to maximize the machines' clinical service time. Materials and Methods: The PdM method has been implemented on the 5 US machines since 2013. Log books were used to record machine failures and their root causes together with the time spent on repair, all of which were retrieved, categorized, and analyzed for the period between 2013 and 2016. Results: There were a total of 108 cases of failure in these 5 US machines during the 4-year study period. The average number of failures per month for all these machines was 2.4. Failure analysis showed that there were 33 cases (30.5%) due to software, 44 cases (40.7%) due to hardware, and 31 cases (28.7%) due to US probes. There was a statistically significant negative correlation between the time spent on regular quality assurance (QA) by hospital physicists and the time spent on faulty part replacement over the study period (P = 0.007). However, there was no statistically significant correlation between regular QA time and total yearly breakdown cases (P = 0.12), although a decreasing trend has been observed in the yearly total breakdowns. Conclusion: There has been a significant improvement in the failure rate of US machines, attributed to the concerted effort of sonographers and physicists in our department to practice the PdM method, in that system component repair time has been reduced and a decreasing trend in the number of system breakdowns has been observed.

  5. Soil washing technology evaluation

    International Nuclear Information System (INIS)

    Suer, A.

    1995-04-01

    Environmental Restoration Engineering (ERE) continues to review innovative, efficient, and cost-effective technologies for SRS soil and/or groundwater remediation. As part of this effort, this technical evaluation provides a review of, and the latest information on, the technology for SRS soil remediation. Additional technology evaluation reports will be issued periodically to update these reports. The purpose of this report is to review the soil washing technology and its potential application to SRS soil remediation. To assess whether soil washing is a viable option for SRS soil remediation, it is necessary to review the technology/process, its advantages/limitations, performance, applications, and cost analysis.

  6. Washing scaling of GeneChip microarray expression

    Directory of Open Access Journals (Sweden)

    Krohn Knut

    2010-05-01

    Background: Post-hybridization washing is an essential part of microarray experiments. Both the quality of the experimental washing protocol and adequate consideration of washing in intensity calibration ultimately affect the quality of the expression estimates extracted from the microarray intensities. Results: We conducted experiments on GeneChip microarrays with altered protocols for washing, scanning and staining to study the probe-level intensity changes as a function of the number of washing cycles. For calibration and analysis of the intensity data we make use of the 'hook' method, which allows intensity contributions due to non-specific and specific hybridization of perfect match (PM) and mismatch (MM) probes to be disentangled in a sequence-specific manner. On average, washing according to the standard protocol removes about 90% of the non-specific background, and about 30-50% and less than 10% of the specific targets from the MM and PM probes, respectively. Analysis of the washing kinetics shows that the signal-to-noise ratio doubles roughly every ten stringent washing cycles. Washing can be characterized by time-dependent rate constants which reflect the heterogeneous character of target binding to microarray probes. We propose an empirical washing function which estimates the survival of probe-bound targets. It depends on the intensity contribution due to specific and non-specific hybridization per probe, which can be estimated for each probe using existing methods. The washing function allows probe intensities to be calibrated for the effect of washing. On a relative scale, proper calibration for washing markedly increases expression measures, especially in the limit of small and large values. Conclusions: Washing is among the factors which potentially distort expression measures. The proposed first-order correction method allows direct implementation in existing calibration algorithms for microarray data. We provide an experimental
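
    A first-order washing model of the kind described above can be sketched as follows. Both rate constants are invented, chosen only to roughly reproduce the removal fractions quoted in the abstract (~90% of non-specific background and <10% of specific PM targets removed by a standard ~10-cycle wash); the constant-rate form deliberately ignores the time-dependent rates the authors report.

```python
import math

# The measured probe intensity is treated as the sum of a specific (S) and a
# non-specific (N) contribution, each decaying exponentially with the number
# of stringent washing cycles at its own rate (assumed constants, see above).
k_specific = 0.01     # per cycle: slow loss of specifically bound targets
k_nonspecific = 0.23  # per cycle: fast loss of non-specific background

def washed_intensity(S0, N0, cycles):
    """Surviving specific and non-specific intensity after washing."""
    S = S0 * math.exp(-k_specific * cycles)
    N = N0 * math.exp(-k_nonspecific * cycles)
    return S, N

# After 10 cycles, roughly 10% of S and 90% of N are removed, so the
# signal-to-noise ratio S/N improves substantially.
S, N = washed_intensity(S0=1000.0, N0=1000.0, cycles=10)
snr_before = 1000.0 / 1000.0
snr_after = S / N
```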

  7. A Review for Detecting Gene-Gene Interactions Using Machine Learning Methods in Genetic Epidemiology

    Directory of Open Access Journals (Sweden)

    Ching Lee Koo

    2013-01-01

    Recently, the greatest statistical and computational challenge in genetic epidemiology has been to identify and characterize the genes that interact with other genes and with environmental factors to affect complex multifactorial diseases. These gene-gene interactions are also denoted as epistasis, a phenomenon that cannot be handled by traditional statistical methods due to the high dimensionality of the data and the occurrence of multiple polymorphisms. Hence, several machine learning methods, namely neural networks (NNs), support vector machines (SVMs), and random forests (RFs), are used to identify such susceptibility genes in common multifactorial diseases. This paper gives an overview of machine learning methods, describing the methodology of each and its application in detecting gene-gene and gene-environment interactions. Lastly, this paper discusses each machine learning method and presents its strengths and weaknesses in detecting gene-gene interactions in complex human disease.

  8. Machine learning for medical ultrasound: status, methods, and future opportunities.

    Science.gov (United States)

    Brattain, Laura J; Telfer, Brian A; Dhyani, Manish; Grajo, Joseph R; Samir, Anthony E

    2018-04-01

    Ultrasound (US) imaging is the most commonly performed cross-sectional diagnostic imaging modality in the practice of medicine. It is low-cost, non-ionizing, portable, and capable of real-time image acquisition and display. US is a rapidly evolving technology with significant challenges and opportunities. Challenges include high inter- and intra-operator variability and limited image quality control. Tremendous opportunities have arisen in the last decade as a result of exponential growth in available computational power coupled with progressive miniaturization of US devices. As US devices become smaller, enhanced computational capability can contribute significantly to decreasing variability through advanced image processing. In this paper, we review leading machine learning (ML) approaches and research directions in US, with an emphasis on recent ML advances. We also present our outlook on future opportunities for ML techniques to further improve clinical workflow and US-based disease diagnosis and characterization.

  9. A Modified Method Combined with a Support Vector Machine and Bayesian Algorithms in Biological Information

    Directory of Open Access Journals (Sweden)

    Wen-Gang Zhou

    2015-06-01

    With the deepening of research in genomics and proteomics, the number of new protein sequences has expanded rapidly. Given the obvious shortcomings of high cost and low efficiency of traditional experimental methods, computational methods for protein localization prediction have attracted a lot of attention due to their convenience and low cost. Among machine learning techniques, neural networks and support vector machines (SVMs) are often used as learning tools. Due to its complete theoretical framework, the SVM has been widely applied. In this paper, we improve the existing support vector machine algorithm by combining it with Bayesian algorithms, and a new improved algorithm has been developed. The proposed algorithm can improve calculation efficiency, and defects of the original algorithm are eliminated. Verification has proved the method to be valid. At the same time, it can reduce calculation time and improve prediction efficiency.

  10. A modeling method for hybrid energy behaviors in flexible machining systems

    International Nuclear Information System (INIS)

    Li, Yufeng; He, Yan; Wang, Yan; Wang, Yulin; Yan, Ping; Lin, Shenlong

    2015-01-01

    Increasing environmental and economic pressures have led to great concerns regarding the energy consumption of machining systems. Understanding the energy behaviors of flexible machining systems is a prerequisite for improving their energy efficiency. This paper proposes a modeling method to predict energy behaviors in flexible machining systems. The hybrid energy behaviors not only depend on the technical specifications of machine tools and workpieces, but are also significantly affected by individual production scenarios. In the method, hybrid energy behaviors are decomposed into structure-related, state-related, process-related and assignment-related energy behaviors. The modeling method for the hybrid energy behaviors is based on Colored Timed Object-oriented Petri Nets (CTOPN). The former two types of energy behaviors are modeled by constructing the structure of the CTOPN, whilst the latter two types are simulated by applying colored tokens and associated attributes. Machining experiments on two workpieces in the workshop were undertaken to verify the proposed modeling method. The results showed that the method can provide multi-perspective transparency on energy consumption related to machine tools, workpieces and production management, and is particularly suitable for flexible manufacturing systems, where frequent changes in machining systems are often encountered. - Highlights: • Energy behaviors in flexible machining systems are modeled in this paper. • Hybrid characteristics of energy behaviors are examined from multiple viewpoints. • The flexible modeling method CTOPN is used to predict the hybrid energy behaviors. • This work offers multi-perspective transparency on energy consumption

  11. Wash Your Hands

    Centers for Disease Control (CDC) Podcasts

    2010-03-08

    This video shows kids how to properly wash their hands, one of the most important steps we can take to avoid getting sick and spreading germs to others.  Created: 3/8/2010 by Centers for Disease Control and Prevention (CDC).   Date Released: 3/8/2010.

  12. Soil washing treatability study

    International Nuclear Information System (INIS)

    Krstich, M.

    1995-12-01

    Soil washing was identified as a viable treatment process option for remediating soil at the FEMP Environmental Management Project (FEMP). Little information relative to the specific application and potential effectiveness of the soil washing process exists that applies to the types of soil at the FEMP. To properly evaluate this process option in conjunction with the ongoing FEMP Remedial Investigation/Feasibility Study (RI/FS), a treatability testing program was necessary to provide a foundation for a detailed technical evaluation of the viability of the process. In August 1991, efforts were initiated to develop a work plan and experimental design for investigating the effectiveness of soil washing on FEMP soil. In August 1992, the final Treatability Study Work Plan for Operable Unit 5: Soil Washing (DOE 1992) was issued. This document shall be referenced throughout the remainder of this report as the Treatability Study Work Plan (TSWP). The purpose of this treatability study was to generate data to support initial screening and the detailed analysis of alternatives for the Operable Unit 5 FS

  13. Comparing of cogging torque reduction methods in permanent magnet machines with fractional slot windings

    Science.gov (United States)

    Pristup, A. G.; Toporkov, D. M.

    2017-10-01

    The results of an investigation of the cogging torque in permanent magnet synchronous machines, which is caused by stator slotting and rotor eccentricity, are presented in the paper. A new machine design has been developed in the course of the investigation, and the cogging torque in this design is considerably lower than in other designs. In contrast to the available methods of cogging torque reduction, the suggested solution not only decreases the level of the cogging torque but also has a negligibly small influence on the characteristics of a machine with the rotor eccentricity that is typical of mass production and long-term use.

  14. Research progress in machine learning methods for gene-gene interaction detection.

    Science.gov (United States)

    Peng, Zhe-Ye; Tang, Zi-Jun; Xie, Min-Zhu

    2018-03-20

    Complex diseases are the results of gene-gene and gene-environment interactions. However, the detection of high-dimensional gene-gene interactions is computationally challenging. In the last two decades, machine-learning approaches have been developed to detect gene-gene interactions with some success. In this review, we summarize the progress in research on machine learning methods as applied to gene-gene interaction detection. We systematically examine the principles and limitations of the current machine learning methods used in genome-wide association studies (GWAS) to detect gene-gene interactions, such as neural networks (NN), random forest (RF), support vector machines (SVM) and multifactor dimensionality reduction (MDR), and provide some insights on future research directions in the field.

  15. Floor-Fractured Craters through Machine Learning Methods

    Science.gov (United States)

    Thorey, C.

    2015-12-01

    Floor-fractured craters are impact craters that have undergone post-impact deformations. They are characterized by shallow floors with a plate-like or convex appearance, wide floor moats, and radial, concentric, and polygonal floor-fractures. While the origin of these deformations has long been debated, it is now generally accepted that they are the result of the emplacement of shallow magmatic intrusions below their floor. These craters thus constitute an efficient tool to probe the importance of intrusive magmatism from the lunar surface. The most recent catalog of lunar floor-fractured craters references about 200 of them, mainly located around the lunar maria. Herein, we will discuss the possibility of using machine learning algorithms to try to detect new floor-fractured craters on the Moon among the 60,000 craters referenced in the most recent catalogs. In particular, we will use the gravity field provided by the Gravity Recovery and Interior Laboratory (GRAIL) mission, and the topographic dataset obtained from the Lunar Orbiter Laser Altimeter (LOLA) instrument, to design a set of representative features for each crater. We will then discuss the possibility of designing a binary supervised classifier, based on these features, to discriminate between the presence or absence of a crater-centered intrusion below a specific crater. First predictions from different classifiers, in terms of their accuracy and uncertainty, will be presented.

  16. Multi-method automated diagnostics of rotating machines

    Science.gov (United States)

    Kostyukov, A. V.; Boychenko, S. N.; Shchelkanov, A. V.; Burda, E. A.

    2017-08-01

    The automated machinery diagnostics and monitoring systems utilized within the petrochemical plants are an integral part of the measures taken to ensure safety and, as a consequence, the efficiency of these industrial facilities. Such systems are often limited in their functionality due to the specifics of the diagnostic techniques adopted. As the diagnostic techniques applied in each system are limited, and machinery defects can have different physical nature, it becomes necessary to combine several diagnostics and monitoring systems to control various machinery components. Such an approach is inconvenient, since it requires additional measures to bring the diagnostic results in a single view of the technical condition of production assets. In this case, we mean by a production facility a bonded complex of a process unit, a drive, a power source and lines. A failure of any of these components will cause an outage of the production asset, which is unacceptable. The purpose of the study is to test a combined use of vibration diagnostics and partial discharge techniques within the diagnostic systems of enterprises for automated control of the technical condition of rotating machinery during maintenance and at production facilities. The described solutions allow you to control the condition of mechanical and electrical components of rotating machines. It is shown that the functionality of the diagnostics systems can be expanded with minimal changes in technological chains of repair and operation of rotating machinery. Automation of such systems reduces the influence of the human factor on the quality of repair and diagnostics of the machinery.

  17. Method and apparatus for improving the quality and efficiency of ultrashort-pulse laser machining

    Science.gov (United States)

    Stuart, Brent C.; Nguyen, Hoang T.; Perry, Michael D.

    2001-01-01

    A method and apparatus for improving the quality and efficiency of machining of materials with laser pulse durations shorter than 100 picoseconds, by orienting and maintaining the polarization of the laser light such that the electric field vector is perpendicular to the edges of the material being processed. It applies to any machining operation requiring remote delivery and/or high precision with minimal collateral damage.

  18. Vibration Prediction Method of Electric Machines by using Experimental Transfer Function and Magnetostatic Finite Element Analysis

    International Nuclear Information System (INIS)

    Saito, A; Kuroishi, M; Nakai, H

    2016-01-01

    This paper concerns the noise and structural vibration caused by rotating electric machines. Special attention is given to the magnetic-force-induced vibration response of interior permanent magnet machines. In general, to accurately predict and control the vibration response caused by electric machines, it is essential to model not only the magnetic force induced by the fluctuation of magnetic fields, but also the structural dynamic characteristics of the electric machines and surrounding structural components. However, due to the complicated boundary conditions and material properties of the components, such as laminated magnetic cores and varnished windings, it has been a challenge to compute an accurate vibration response even after physical models are available. In this paper, we propose a highly accurate vibration prediction method that couples experimentally obtained discrete structural transfer functions and numerically obtained distributed magnetic forces. The proposed vibration synthesis methodology has been applied to predict vibration responses of an interior permanent magnet machine. The results show that the predicted vibration response of the electric machine agrees very well with the measured vibration response for several load conditions, over wide frequency ranges.

  19. A human-machine cooperation route planning method based on improved A* algorithm

    Science.gov (United States)

    Zhang, Zhengsheng; Cai, Chao

    2011-12-01

    To avoid the limitation of common route planning methods, which blindly pursue higher machine intelligence and automation, this paper presents a human-machine cooperation route planning method. The proposed method includes a new A* path searching strategy based on dynamic heuristic searching, and a human-cooperated decision strategy to prune the searching area. It can overcome the tendency of the A* algorithm to fall into prolonged local searches. Experiments showed that this method can quickly plan a feasible route that satisfies macro-level policy considerations.
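
    A minimal grid-based A* search, the baseline algorithm the paper improves upon, can be sketched as follows. The grid, the Manhattan heuristic, and the unit step cost are illustrative; the paper's dynamic heuristic and human-pruned search area are not reproduced here.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid. grid: list of strings, '#' = obstacle.
    Returns the shortest path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    open_set = [(h(start), 0, start)]  # (f = g + h, g, cell) priority queue
    best_g = {start: 0}
    while open_set:
        f, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] != '#':
                ng = g + 1
                if ng < best_g.get(nxt, float('inf')):  # found a cheaper route to nxt
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None

grid = ["....",
        ".##.",
        "...."]
cost = a_star(grid, (0, 0), (2, 3))  # shortest path length on this grid is 5
```

    A human-in-the-loop variant would simply mark additional cells as blocked before the search, shrinking the area A* has to explore.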

  20. Performance of machine learning methods for ligand-based virtual screening.

    Science.gov (United States)

    Plewczynski, Dariusz; Spieser, Stéphane A H; Koch, Uwe

    2009-05-01

    Computational screening of compound databases has become increasingly popular in pharmaceutical research. This review focuses on the evaluation of ligand-based virtual screening using active compounds as templates in the context of drug discovery. Ligand-based screening techniques are based on comparative molecular similarity analysis of compounds with known and unknown activity. We provide an overview of publications that have evaluated different machine learning methods, such as support vector machines, decision trees, ensemble methods (such as boosting, bagging and random forests), clustering methods, neural networks, naïve Bayesian classifiers, data fusion methods and others.

  1. Predicting Coronal Mass Ejections Using Machine Learning Methods

    Science.gov (United States)

    Bobra, M. G.; Ilonidis, S.

    2016-04-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections (CMEs). Usually, solar active regions that produce large flares will also produce a CME, but this is not always true. Despite advances in numerical modeling, it is still unclear which circumstances will produce a CME. Therefore, it is worthwhile to empirically determine which features distinguish flares associated with CMEs from flares that are not. At this time, no extensive study has used physically meaningful features of active regions to distinguish between these two populations. As such, we attempt to do so by using features derived from (1) photospheric vector magnetic field data taken by the Solar Dynamics Observatory’s Helioseismic and Magnetic Imager instrument and (2) X-ray flux data from the Geostationary Operational Environmental Satellite’s X-ray Flux instrument. We build a catalog of active regions that either produced both a flare and a CME (the positive class) or simply a flare (the negative class). We then use machine-learning algorithms to (1) determine which features distinguish these two populations, and (2) forecast whether an active region that produces an M- or X-class flare will also produce a CME. We compute the True Skill Statistic, a forecast verification metric, and find that it is a relatively high value of ∼0.8 ± 0.2. We conclude that a combination of six parameters, which are all intensive in nature, will capture most of the relevant information contained in the photospheric magnetic field.
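
    The True Skill Statistic used above as the forecast verification metric has a simple closed form over a binary confusion matrix. The counts below are made up for illustration, not taken from the paper.

```python
# True Skill Statistic (TSS), also known as the Hanssen-Kuipers discriminant:
# TSS = hit rate - false alarm rate = TP/(TP+FN) - FP/(FP+TN).
# It ranges from -1 to 1 and, unlike accuracy, is insensitive to the
# class imbalance between CME-producing and non-CME-producing flares.
def true_skill_statistic(tp, fn, fp, tn):
    return tp / (tp + fn) - fp / (fp + tn)

# Illustrative counts: 40 correctly forecast CMEs, 10 missed,
# 5 false alarms, 45 correct rejections.
tss = true_skill_statistic(tp=40, fn=10, fp=5, tn=45)  # 0.8 - 0.1 = 0.7
```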

  2. PREDICTING CORONAL MASS EJECTIONS USING MACHINE LEARNING METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Bobra, M. G.; Ilonidis, S. [W.W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2016-04-20

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections (CMEs). Usually, solar active regions that produce large flares will also produce a CME, but this is not always true. Despite advances in numerical modeling, it is still unclear which circumstances will produce a CME. Therefore, it is worthwhile to empirically determine which features distinguish flares associated with CMEs from flares that are not. At this time, no extensive study has used physically meaningful features of active regions to distinguish between these two populations. As such, we attempt to do so by using features derived from (1) photospheric vector magnetic field data taken by the Solar Dynamics Observatory’s Helioseismic and Magnetic Imager instrument and (2) X-ray flux data from the Geostationary Operational Environmental Satellite’s X-ray Flux instrument. We build a catalog of active regions that either produced both a flare and a CME (the positive class) or simply a flare (the negative class). We then use machine-learning algorithms to (1) determine which features distinguish these two populations, and (2) forecast whether an active region that produces an M- or X-class flare will also produce a CME. We compute the True Skill Statistic, a forecast verification metric, and find that it is a relatively high value of ∼0.8 ± 0.2. We conclude that a combination of six parameters, which are all intensive in nature, will capture most of the relevant information contained in the photospheric magnetic field.

  3. The effect of cooking and washing rice on the bio-accessibility of As, Cu, Fe, V and Zn using an on-line continuous leaching method.

    Science.gov (United States)

    Horner, Nolan S; Beauchemin, Diane

    2013-01-03

    A previously developed method based on continuous on-line leaching with artificial gastro-intestinal fluids was used to determine the bio-accessible fraction of As, Cu, Fe, V and Zn in brown and white rice from California by inductively coupled plasma mass spectrometry (ICP-MS). Saliva generally accounted for the largest percentage of total element leached in comparison to gastric and intestinal juices. Arsenic speciation analysis was performed on the saliva and gastric juice leachates using ion exchange chromatography coupled to ICP-MS. The four most toxic species of As (As(III), monomethylarsonic acid (MMA), dimethylarsinic acid (DMA) and As(V)), as well as Cl(-) in the gastric juice leachate, were successfully separated within 5.5 min using a simple nitric acid gradient. While cooking rice had relatively little effect on total bio-accessibility, a change in species from As(V) and DMA to As(III) was observed for both types of rice. On the other hand, washing the rice with doubly deionized water prior to cooking removed a large percentage of the total bio-accessible fraction of As, Cu, Fe, V and Zn.

  4. Machine Maintenance Scheduling with Reliability Engineering Method and Maintenance Value Stream Mapping

    Science.gov (United States)

    Sembiring, N.; Nasution, A. H.

    2018-02-01

    Corrective maintenance, i.e., replacing or repairing a machine component after the machine breaks down, is common practice in manufacturing companies. It requires the production process to be stopped: production time decreases while the maintenance team replaces or repairs the damaged machine component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine at a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability engineering and Maintenance Value Stream Mapping are used as a method and a tool to analyze the reliability of the component and to reduce wastage in the process by segregating value-added and non-value-added activities.

  5. Classification of older adults with/without a fall history using machine learning methods.

    Science.gov (United States)

    Lin Zhang; Ou Ma; Fabre, Jennifer M; Wood, Robert H; Garcia, Stephanie U; Ivey, Kayla M; McCann, Evan D

    2015-01-01

    Falling is a serious problem in an aging society, and assessment of the risk of falls for individuals is imperative for the research and practice of falls prevention. This paper introduces an application of several machine learning methods for training a classifier capable of classifying individual older adults into a high-risk group and a low-risk group (distinguished by whether or not the members of the group have a recent history of falls). Using a 3D motion capture system, significant gait features related to falls risk are extracted. By training on these features, classification hypotheses are obtained based on machine learning techniques (k-nearest-neighbour, naive Bayes, logistic regression, neural network, and support vector machine). Training and test accuracies, with sensitivity and specificity, of each of these techniques are assessed. The feature adjustment and tuning of the machine learning algorithms are discussed. The outcome of the study will benefit the prediction and prevention of falls.
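
    The classification setup described above can be sketched as follows. The two gait features, their values, and the cohort are entirely synthetic, and a hand-rolled k-nearest-neighbour rule stands in for the suite of classifiers (KNN, naive Bayes, logistic regression, NN, SVM) compared in the paper.

```python
import numpy as np

# Synthetic cohort: each subject has two gait features (e.g. gait speed and
# double-support time fraction; values are invented) and a label
# (1 = fall history, 0 = none).
rng = np.random.default_rng(0)
fallers = rng.normal(loc=[0.8, 0.30], scale=0.05, size=(20, 2))
non_fallers = rng.normal(loc=[1.2, 0.20], scale=0.05, size=(20, 2))
X = np.vstack([fallers, non_fallers])
y = np.array([1] * 20 + [0] * 20)

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(nearest.sum() > k / 2)

# Leave-one-out accuracy over the synthetic cohort: classify each subject
# using a classifier trained on all the others.
hits = sum(knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
           for i in range(len(X)))
accuracy = hits / len(X)
```

    Sensitivity and specificity follow the same pattern, counting correct predictions separately over the fall-history and no-history subgroups.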

  6. Design Method for Fast Switching Seat Valves for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Roemer, Daniel Beck; Johansen, Per; Pedersen, Henrik C.

    2014-01-01

    Digital Displacement (DD) machines are an upcoming technology where the displacement of each pressure chamber is controlled electronically by use of two fast switching seat valves. The effective displacement and operation type (pumping/motoring) may be controlled by manipulating the seat valves corresponding to the piston movement, which has been shown to facilitate superior part load efficiency combined with high bandwidth compared to traditional displacement machines. However, DD machines need fast switching on-off valves with low pressure loss for efficient operation, especially in fast rotating machines. A design method for DD seat valves is presented, taking into account the significant aspects related to obtaining efficient DD valves on the basis of given DD machine specifications. The seat area is minimized and the stroke length is minimized to obtain fast switching times while considering the pressure loss.

  7. Wash-oil problem

    Energy Technology Data Exchange (ETDEWEB)

    Chlosta, J

    1941-01-01

    Meier-Grolman and others have deduced from experimental studies of the vapor pressure of solutions of benzene in paraffin oil and in Solway oil-paraffin oil mixtures that the higher the proportion of aliphatic compounds in a wash oil, the less suitable it is for benzene scrubbing. This generalization is not supported. Paraffin oils from brown-coal tar and low-viscosity oils from the Fischer-Tropsch hydrocarbon synthesis process are both being used successfully for benzene scrubbing.

  8. Assessing a Novel Method to Reduce Anesthesia Machine Contamination: A Prospective, Observational Trial

    Directory of Open Access Journals (Sweden)

    Chuck J. Biddle

    2018-01-01

    Background. Anesthesia machines are known reservoirs of bacterial species, potentially contributing to healthcare-associated infections (HAIs). An inexpensive, disposable, nonpermeable, transparent anesthesia machine wrap (AMW) may reduce microbial contamination of the anesthesia machine. This study quantified the density and diversity of bacterial species found on anesthesia machines after terminal cleaning and between cases during actual anesthesia care to assess the impact of the AMW. We hypothesized reduced bioburden with the use of the AMW. Methods. In a prospective, experimental research design, the AMW was used in 11 surgical cases (intervention group) and not used in 11 control surgical cases. Cases were consecutively assigned to general surgical operating rooms. Seven frequently touched and difficult-to-disinfect “hot spots” were cultured on each machine preceding and following each case. The density and diversity of cultured colony forming units (CFUs) between the covered and uncovered machines were compared using the Wilcoxon signed-rank test and Student’s t-tests. Results. There was a statistically significant reduction in CFU density and diversity when the AMW was employed. Conclusion. The protective effect of the AMW during regular anesthetic care provides a reliable and low-cost method to minimize the transmission of pathogens across patients and potentially reduces HAIs.

  9. A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines

    Energy Technology Data Exchange (ETDEWEB)

    Meza, Juan C.; Woods, Mark

    2009-12-18

    Machine or statistical learning is a growing field that encompasses many scientific problems, including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three type Ia supernova data sets provided by the Nearby Supernova Factory and found that, while both methods give accuracies of approximately 95%, the rule ensemble method gives a much lower false negative rate.
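    The accuracy-versus-false-negative-rate comparison above can be sketched as follows. RuleFit-style rule ensembles are not part of scikit-learn, so a gradient-boosting ensemble is used here purely as a stand-in, and the data are synthetic; nothing below reproduces the paper's experiment:

```python
# Compare an SVM with a tree-ensemble stand-in for the rule ensemble method,
# reporting accuracy and false-negative rate on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = make_classification(n_samples=500, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def evaluate(model):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    return accuracy_score(y_te, pred), fn / (fn + tp)  # accuracy, FN rate

svm_acc, svm_fnr = evaluate(SVC())
ens_acc, ens_fnr = evaluate(GradientBoostingClassifier(random_state=1))
```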

  10. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna

    2013-01-01

    machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods...
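    The simplest of the four approaches, local maxima search, can be illustrated in one dimension with SciPy (the real IMS data are 2D heat maps; the signal below is synthetic):

```python
# Local maxima search on a synthetic 1D signal with two Gaussian peaks.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 1000)
signal = np.exp(-(t - 3) ** 2 / 0.05) + 0.6 * np.exp(-(t - 7) ** 2 / 0.05)

# Require a minimum height and separation to suppress noise-induced maxima.
peaks, props = find_peaks(signal, height=0.3, distance=50)
peak_positions = t[peaks]  # two peaks expected, near t = 3 and t = 7
```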

  11. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool path development techniques were reviewed briefly. By using geometric definition methods, the tool path was generated and a CNC part program was developed for the FANUC control unit. The developed program makes the CNC part program generation process easy. The program was developed using the BASIC 6.0 programming language, while the material and cutting tool databases were created and supported with the help of ACCESS 7.0.
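    The core idea, turning a geometrically defined tool path into NC code, can be sketched generically. The G-code words below are common FANUC-style turning codes used as generic examples; they are not the output of the program described in the paper:

```python
# Illustrative NC code generation: convert a tool path (X/Z points) into
# simple FANUC-style G-code lines.
def to_gcode(path, feed=0.2):
    lines = ["G28 U0 W0", "G50 S2000"]            # reference return / spindle limit
    x0, z0 = path[0]
    lines.append(f"G00 X{x0:.3f} Z{z0:.3f}")      # rapid move to start point
    for x, z in path[1:]:
        lines.append(f"G01 X{x:.3f} Z{z:.3f} F{feed}")  # linear cutting move
    return lines

program = to_gcode([(50.0, 2.0), (48.0, 0.0), (48.0, -30.0)])
```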

  12. Comparison of fractionation methods for nitrogen and starch in maize and grass silages

    NARCIS (Netherlands)

    Ali, M; de Jonge, L H; Cone, J W; van Duinkerken, G; Blok, M C; Bruinenberg, M H; Hendriks, W H

    2016-01-01

    In the in situ nylon bag technique, many feed evaluation systems use a washing machine method (WMM) to determine the washout (W) fraction and to wash the rumen-incubated nylon bags. As this method has some disadvantages, an alternative modified method (MM) was recently introduced. The aim of this study

  13. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset in order to compare their efficiency in the sense of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to compute the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistical significance between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
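    The evaluation design described above (10-fold cross-validation of the four methods, then a pairwise t-test on fold scores) can be sketched with scikit-learn and SciPy. The data here are synthetic and all model settings are illustrative:

```python
# 10-fold CV of ANN, CART, SVM and kNN, then a paired t-test on fold accuracies.
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
models = {
    "ANN": MLPClassifier(max_iter=2000, random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
}
scores = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}

# Paired t-test on per-fold accuracies, e.g. ANN vs. kNN:
t_stat, p_value = ttest_rel(scores["ANN"], scores["kNN"])
```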

  14. Method for assembling dynamoelectric machine end shield parts

    International Nuclear Information System (INIS)

    Thomson, J.M.

    1984-01-01

    Methods, apparatus, and systems are provided for automatically assembling end shield assemblies or subassemblies for electric motors. In a preferred form, a system and methods are provided that utilize a non-palletized, non-synchronous concept to convey end shields through a number of assembly stations. At process stations situated along a conveyor, operations are performed on components. One method includes controlling traffic of sub-assemblies by toggle-type escapements. A stop or latch of unique design stops end shield components in midstream, and ''lifts'' of unique design disengage parts from the conveyor and also support such parts during various operations. Photo-optic devices and proximity and reed switch mechanisms are utilized for control purposes. The work stations involved in one system include a unique assembly and pressing station involving oil well covers; a unique feed wick seating system; a unique lubricant adding operation; and unique ''building block'' mechanisms and methods

  15. A human-machine interface evaluation method: A difficulty evaluation method in information searching (DEMIS)

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2009-01-01

    A human-machine interface (HMI) evaluation method, named the 'difficulty evaluation method in information searching (DEMIS)', is proposed and demonstrated with an experimental study. The DEMIS is based on a human performance model and two measures of attentional-resource effectiveness in monitoring and detection tasks in nuclear power plants (NPPs). Operator competence and HMI design are modeled as the most significant factors in human performance. One of the two effectiveness measures is the fixation-to-importance ratio (FIR), which represents the attentional resource (eye fixations) spent on an information source relative to the importance of that source. The other measure is selective attention effectiveness (SAE), which incorporates the FIRs of all information sources. The underlying principle of the measures is that an information source should be selectively attended to according to its informational importance. In this study, poor performance in information searching tasks is modeled as being coupled with difficulties caused by poor operator mental models and/or poor HMI design. Human performance in information searching tasks is evaluated by analyzing the FIR and the SAE. Operator mental models are evaluated by a questionnaire-based method. Difficulties caused by a poor HMI design are then evaluated by a focused interview based on the FIR evaluation, and the root causes of poor performance are identified in a systematic way.
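    The two measures can be illustrated with a small sketch. The abstract does not give the paper's exact formulas, so the definitions below are assumptions: FIR is taken as the share of fixations on a source divided by its share of importance, and SAE as an importance-weighted aggregate of how close each FIR is to 1:

```python
# Illustrative FIR/SAE computation (formulas assumed, not from the paper).
def fir(fixations, importance):
    fix_total, imp_total = sum(fixations), sum(importance)
    return [(f / fix_total) / (i / imp_total) for f, i in zip(fixations, importance)]

def sae(fixations, importance):
    ratios = fir(fixations, importance)
    weights = [i / sum(importance) for i in importance]
    # Perfect selective attention (every FIR == 1) yields SAE == 1.
    return 1.0 - sum(w * abs(r - 1.0) for w, r in zip(weights, ratios)) / 2

fixations = [40, 30, 20, 10]   # eye fixations per information source
importance = [4, 3, 2, 1]      # informational importance ratings
```

    With fixations distributed exactly in proportion to importance, as above, every FIR is 1 and the SAE reaches its maximum.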

  16. Space cutter compensation method for five-axis nonuniform rational basis spline machining

    Directory of Open Access Journals (Sweden)

    Yanyu Ding

    2015-07-01

    In view of the good machining performance of traditional three-axis nonuniform rational basis spline (NURBS) interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple NURBS five-axis interpolation method, which uses three NURBS curves to describe the cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated in the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three NURBS curves are transformed into a 12-dimensional Bézier curve for discretization. With the cutter contact point trajectory as the precision control condition, the discretization is fast. For different cutters and corners, a complete description method of the space cutter compensation vector is presented. Finally, the five-axis NURBS machining method is verified on a two-turntable five-axis machine.
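    The discretization step works on a Bézier curve, and Bézier evaluation by de Casteljau's algorithm is the same in any dimension. A minimal sketch with invented 3D control points (the paper's curve is 12-dimensional, but the recursion is identical):

```python
# De Casteljau evaluation of a Bézier curve at parameter t.
def de_casteljau(points, t):
    pts = [list(p) for p in points]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [[0, 0, 0], [1, 2, 0], [3, 2, 1], [4, 0, 2]]   # invented control points
mid = de_casteljau(ctrl, 0.5)   # point at the mid-parameter of the cubic
```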

  17. Pollutants Characterization of Car Wash Wastewater

    Directory of Open Access Journals (Sweden)

    Hashim Nor Haslina

    2016-01-01

    The huge quantity of water consumed per car during washing yields untreated effluents discharged to the stormwater system. Wastewater samples from a snow-foam car wash and two full hand-service car wash stations were analyzed for pH and for the presence of PO43-, TP, O&G, alkalinity, TSS, NO3-, NO2-, COD and surfactant in accordance with Standard Methods for the Examination of Water and Wastewater (2012). Two full hand-wash service stations and one snow-foam service station were investigated in this study. Among the stations, the snow-foam car wash station showed the highest concentrations of PO43-, TP, O&G, TSS, COD and surfactant, with average values of 10.18 ± 0.87 mg/L, 30.93 ± 0.31 mg/L, 85.00 ± 0.64 mg/L, 325.0 ± 0.6 mg/L, 485.0 ± 0.3 mg/L and 54.00 ± 2.50 mg/L as MBAS, respectively. In the characterization of parameters at different stages of the car wash process, O&G was found to be highest in the pre-soak stage, PO43-, TP, TSS and COD in the washing stage, and NO3- and NO2- in the rinse stage. All parameters were compared to the Environmental Quality (Industrial Effluent) Regulations 2009. There is a strong need to characterize car wash water in order to suggest the suitable treatment needed for this type of wastewater.

  18. Method Of Wire Insertion For Electric Machine Stators

    Science.gov (United States)

    Brown, David L; Stabel, Gerald R; Lawrence, Robert Anthony

    2005-02-08

    A method of inserting coils in slots of a stator is provided. The method includes interleaving a first set of first phase windings and a first set of second phase windings on an insertion tool. The method also includes activating the insertion tool to radially insert the first set of first phase windings and the first set of second phase windings in the slots of the stator. In one embodiment, interleaving the first set of first phase windings and the first set of second phase windings on the insertion tool includes forming the first set of first phase windings in first phase openings defined in the insertion tool, and forming the first set of second phase windings in second phase openings defined in the insertion tool.

  19. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    Science.gov (United States)

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied to build the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflected the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between any pair of approaches. Small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power.
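    The modelling approach above (undersample the majority class, train GBDT/RF/LR, compare by AUC) can be sketched with scikit-learn. The data below are synthetic and imbalanced only for illustration; nothing here reproduces the check-up dataset:

```python
# Undersampling + GBDT/RF/LR comparison by AUC on synthetic imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=3000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Undersample: keep all minority rows, sample an equal number of majority rows.
rng = np.random.default_rng(0)
minority, majority = np.where(y_tr == 1)[0], np.where(y_tr == 0)[0]
keep = np.concatenate([minority, rng.choice(majority, size=len(minority), replace=False)])
X_bal, y_bal = X_tr[keep], y_tr[keep]

aucs = {}
for name, model in {
    "GBDT": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}.items():
    model.fit(X_bal, y_bal)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```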

  20. A REVIEW OF VIBRATION MACHINE DIAGNOSTICS BY USING ARTIFICIAL INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    Grover Zurita

    2016-09-01

    In industry, gear and rolling-bearing failures are among the foremost causes of breakdown in rotating machines, reducing production availability and resulting in costly system downtime. There are therefore growing demands for vibration condition-based monitoring of gears and bearings, and any method to improve the effectiveness, reliability, and accuracy of bearing fault diagnosis ought to be evaluated. In order to perform machine diagnosis efficiently, researchers have extensively investigated different advanced digital signal processing techniques and artificial intelligence methods to accurately extract fault characteristics from vibration signals. The main goal of this article is to present the state-of-the-art development in vibration analysis for machine diagnosis based on artificial intelligence methods.

  1. A Novel Cogging Torque Simulation Method for Permanent-Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Chun-Yu Hsiao

    2011-12-01

    Cogging torque exists between rotor-mounted permanent magnets and stator teeth due to magnetic attraction; it is an undesired phenomenon which produces output ripple, vibration and noise in machines. The purpose of this paper is to study the existence and effects of cogging torque, and to present a novel, rapid, half-magnet-pole-pair technique for forecasting and evaluating cogging torque. The technique uses the finite element method as well as Matlab research and development oriented software tools to reduce numerous computing jobs and simulation time. An example of a rotor-skewed structure used to reduce the cogging torque of permanent magnet synchronous machines is evaluated and compared with a conventional analysis method for the same motor to verify the effectiveness of the proposed approach. The novel method is proved valuable and suitable for large-capacity machine design.

  2. Assessing a Novel Method to Reduce Anesthesia Machine Contamination: A Prospective, Observational Trial.

    Science.gov (United States)

    Biddle, Chuck J; George-Gay, Beverly; Prasanna, Praveen; Hill, Emily M; Davis, Thomas C; Verhulst, Brad

    2018-01-01

    Anesthesia machines are known reservoirs of bacterial species, potentially contributing to healthcare associated infections (HAIs). An inexpensive, disposable, nonpermeable, transparent anesthesia machine wrap (AMW) may reduce microbial contamination of the anesthesia machine. This study quantified the density and diversity of bacterial species found on anesthesia machines after terminal cleaning and between cases during actual anesthesia care to assess the impact of the AMW. We hypothesized reduced bioburden with the use of the AMW. In a prospective, experimental research design, the AMW was used in 11 surgical cases (intervention group) and not used in 11 control surgical cases. Cases were consecutively assigned to general surgical operating rooms. Seven frequently touched and difficult to disinfect "hot spots" were cultured on each machine preceding and following each case. The density and diversity of cultured colony forming units (CFUs) between the covered and uncovered machines were compared using Wilcoxon signed-rank test and Student's t-tests. There was a statistically significant reduction in CFU density and diversity when the AMW was employed. The protective effect of the AMW during regular anesthetic care provides a reliable and low-cost method to minimize the transmission of pathogens across patients and potentially reduces HAIs.
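    The two statistical tests named above are available directly in SciPy. A minimal sketch on invented paired CFU counts (the numbers are illustrative, not the study's data):

```python
# Paired comparison of CFU counts from covered vs. uncovered machines.
from scipy.stats import wilcoxon, ttest_rel

covered   = [12, 8, 15, 6, 9, 11, 7, 10, 5, 13, 9]    # hypothetical CFU counts
uncovered = [30, 25, 41, 18, 22, 35, 19, 28, 16, 33, 26]

w_stat, w_p = wilcoxon(covered, uncovered)   # non-parametric paired test
t_stat, t_p = ttest_rel(covered, uncovered)  # parametric paired t-test
```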

  3. A Method for Solving the Voltage and Torque Equations of the Split-Phase Induction Machines

    Directory of Open Access Journals (Sweden)

    G. A. Olarinoye

    2013-06-01

    Single-phase induction machines have been the subject of much research in recent times. The voltage and torque equations which describe the dynamic characteristics of these machines have been quoted in many papers, including papers that present simulation results for these model equations. The way in which these equations are solved is, however, seldom described in the literature. This paper presents a detailed procedure for solving these equations for the split-phase induction machine, which is one of the different types of single-phase induction machine available on the market. In addition, these equations have been used to simulate the start-up response of the split-phase induction motor on no-load. The free-acceleration characteristics of the motor voltages, currents and electromagnetic torque have been plotted and discussed. The simulation results presented include the instantaneous torque-speed characteristics of the split-phase induction machine. A block diagram of the method for the solution of the machine equations has also been presented.

  4. One method for life time estimation of a bucket wheel machine for coal moving

    Science.gov (United States)

    Vîlceanu, Fl; Iancu, C.

    2016-08-01

    Rehabilitation of outdated equipment whose lifetime has expired, or is in its final period, together with the high investment cost of replacement, makes the effort to extend its life rational. Rehabilitation involves checking operational safety based on relevant expertise of the load-bearing metal structures and assessing the residual lifetime. Bucket wheel machines constitute the basic machinery of the coal deposits of power plants. The remaining life can be estimated by checking the loading on the most stressed subassembly by finite element analysis of a welding detail. The paper presents, step by step, the method of calculus applied in order to establish the residual lifetime of a bucket wheel machine for coal moving using non-destructive methods of study (fatigue cracking analysis + FEA). In order to establish the actual state of the machine and the areas subject to study, FEA of this mining equipment was performed on the geometric model of the analyzed mechanical structures with powerful CAD/FEA programs. By applying the method, the residual lifetime can be calculated by extending the results from the most stressed area of the equipment to the entire machine, thus saving the time and money of expensive replacements.
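    The abstract gives no formulas; as an illustration only, residual fatigue life is often estimated with the Palmgren-Miner linear damage rule, sketched here with hypothetical load-cycle data for a welded detail (this is a standard textbook rule, not necessarily the paper's method):

```python
# Palmgren-Miner cumulative damage: D = sum(n_i / N_i); failure at D >= 1.
def miner_damage(cycles_applied, cycles_to_failure):
    return sum(n / N for n, N in zip(cycles_applied, cycles_to_failure))

# Hypothetical stress-block history:
n_applied = [2.0e5, 5.0e4, 1.0e4]   # cycles endured at each stress range
n_failure = [1.0e6, 2.0e5, 4.0e4]   # S-N curve life at each stress range

damage = miner_damage(n_applied, n_failure)   # 0.2 + 0.25 + 0.25 = 0.70
remaining_fraction = 1.0 - damage             # fraction of life remaining
```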

  5. MODAL ANALYSIS OF CARRIER SYSTEM FOR HEAVY HORIZONTAL MULTIFUNCTION MACHINING CENTER BY FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    Yu. V. Vasilevich

    2014-01-01

    The aim of the paper is to reveal and analyze resonance modes of a large-scale milling-drilling-boring machine. The machine has a movable column with a vertical slot occupied by a symmetrical carriage with a horizontal ram. Static rigidity of the machine is relatively low due to its large dimensions, so it is necessary to assess possible vibration activity. Virtual and operational trials of the machine have been carried out simultaneously. Modeling has been executed with the help of the finite element method (FEM). The FEM model takes into account not only the rigidity of machine structures but also the flexibility of bearings, feed drive systems and guides. Modal FEM analysis has revealed eight resonance modes that embrace the whole machine tool. They form a frequency interval from 12 to 75 Hz which is undesirable for machining. Three closely located resonances (31-37 Hz) are considered the most dangerous ones. They represent various combinations of three simple motions: vertical oscillations of the carriage, horizontal vibrations of the ram and column torsion. Reliability of the FEM estimations has been proved by in-situ vibration measurements. An effect of stabilization of resonance modes has been detected while making variations in design parameters of the machine tool. For example, a virtual replacement of cast iron by steel in machine structures has practically no effect on resonance frequencies. A rigidity increase in some parts (e.g. the ram) also has a small effect on the resonance pattern. On the other hand, resonance stability makes it possible to avoid resonances while selecting a spindle rotation frequency. It is recommended to set double feed drives for all axes. A pair of vertical screws prevents a “pecking” resonance of the carriage at a frequency of 54 Hz. It is necessary to foresee operation of the main drive of such a heavy machine tool in the above-resonance range, with a spindle frequency of more than 75 Hz. For this purpose it is necessary

  6. Use of green washing fluids in a washing process for dioxin contaminated soils

    Directory of Open Access Journals (Sweden)

    Siwalee Yotapukdee

    2017-09-01

    High levels of dioxin contamination in soil pose significant environmental challenges. Soil washing is a successful remediation process that is primarily used to treat coarse soils. Several literature studies have used various kinds of chemical washing liquids to remove dioxins from soils, though these have secondary environmental effects. This study intends to develop environmentally friendly soil washing methods that are effective in dioxin removal at an acceptable cost. Sugarcane wine, compost leachate, and ground fish broth were chosen as potential washing liquids. Each washing liquid was analyzed to determine its content of semivolatile organic compounds (SVOCs) and volatile organic compounds (VOCs); these compounds are related to their bio-surfactant content. Several of the identified compounds had properties that help remove dioxins from contaminated soil. In the experiments, high removal efficiencies were observed, up to 70%~95% after five to six washes. Although effective removal was observed, a significant amount of wastewater was produced and the problems were not completely resolved. Thus, optimal washing conditions are necessary to minimize the overall costs while improving the process effectiveness. Moreover, an appropriate treatment method is required for wastewater containing dioxins.

  7. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  8. Detecting Milling Deformation in 7075 Aluminum Alloy Aeronautical Monolithic Components Using the Quasi-Symmetric Machining Method

    Directory of Open Access Journals (Sweden)

    Qiong Wu

    2016-04-01

    The deformation of aeronautical monolithic components due to CNC machining is a bottleneck issue in the aviation industry. Residual stress releases and redistributes in the process of material removal, and distortion of the monolithic component is generated. The traditional one-side machining method produces oversized deformation. Based on the three-stage CNC machining method, the quasi-symmetric machining method is developed in this study to reduce deformation through symmetric material removal, using the M-symmetry distribution law of residual stress. The mechanism of milling deformation due to residual stress is investigated. A deformation experiment was conducted using the traditional one-side machining method and the quasi-symmetric machining method for comparison with the finite element method (FEM). The deformation parameters are validated by the comparative results; most of the errors are within 10%, and the reasons for these errors are determined to improve the reliability of the method. Moreover, the maximum deformation value obtained with the quasi-symmetric machining method is within 20% of that of the traditional one-side machining method. This result shows that the quasi-symmetric machining method is effective in reducing deformation caused by residual stress. Thus, this research introduces an effective method for reducing the deformation of monolithic thin-walled components in the CNC milling process.

  9. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Selection of the most suitable non-conventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, operational competitiveness ratings analysis (OCRA), for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated by solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates the usefulness of this method for solving complex NCMP selection problems.
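    A compact sketch of the OCRA procedure as it is commonly stated in the MCDM literature follows; the weights and decision matrix below are invented for illustration, and the formulation should be checked against the paper before reuse:

```python
# OCRA: aggregate cost (non-beneficial) and benefit criteria separately,
# normalize each by subtracting the column minimum, then sum and re-normalize.
def ocra(matrix, weights, beneficial):
    """matrix[i][j]: alternative i on criterion j; beneficial[j]: True if higher is better."""
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    I = [sum(weights[j] * (max(cols[j]) - matrix[i][j]) / min(cols[j])
             for j in range(n) if not beneficial[j]) for i in range(m)]
    O = [sum(weights[j] * (matrix[i][j] - min(cols[j])) / min(cols[j])
             for j in range(n) if beneficial[j]) for i in range(m)]
    I = [v - min(I) for v in I]
    O = [v - min(O) for v in O]
    P = [i + o for i, o in zip(I, O)]
    return [p - min(P) for p in P]  # higher overall rating = preferred alternative

# Three hypothetical NCMPs rated on cost (lower is better) and
# material removal rate (higher is better):
ratings = ocra([[50, 0.8], [70, 1.5], [60, 1.0]],
               weights=[0.5, 0.5], beneficial=[False, True])
```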

  10. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  11. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    Science.gov (United States)

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking-mapping loads, or even disasters, in product operation. It is important to help people avoid human-machine interaction confusion and difficulty in today's mentally demanding society. The objective is to improve the usability of a product and minimize the user's thinking-mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking-mapping problem, an operating-action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is first uniquely determined. Then creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface-state datasets. Finally, using the cluster-analysis method, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to mental-load-minimization problems in human-machine interaction design.
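    The selection step, picking the alternative nearest the ideal design, can be sketched as a simple distance minimization. The feature vectors below are invented, and Euclidean distance is an assumption (the paper's distance measure is not given in the abstract):

```python
# Pick the candidate design whose feature vector is nearest the ideal design.
import math

def nearest_to_ideal(ideal, alternatives):
    def dist(a):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, ideal)))
    return min(range(len(alternatives)), key=lambda i: dist(alternatives[i]))

ideal = [1.0, 0.0, 0.5]                                   # hypothetical ideal design
candidates = [[0.9, 0.4, 0.6], [0.5, 0.1, 0.5], [1.0, 0.1, 0.4]]
best = nearest_to_ideal(ideal, candidates)                # index of the optimum
```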

  12. Application of machine learning methods for traffic signs recognition

    Science.gov (United States)

    Filatov, D. V.; Ignatev, K. V.; Deviatkin, A. V.; Serykh, E. V.

    2018-02-01

    This paper focuses on solving a relevant and pressing safety issue on intercity roads. Two approaches were considered for solving the problem of traffic signs recognition; the approaches involved neural networks to analyze images obtained from a camera in the real-time mode. The first approach is based on a sequential image processing. At the initial stage, with the help of color filters and morphological operations (dilatation and erosion), the area containing the traffic sign is located on the image, then the selected and scaled fragment of the image is analyzed using a feedforward neural network to determine the meaning of the found traffic sign. Learning of the neural network in this approach is carried out using a backpropagation method. The second approach involves convolution neural networks at both stages, i.e. when searching and selecting the area of the image containing the traffic sign, and when determining its meaning. Learning of the neural network in the second approach is carried out using the intersection over union function and a loss function. For neural networks to learn and the proposed algorithms to be tested, a series of videos from a dash cam were used that were shot under various weather and illumination conditions. As a result, the proposed approaches for traffic signs recognition were analyzed and compared by key indicators such as recognition rate percentage and the complexity of neural networks’ learning process.
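
The intersection-over-union measure mentioned in this record is a standard overlap score for detection boxes. A minimal generic sketch (not the paper's actual training code, and box coordinates here are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two partially overlapping boxes: intersection 1, union 7.
overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

An IoU of 1.0 means identical boxes, 0.0 means no overlap; detectors are typically trained and evaluated against an IoU threshold.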

  13. Non Machinable Volume Calculation Method for 5-Axis Roughing Based on Faceted Models through Closed Bounded Area Evaluation

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2017-01-01

    Full Text Available The increase in the volume of rough machining in the CBV area is one of the indicators of increased efficiency of the machining process. Normally, this area is not subject to rough machining, so the volume of the remaining material is still large. With the addition of CC points and tool orientations for the CBV area on a complex surface, finishing will be faster because the volume of excess material left for this process is reduced. This paper presents a method for calculating the volume of the parts on which no further machining can take place, particularly for rough machining of a complex object. By comparing the total volume of raw material with the volume of the machined area, the volume of residual material, on which machining cannot be performed, can be determined. The total machined volume accounts for machining of both the CBV and non-CBV areas, using Delaunay triangulation of the triangles covering the machining and CBV areas. The volume is calculated using the divergence (Gauss) theorem, based on the direction of the normal vector of each triangle. This method can be used as an alternative for selecting the rough machining strategy with the minimum non-machinable volume, so that effectiveness can be achieved in the machining process.
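
The divergence-theorem volume computation on a faceted (triangulated) model reduces to summing signed tetrahedron volumes, one per triangle. A minimal generic sketch of that idea (not the paper's implementation; the test mesh is a simple tetrahedron chosen for illustration):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, outward-oriented triangle mesh.

    By the divergence (Gauss) theorem, summing the signed volumes
    v0 . (v1 x v2) / 6 of the tetrahedra spanned by the origin and
    each outward-wound triangle yields the enclosed volume.
    """
    total = 0.0
    for i, j, k in faces:
        v0, v1, v2 = vertices[i], vertices[j], vertices[k]
        total += np.dot(v0, np.cross(v1, v2)) / 6.0
    return total

# Unit right tetrahedron with outward-wound faces; its volume is 1/6.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, faces)
```

The sign of each term depends on the triangle winding, which is why consistent outward normals matter in this kind of computation.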

  14. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to ...

  15. Improved method for SNR prediction in machine-learning-based test

    NARCIS (Netherlands)

    Sheng, Xiaoqin; Kerkhoff, Hans G.

    2010-01-01

    This paper applies an improved method for testing the signal-to-noise ratio (SNR) of Analogue-to-Digital Converters (ADC). In previous work, a noisy and nonlinear pulse signal is exploited as the input stimulus to obtain the signature results of ADC. By applying a machine-learning-based approach,

  16. Feasibility of Machine Learning Methods for Separating Wood and Leaf Points from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Wang, D.; Hollaus, M.; Pfeifer, N.

    2017-09-01

    Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning is emerging as a promising solution for this task. Intensity based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry based methods, machine learning algorithms are often used to separate wood and leaf points, by providing proper training samples. However, it remains unclear how the chosen machine learning classifier and features used would influence classification results. To this end, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points from terrestrial laser scanning (TLS) data. Two trees, an Erytrophleum fordii and a Betula pendula (silver birch) are used to test the impacts from classifier, feature set, and training samples. Our results showed that RF is the best model in terms of accuracy, and local density related features are important. Experimental results confirmed the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. It is also noted that our studies are based on isolated trees. Further tests should be performed on more tree species and data from more complex environments.

  17. FEASIBILITY OF MACHINE LEARNING METHODS FOR SEPARATING WOOD AND LEAF POINTS FROM TERRESTRIAL LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-09-01

    Full Text Available Classification of wood and leaf components of trees is an essential prerequisite for deriving vital tree attributes, such as wood mass, leaf area index (LAI) and woody-to-total area. Laser scanning is emerging as a promising solution for this task. Intensity based approaches are widely proposed, as different components of a tree can feature discriminatory optical properties at the operating wavelengths of a sensor system. For geometry based methods, machine learning algorithms are often used to separate wood and leaf points, by providing proper training samples. However, it remains unclear how the chosen machine learning classifier and features used would influence classification results. To this end, we compare four popular machine learning classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB), Random Forest (RF), and Gaussian Mixture Model (GMM), for separating wood and leaf points from terrestrial laser scanning (TLS) data. Two trees, an Erytrophleum fordii and a Betula pendula (silver birch) are used to test the impacts from classifier, feature set, and training samples. Our results showed that RF is the best model in terms of accuracy, and local density related features are important. Experimental results confirmed the feasibility of machine learning algorithms for the reliable classification of wood and leaf points. It is also noted that our studies are based on isolated trees. Further tests should be performed on more tree species and data from more complex environments.

  18. Method of control of machining accuracy of low-rigidity elastic-deformable shafts

    Directory of Open Access Journals (Sweden)

    Antoni Świć

    Full Text Available The paper presents an analysis of the possibility of increasing the accuracy and stability of machining of low-rigidity shafts while ensuring high efficiency and economy of their machining. An effective way of improving the accuracy of machining of shafts is increasing their rigidity through an oriented change of the elastic-deformable state, achieved by applying a tensile force which, combined with the machining force, forms longitudinal-lateral strains. The paper also presents mathematical models describing the changes of the elastic-deformable state resulting from the application of the tensile force. It presents the results of experimental studies on the deformation of elastic low-rigidity shafts, performed on a special test stand developed on the basis of a lathe. The effectiveness of the method of controlling the elastic-deformable state, using tensile force and eccentricity as the regulating effects, was estimated. It was demonstrated that by controlling these two parameters, tensile force and eccentricity, one can improve the accuracy of machining and thus achieve a theoretically assumed level of accuracy.

  19. Optimization of the Machining Parameters of LM6 Aluminium Alloy in CNC Turning Using the Taguchi Method

    Science.gov (United States)

    Arunkumar, S.; Muthuraman, V.; Baskaralal, V. P. M.

    2017-03-01

    Due to the widespread use of highly automated machine tools in industry, manufacturing requires reliable models and methods for predicting the output performance of machining processes. In the machining of parts, surface quality is one of the most commonly specified customer requirements. For manufacturers to maximize their gains from utilizing CNC turning, accurate predictive models for surface roughness must be constructed. The prediction of optimum machining conditions for good surface finish plays an important role in process planning. This work deals with the study and development of a surface roughness prediction model for machining LM6 aluminium alloy. Two important tools used in parameter design are Taguchi orthogonal arrays and the signal-to-noise (S/N) ratio. Speed, feed, depth of cut and coolant are taken as process parameters at three levels. Taguchi’s parameter design is employed to perform the experiments based on the various levels of the chosen parameters. The statistical analysis yields the optimum combination of speed, feed, depth of cut and coolant for obtaining good roughness on the cylindrical components. The result obtained through the Taguchi method is confirmed with experimental work.
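
The signal-to-noise ratio used in Taguchi parameter design has standard textbook forms; for a smaller-the-better response such as surface roughness it is S/N = -10 log10(mean(y^2)). A minimal generic sketch (the roughness readings below are hypothetical, not the paper's data):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio for a smaller-the-better response,
    e.g. surface roughness: S/N = -10 * log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

# Hypothetical roughness readings (micrometres) for two settings;
# the setting with the HIGHER S/N ratio is preferred.
sn_a = sn_smaller_the_better([1.2, 1.4, 1.3])
sn_b = sn_smaller_the_better([0.8, 0.9, 0.7])
```

Because lower roughness gives a higher S/N, the second (smoother) setting wins in this toy comparison.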

  20. Retention system and method for the blades of a rotary machine

    Science.gov (United States)

    Pedersen, Poul D.; Glynn, Christopher C.; Walker, Roger C.

    2002-01-01

    A retention system and method for the blades of a rotary machine for preventing forward or aft axial movement of the rotor blades includes a circumferential hub slot formed about a circumference of the machine hub. The rotor blades have machined therein a blade retention slot which is aligned with the circumferential hub slot when the blades are received in correspondingly shaped openings in the hub. At least one ring segment is secured in the blade retention slots and the circumferential hub slot to retain the blades from axial movement. A key assembly is used to secure the ring segments in the aligned slots via a hook portion receiving the ring segments and a threaded portion that is driven radially outwardly by a nut. A cap may be provided to provide a redundant back-up load path for the centrifugal loads on the key. Alternatively, the key assembly may be formed in the blade dovetail.

  1. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcome using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables, we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
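
The k-NN probability estimate discussed in this record is simply the class fraction among the k nearest training points. A minimal self-contained sketch (the toy data are illustrative, not from the paper):

```python
import numpy as np

def knn_class_probability(X, y, query, k=3):
    """Estimate P(class | query) as the fraction of the k nearest
    training points (Euclidean distance) belonging to each class."""
    dists = np.linalg.norm(X - query, axis=1)
    nearest = y[np.argsort(dists)[:k]]
    classes = np.unique(y)
    return {c: float(np.mean(nearest == c)) for c in classes}

# Toy dichotomous outcome: class 1 clusters near (1, 1).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
probs = knn_class_probability(X, y, np.array([0.95, 0.95]), k=3)
```

Unlike logistic regression, this estimator makes no parametric assumption about the decision boundary, which is the consistency argument the paper builds on.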

  2. Solvent wash solution

    International Nuclear Information System (INIS)

    Neace, J.C.

    1986-01-01

    This patent describes a process for removing diluent degradation products from a solvent extraction solution comprising an admixture of an organic extractant for uranium and plutonium and a non-polar organic liquid diluent, which has been used to recover uranium and plutonium from spent nuclear fuel. The process comprises combining a wash solution consisting of (a) water and (b) a positive amount, up to and including 50 volume percent, of at least one highly-polar water-miscible organic solvent, based on the total volume of the water and the highly-polar organic solvent, with the solvent extraction solution after uranium and plutonium values have been stripped from it. The diluent degradation products dissolve in the highly-polar organic solvent, while the extractant and diluent of the extraction solution do not; the highly-polar organic solvent and the extraction solution are then separated to obtain a purified extraction solution.

  3. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    Science.gov (United States)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-02-01

    For a drug, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article is on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  4. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts

    Directory of Open Access Journals (Sweden)

    Hongbin Yang

    2018-02-01

    Full Text Available During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article is on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  5. In Silico Prediction of Chemical Toxicity for Drug Design Using Machine Learning Methods and Structural Alerts.

    Science.gov (United States)

    Yang, Hongbin; Sun, Lixia; Li, Weihua; Liu, Guixia; Tang, Yun

    2018-01-01

    During drug development, safety is always the most important issue, including a variety of toxicities and adverse drug effects, which should be evaluated in preclinical and clinical trial phases. This review article first briefly introduces the computational methods used in prediction of chemical toxicity for drug design, including machine learning methods and structural alerts. Machine learning methods have been widely applied in qualitative classification and quantitative regression studies, while structural alerts can be regarded as a complementary tool for lead optimization. The emphasis of this article is on the recent progress of predictive models built for various toxicities. Available databases and web servers were also provided. Though the methods and models are very helpful for drug design, there are still some challenges and limitations to be improved for drug safety assessment in the future.

  6. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machines (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark comprising 100 challenging video sequences, the average precision of the proposed method is 82.8%.
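
The FFT-based detection mentioned in this record exploits the fact that correlation becomes an elementwise product in the Fourier domain. A minimal 1-D sketch of that trick (not the tracker's actual 2-D filter; signal and shift are hypothetical):

```python
import numpy as np

def correlation_peak(signal, template):
    """Locate a template in a 1-D signal via circular cross-correlation
    computed in the Fourier domain, the speed trick used by
    correlation-filter trackers for detection."""
    score = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(template)))
    return int(np.argmax(score.real))

template = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0])
signal = np.roll(template, 3)          # template shifted by 3 samples
shift = correlation_peak(signal, template)
```

The argmax of the correlation response recovers the displacement of the target; a real tracker does the same with 2-D FFTs over image patches.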

  7. Design of a new torque standard machine based on a torque generation method using electromagnetic force

    International Nuclear Information System (INIS)

    Nishino, Atsuhiro; Ueda, Kazunaga; Fujii, Kenichi

    2017-01-01

    To allow the application of torque standards in various industries, we have been developing torque standard machines based on a lever deadweight system, i.e. a torque generation method using gravity. However, this method is not suitable for expanding the low end of the torque range, because of the limitations to the sizes of the weights and moment arms. In this study, the working principle of the torque generation method using an electromagnetic force was investigated by referring to watt balance experiments used for the redefinition of the kilogram. Applying this principle to a rotating coordinate system, an electromagnetic force type torque standard machine was designed and prototyped. It was experimentally demonstrated that SI-traceable torque could be generated by converting electrical power to mechanical power. Thus, for the first time, SI-traceable torque was successfully realized using a method other than that based on the force of gravity. (paper)
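
The watt-balance idea referenced here equates electrical and mechanical power; applied to a rotating system it gives torque from purely electrical quantities. The following is a generic sketch of that power balance, not the actual calibration equations of the prototype machine:

```latex
% U: voltage induced across the coil rotating at angular velocity \omega;
% I: current driven through the coil.
P_{\mathrm{elec}} = U I, \qquad P_{\mathrm{mech}} = \tau \omega,
\qquad U I = \tau \omega
\quad\Longrightarrow\quad
\tau = \frac{U I}{\omega}.
```

Since voltage, current, and angular velocity are all traceable to SI electrical and time standards, the generated torque is SI-traceable without recourse to gravity.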

  8. Gamma/hadron segregation for a ground based imaging atmospheric Cherenkov telescope using machine learning methods: Random Forest leads

    International Nuclear Information System (INIS)

    Sharma Mradul; Koul Maharaj Krishna; Mitra Abhas; Nayak Jitadeepa; Bose Smarajit

    2014-01-01

    A detailed case study of γ-hadron segregation for a ground based atmospheric Cherenkov telescope is presented. We have evaluated and compared various supervised machine learning methods such as the Random Forest method, Artificial Neural Network, Linear Discriminant method, Naive Bayes Classifiers, Support Vector Machines as well as the conventional dynamic supercut method by simulating triggering events with the Monte Carlo method and applied the results to a Cherenkov telescope. It is demonstrated that the Random Forest method is the most sensitive machine learning method for γ-hadron segregation. (research papers)

  9. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving machine tool accuracy by increasing the precision of key components during production, a method combining the SNR quality loss function with correlation analysis of machine tool geometric errors is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the geometric error model of the five-axis machine tool. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is a reasonable and appropriate way to relax tolerance ranges and thereby reduce the manufacturing cost of machine tools.
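
Homogeneous-transformation-matrix (HTM) error modelling chains small-error 4x4 transforms along the machine's kinematic chain. A minimal generic sketch of that idea under the usual small-angle assumption (error values and the two-transform chain below are hypothetical, not the paper's model):

```python
import numpy as np

def error_transform(dx, dy, dz, ex, ey, ez):
    """Small-angle homogeneous transform for one axis' geometric errors:
    translational errors (dx, dy, dz) and angular errors (ex, ey, ez),
    all assumed small so second-order terms are dropped."""
    return np.array([
        [1.0, -ez,  ey,  dx],
        [ ez, 1.0, -ex,  dy],
        [-ey,  ex, 1.0,  dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Chain two axes' error transforms and map a tool point 100 mm along z.
T = error_transform(1e-3, 0.0, 0.0, 0.0, 0.0, 1e-4) @ \
    error_transform(0.0, 2e-3, 0.0, 0.0, 1e-4, 0.0)
p = T @ np.array([0.0, 0.0, 100.0, 1.0])
```

The deviation of `p` from the nominal position is the volumetric error that the optimization then trades off against component tolerances.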

  10. Hand washing promotion for preventing diarrhoea

    Science.gov (United States)

    Ejemot-Nwadiaro, Regina I; Ehiri, John E; Arikpo, Dachi; Meremikwu, Martin M; Critchley, Julia A

    2015-01-01

    Background Diarrhoea accounts for 1.8 million deaths in children in low- and middle-income countries (LMICs). One of the identified strategies to prevent diarrhoea is hand washing. Objectives To assess the effects of hand washing promotion interventions on diarrhoeal episodes in children and adults. Search methods We searched the Cochrane Infectious Diseases Group Specialized Register (27 May 2015); CENTRAL (published in the Cochrane Library 2015, Issue 5); MEDLINE (1966 to 27 May 2015); EMBASE (1974 to 27 May 2015); LILACS (1982 to 27 May 2015); PsycINFO (1967 to 27 May 2015); Science Citation Index and Social Science Citation Index (1981 to 27 May 2015); ERIC (1966 to 27 May 2015); SPECTR (2000 to 27 May 2015); Bibliomap (1990 to 27 May 2015); RoRe, The Grey Literature (2002 to 27 May 2015); World Health Organization (WHO) International Clinical Trial Registry Platform (ICTRP), metaRegister of Controlled Trials (mRCT), and reference lists of articles up to 27 May 2015. We also contacted researchers and organizations in the field. Selection criteria Individually randomized controlled trials (RCTs) and cluster-RCTs that compared the effects of hand washing interventions on diarrhoea episodes in children and adults with no intervention. Data collection and analysis Three review authors independently assessed trial eligibility, extracted data, and assessed risk of bias. We stratified the analyses for child day-care centres or schools, community, and hospital-based settings. Where appropriate, incidence rate ratios (IRR) were pooled using the generic inverse variance method and random-effects model with 95% confidence intervals (CIs). We used the GRADE approach to assess the quality of evidence. Main results We included 22 RCTs: 12 trials from child day-care centres or schools in mainly high-income countries (54,006 participants), nine community-based trials in LMICs (15,303 participants), and one hospital-based trial among people with acquired immune deficiency

  11. A new method of machine vision reprocessing based on cellular neural networks

    International Nuclear Information System (INIS)

    Jianhua, W.; Liping, Z.; Fenfang, Z.; Guojian, H.

    1996-01-01

    This paper proposes a method of image preprocessing for machine vision based on Cellular Neural Networks (CNN). CNN are introduced to address image smoothing, image recovery, image boundary detection and other image preprocessing problems. The proposed methods are simple enough that the algorithms run fast, suiting the needs of real-time image processing. The experimental results show satisfactory performance.

  12. A Novel Application of Machine Learning Methods to Model Microcontroller Upset Due to Intentional Electromagnetic Interference

    Science.gov (United States)

    Bilalic, Rusmir

    A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and the clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 total points with the following experiment variables: injection times with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.

  13. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
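
The area under the ROC curve (AUC) reported in this record has a direct probabilistic reading: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal generic sketch of computing it from raw scores (scores below are hypothetical, not the study's data):

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC via its probabilistic definition: the fraction of
    positive-negative pairs ranked correctly (ties count half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# 5 of 6 positive-negative pairs are ranked correctly here.
auc = auc_from_scores([0.9, 0.8, 0.3], [0.4, 0.2])
```

An AUC of 0.911 for the random forest therefore means 91.1% of malignant/benign pairs were ranked in the right order by the classifier's score.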

  14. Comparing SVM and ANN based Machine Learning Methods for Species Identification of Food Contaminating Beetles.

    Science.gov (United States)

    Bisgin, Halil; Bera, Tanmay; Ding, Hongjian; Semey, Howard G; Wu, Leihong; Liu, Zhichao; Barnes, Amy E; Langley, Darryl A; Pava-Ripoll, Monica; Vyas, Himansu J; Tong, Weida; Xu, Joshua

    2018-04-25

    Insect pests, such as pantry beetles, are often associated with food contaminations and public health risks. Machine learning has the potential to provide a more accurate and efficient solution in detecting their presence in food products, which is currently done manually. In our previous research, we demonstrated such feasibility where Artificial Neural Network (ANN) based pattern recognition techniques could be implemented for species identification in the context of food safety. In this study, we present a Support Vector Machine (SVM) model which improved the average accuracy up to 85%. Contrary to this, the ANN method yielded ~80% accuracy after extensive parameter optimization. Both methods showed excellent genus level identification, but SVM showed slightly better accuracy for most species. Highly accurate species level identification remains a challenge, especially in distinguishing between species from the same genus, which may require improvements in both imaging and machine learning techniques. In summary, our work illustrates a new SVM based technique and provides a good comparison with the ANN model in our context. We believe such insights will pave a better way forward for the application of machine learning to species identification and food safety.

  15. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    Science.gov (United States)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some classes of machines, for example LHD (load-haul-dump) machines, haulage trucks, and drilling/bolting machines, are characterized by cyclic operation. In those cases, identification of cycles and their segments (in other words, data segmentation) is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling process optimization. However, in many cases the raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, efficient smoothing methods are needed that retain the informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
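
The simplest smoother in the family this record reviews is the centered moving average. A minimal sketch on a synthetic step-like duty-cycle signal (the signal and noise level are hypothetical, not mining data):

```python
import numpy as np

def moving_average(x, window):
    """Centered moving-average smoother; 'same'-length output keeps
    the smoothed signal aligned with the original samples, which
    matters for later cycle segmentation."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Noisy two-level signal imitating an idle/loaded duty cycle.
rng = np.random.default_rng(0)
raw = np.concatenate([np.zeros(50), np.ones(50)]) + rng.normal(0, 0.2, 100)
smooth = moving_average(raw, window=9)
```

Averaging over a window of 9 samples cuts the noise variance roughly ninefold while keeping the step (the deterministic cycle boundary) visible, at the cost of slightly blurring its edge.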

  16. Identification of Village Building via Google Earth Images and Supervised Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Zhiling Guo

    2016-03-01

    Full Text Available In this study, a method based on supervised machine learning is proposed to identify village buildings from open high-resolution remote sensing images. We select Google Earth (GE) RGB images to perform the classification in order to examine their suitability for village mapping, and investigate the feasibility of using machine learning methods to provide automatic classification in such fields. By analyzing the characteristics of GE images, we design different features on the basis of two kinds of supervised machine learning methods for classification: adaptive boosting (AdaBoost) and convolutional neural networks (CNN). To recognize village buildings via their color and texture information, the RGB color features and a large number of Haar-like features in a local window are utilized in the AdaBoost method; with multilayer networks trained by gradient descent and back-propagation, the CNN performs the identification by mining deeper information from buildings and their neighborhood. Experimental results from the testing area in Savannakhet province, Laos, show that our proposed AdaBoost method achieves an overall accuracy of 96.22% and the CNN method is also competitive, with an overall accuracy of 96.30%.
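A hedged sketch of the AdaBoost branch of the pipeline above, with random feature vectors standing in for the Haar-like/RGB window features (the data and settings are invented for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for per-window feature vectors (building vs. non-building)
X, y = make_classification(n_samples=400, n_features=30, random_state=1)
clf = AdaBoostClassifier(n_estimators=100, random_state=1)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

AdaBoost's appeal here is that each boosting round selects one weak learner, effectively performing feature selection over the large Haar-like feature pool.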

  17. Missing data imputation using statistical and machine learning methods in a real breast cancer problem.

    Science.gov (United States)

    Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo

    2010-10-01

    Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organisation maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) imputation method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracies of predictions on early cancer relapse were measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed imputation statistical methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the most suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.
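A toy sketch of KNN-based imputation as evaluated in the study above, using scikit-learn's `KNNImputer` on invented data (the study used its own implementations on the "El Álamo-I" records):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Hypothetical records with missing entries (np.nan)
X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [1.2, 2.1, 3.1],
              [2.9, 4.1, 5.9]])
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(np.isnan(X_filled).any())  # False: every gap filled from nearest rows
```

Unlike listwise deletion, every record is retained, which is the point the study makes about prognosis accuracy.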

  18. Ultrafiltration to reuse laundering wash water

    DEFF Research Database (Denmark)

    Giagnorio, Mattia; Søtoft, Lene Fjerbæk; Tiraferri, Alberto

    2017-01-01

    Laundering industry consumes and discharges large amounts of water and surfactants, and the demand of surface active agents used for washing is increasing worldwide. Some of these substances are considered contaminants of emerging concern, as they persist in the environment. This work aimed...... at evaluating the feasibility of ultrafiltration as a method to treat the wash wastewater and possibly reuse the surfactant-rich permeate stream in laundry facilities. In particular, evaluation of surfactant recovery was performed through analysis of the permeate flux and properties obtained through polymeric...... and ceramic membranes. Wash water samples were collected at an industrial laundering facility for hospital linen and filtered through different ultrafiltration membranes with varying molecular weight cut-off. The critical micelle concentration of the detergent was quantified, and capillarity measurements were...

  19. Comparison between different liquid-liquid and solid phase methods of extraction prior to the identification of the phenolic fraction present in olive oil washing wastewater from the two-phase olive oil extraction system

    Directory of Open Access Journals (Sweden)

    S. Jiménez-Herrera

    2017-09-01

    Full Text Available Phenolic compounds from olive mill wastewater (OMW) are characterized by a strong antioxidant activity. At the same time, they represent an environmental problem because they are difficult to degrade. The purpose of this work was to identify these biologically active compounds in the OMW from two-phase olive oil production in order to convert a polluting residue into a source of natural antioxidants. After optimizing the extraction process of phenolic compounds using liquid-liquid extraction (LLE) and solid phase extraction (SPE) methods, it was determined that the most appropriate sequence comprised a previous centrifugation to remove the lipid fraction, followed by liquid extraction with ethyl acetate or SPE. The most important compounds identified in olive oil washing wastewater (OOWW) were tyrosol, hydroxytyrosol and succinic acid; whereas the ones in the wastewater derived from the washing of the olives (OWW) were cresol, catechol, 4-methylcatechol, hydrocinnamic acid and p-hydroxy-hydrocinnamic acid.

  20. Comparison between different liquid-liquid and solid phase methods of extraction prior to the identification of the phenolic fraction present in olive oil washing wastewater from the two-phase olive oil extraction system

    International Nuclear Information System (INIS)

    Jiménez-Herrera, S.; Ochando-Pulido, J.M.; Martínez-Ferez, A.

    2017-01-01

    Phenolic compounds from olive mill wastewater (OMW), are characterized by a strong antioxidant activity. At the same time, they represent an environmental problem because they are difficult to degrade. The purpose of this work was to identify these biologically active compounds in the OMW from two-phase olive oil production in order to convert a polluting residue into a source of natural antioxidants. After optimizing the extraction process of phenolic compounds using liquid-liquid extraction (LLE) and solid phase extraction (SPE) methods, it was determined that the most appropriate sequence comprised a previous centrifugation to remove the lipid fraction, followed by liquid extraction with ethyl acetate or SPE. The most important compounds identified in olive oil washing wastewater (OOWW) were tyrosol, hydroxytyrosol and succinic acid; whereas the ones in the wastewater derived from the washing of the olives (OWW) were cresol, catechol, 4-methylcatechol, hydrocinnamic acid and p-hydroxy-hydrocinnamic acid. [es]

  1. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction helps immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to the curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperforms other machine learning-based approaches. Hence, in this chapter, besides reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell prediction system based on physicochemical features and amino acid combinations is illustrated in detail.

  2. e-Learning Application for Machine Maintenance Process using Iterative Method in XYZ Company

    Science.gov (United States)

    Nurunisa, Suaidah; Kurniawati, Amelia; Pramuditya Soesanto, Rayinda; Yunan Kurnia Septo Hediyanto, Umar

    2016-02-01

    XYZ Company is a manufacturer of airplane parts; one of the machines categorized as a key facility in the company is the Millac 5H6P. As a key facility, the machine must be kept working well and in peak condition; therefore, a maintenance process is needed periodically. From the data gathering, it is known that maintenance staff lack the competency to maintain types of machine not assigned to them by the supervisor, which indicates that the knowledge possessed by maintenance staff is uneven. The purpose of this research is to create a knowledge-based e-learning application as a realization of the externalization step in the knowledge transfer process for machine maintenance. The application features are tailored to maintenance purposes using an e-learning framework for the maintenance process, and the content of the application supports multimedia for learning purposes. QFD is used in this research to understand the needs of users. The application is built using Moodle, with the iterative method as the software development cycle and UML diagrams. The result of this research is an e-learning application serving as a knowledge-sharing medium for maintenance staff in the company. From the test, it is known that the application makes it easy for maintenance staff to understand the competencies.

  3. Hand Washing: Do's and Don'ts

    Science.gov (United States)

    ... hands frequently can help limit the transfer of bacteria, viruses and other microbes. Always wash your hands before: Preparing food or eating Treating wounds or caring for a sick person Inserting or removing contact lenses Always wash your hands after: Preparing food Using ...

  4. A Novel Bearing Fault Diagnosis Method Based on Gaussian Restricted Boltzmann Machine

    Directory of Open Access Journals (Sweden)

    Xiao-hui He

    2016-01-01

    Full Text Available To realize effective fault diagnosis of bearings, this paper presents a novel bearing fault diagnosis method based on a Gaussian restricted Boltzmann machine (Gaussian RBM). Vibration signals are first resampled to the same equivalent speed. Subsequently, the envelope spectrums of the resampled data are used directly as the feature vectors to represent the fault types of the bearing. Finally, in order to deal with the high-dimensional feature vectors based on the envelope spectrum, a classifier model based on the Gaussian RBM is applied. The Gaussian RBM has the ability to provide a closed-form representation of the distribution underlying the training data, and it is very convenient for modeling high-dimensional real-valued data. Experiments on 10 different data sets verify the performance of the proposed method. The superiority of the Gaussian RBM classifier is also confirmed by comparison with other classifiers, such as the extreme learning machine, support vector machine, and deep belief network. The robustness of the proposed method is also studied in this paper. It can be concluded that the proposed method can realize bearing fault diagnosis accurately and effectively.
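A minimal sketch of the envelope-spectrum feature extraction step described above (Hilbert-transform envelope followed by an FFT), on a synthetic amplitude-modulated signal; the resampling step and the Gaussian RBM classifier itself are omitted, and the signal parameters are invented:

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 400 * t)         # resonance excited by impacts
fault = 1 + 0.5 * np.sin(2 * np.pi * 30 * t)  # 30 Hz fault modulation
signal = fault * carrier

envelope = np.abs(hilbert(signal))            # analytic-signal envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print(f"dominant envelope frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")
```

The peak of the envelope spectrum recovers the modulation (fault) frequency rather than the carrier, which is why these spectra make good feature vectors for fault-type classification.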

  5. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools when fault rates are correlated, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the fault relations between components are determined, and an adjacency matrix is introduced to describe them. The fault structure relation is then organized hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combining these with the time-dependent component fault rates yields a comprehensive fault rate. Based on fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
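An illustrative sketch of the PageRank step described above, applied to a hypothetical four-component fault-propagation adjacency matrix (the matrix and damping factor are invented, not the paper's machining-center data):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
    """Power iteration on a column-normalized transition matrix."""
    n = adj.shape[0]
    out = adj.sum(axis=0).astype(float)
    out[out == 0] = 1.0                  # avoid division by zero (dangling)
    M = adj / out                        # column-normalize
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - damping) / n + damping * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# adj[i, j] = 1 means a fault in component j propagates to component i
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
ranks = pagerank(adj)
print(ranks.argsort()[::-1])  # components ordered by relative fault influence
```

Here component 0, which accumulates fault propagation from the others, receives the highest relative influence value.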

  6. Comparison of three methods for the calibration of cobalt-60 teletherapy machine

    International Nuclear Information System (INIS)

    Adewole, O.O.; Akinlade, B.I.; Oyekunle, O.E.; Ejeh, J.

    2011-01-01

    Two methods of indirect determination of the dose rate (machine output) of Cobalt-60 teletherapy machines have been reviewed and compared with conventional measurement using dosimetry devices. The dose rates were determined by: (i) conventional measurement, (ii) application of the law of radioactive decay, and (iii) the assumption of 1% radioactive decay per month. The dose rates at the depth of maximum dose (Zmax), a collimator size of 10 cm x 10 cm and a source-to-skin distance (SSD) of 80 cm obtained from these methods were 203.7200 cGy/min, 203.8090 cGy/min and 203.9530 cGy/min, respectively. The ratio of the dose rate obtained from measurement to that from the calculations is within the tolerance value of 2%.
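A worked sketch of the two indirect estimates compared above: the exact radioactive decay law versus the "1% per month" rule of thumb. The starting dose rate is an invented example, not the paper's measured value:

```python
import math

HALF_LIFE_MONTHS = 5.27 * 12          # Co-60 half-life, about 5.27 years

def decayed_output(d0, months):
    """Exact decay law: D = D0 * exp(-ln2 * t / T_half)."""
    return d0 * math.exp(-math.log(2) * months / HALF_LIFE_MONTHS)

def approx_output(d0, months):
    """Rule of thumb: about 1 % output loss per month."""
    return d0 * (0.99 ** months)

d0 = 250.0                            # cGy/min at last calibration (example)
for m in (6, 12):
    exact, approx = decayed_output(d0, m), approx_output(d0, m)
    print(f"{m:2d} months: decay law {exact:.1f}, 1%/month {approx:.1f} cGy/min")
```

Because the true decay rate of Co-60 is about 1.1% per month, the two estimates stay within the paper's 2% tolerance over a year, consistent with the reported agreement.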

  7. Application of machine-learning methods to solid-state chemistry: ferromagnetism in transition metal alloys

    International Nuclear Information System (INIS)

    Landrum, Gregory A.; Genin, Hugh

    2003-01-01

    Machine-learning methods are a collection of techniques for building predictive models from experimental data. The algorithms are problem-independent: the chemistry and physics of the problem being studied are contained in the descriptors used to represent the known data. The application of a variety of machine-learning methods to the prediction of ferromagnetism in ordered and disordered transition metal alloys is presented. Applying a decision tree algorithm to build a predictive model for ordered phases results in a model that is 100% accurate. The same algorithm achieves 99% accuracy when trained on a data set containing both ordered and disordered phases. Details of the descriptor sets for both applications are also presented
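A toy sketch of the decision-tree approach described above; the descriptor values are random stand-ins, not the transition-metal-alloy dataset from the paper:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Stand-in for alloy descriptors and a ferromagnetic/non-ferromagnetic label
X, y = make_classification(n_samples=300, n_features=8, random_state=2)
tree = DecisionTreeClassifier(random_state=2).fit(X, y)

# An unconstrained tree reproduces its training labels, mirroring the
# reported 100% accuracy on the ordered-phase training set
print(tree.score(X, y))  # → 1.0
```

As the abstract notes, the physics lives entirely in the descriptors; the learning algorithm itself is problem-independent.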

  8. Use of machine learning methods to classify Universities based on the income structure

    Science.gov (United States)

    Terlyga, Alexandra; Balk, Igor

    2017-10-01

    In this paper we discuss the use of machine learning methods such as self-organizing maps, k-means and Ward's clustering to classify universities based on their income. This classification allows a quantitative categorization of universities as teaching, research, entrepreneurial, etc., which is an important tool for government, corporations and the general public alike in setting expectations and selecting universities to achieve different goals.
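A hypothetical sketch of the k-means branch of the approach above; the income shares below are invented purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# rows: universities; columns: share of income from tuition, research
# grants, and commercial/entrepreneurial activity (each row sums to 1)
income = np.array([[0.8, 0.1, 0.1],   # teaching-dominated
                   [0.7, 0.2, 0.1],
                   [0.2, 0.7, 0.1],   # research-dominated
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6],   # entrepreneurial
                   [0.1, 0.3, 0.6]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(income)
print(labels)  # pairs of similar universities share a cluster label
```

With well-separated income profiles the clusters line up with the teaching/research/entrepreneurial categories the paper aims to quantify.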

  9. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

    OpenAIRE

    Zhaoxuan Li; SM Mahbobur Rahman; Rolando Vega; Bing Dong

    2016-01-01

    We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy productions from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work corresponds to 15 min averaged power measurements collected from 2014. The accuracy of the model is determined using computing error statisti...

  10. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances

    Directory of Open Access Journals (Sweden)

    Abut F

    2015-08-01

    Full Text Available Fatih Abut, Mehmet Fatih Akay, Department of Computer Engineering, Çukurova University, Adana, Turkey. Abstract: Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite its high accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment and trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers and cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of the various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely, the multiple correlation coefficient (R) and the standard error of estimate. The survey results reveal that, with respect to the regression methods used to develop prediction models, support vector machine in general shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.
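A sketch of evaluating a VO2max-style regression with the survey's two metrics, the multiple correlation coefficient (R) and the standard error of estimate (SEE). The predictors and target below are synthetic stand-ins, not data from any surveyed study:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))          # e.g. age, BMI, heart rate, speed
y = 50 + X @ np.array([-3.0, -2.0, 1.5, 4.0]) + rng.normal(0, 2, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

pred = SVR(C=10.0).fit(X_tr, y_tr).predict(X_te)
R = np.corrcoef(y_te, pred)[0, 1]                # multiple correlation
SEE = np.sqrt(np.mean((y_te - pred) ** 2))       # standard error of estimate
print(f"R = {R:.2f}, SEE = {SEE:.2f}")
```

Reporting both metrics matters: R measures how well the predictions track the true values, while SEE expresses the typical error in the units of VO2max itself.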

  11. Method for strength calculating of structural elements of mobile machines for flash butt welding of rails

    Directory of Open Access Journals (Sweden)

    Andriy Valeriy Moltasov

    2017-12-01

    Full Text Available Purpose. The subject of this study is the strength of the loaded units of mobile machines for the flash butt welding of high-strength rails. The theme of the work is related to the development of a technique for the strength calculation of the insulation of the central axis of these machines. The aim of the paper is to establish the mathematical dependence of the pressure on the insulation on the magnitude of the deflections of the central axis under the action of the upset force. Design/methodology/approach. Using Mohr's method, the displacements of the investigated sections of the central axis were calculated under the action of the upset force and under an equivalent load distributed along the length of the insulation. The magnitude of the distributed load equivalent to the upset force was determined from the condition that the displacements of the same cross sections are equal under the two loadings. Results. An analytical expression was obtained establishing the relationship between the pressure acting on the insulation, the magnitude of the upset force and the geometric dimensions of the structural elements of the machine. Based on the crushing strength condition for the insulation, an analytical expression was obtained relating the length of the insulation to the size of the upset force, the geometric dimensions of the structural elements of the machine, and the physical and mechanical properties of the insulation material. Originality/cost. The proposed methodology was tested in the calculation and design of the K1045 mobile rail welding machine, four of which are currently in successful use in the USA for welding rails in hard-to-reach places.

  12. Hand Washing Practices and Compliance among Health Care ...

    African Journals Online (AJOL)

    Hand washing is the simplest, most inexpensive and most effective method of reducing the incidence of hospital-acquired infections in the Intensive Care Unit. Several reports have shown a relationship between improved hand washing practices and reduced infection rates. We conducted a prospective, ...

  13. Prediction of Human Drug Targets and Their Interactions Using Machine Learning Methods: Current and Future Perspectives.

    Science.gov (United States)

    Nath, Abhigyan; Kumari, Priyanka; Chaube, Radha

    2018-01-01

    Identification of drug targets and drug target interactions are important steps in the drug-discovery pipeline. Successful computational prediction methods can reduce the cost and time demanded by the experimental methods. Knowledge of putative drug targets and their interactions can be very useful for drug repurposing. Supervised machine learning methods have been very useful in drug target prediction and in prediction of drug target interactions. Here, we describe the details for developing prediction models using supervised learning techniques for human drug target prediction and their interactions.

  14. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    Science.gov (United States)

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy highly depends on the accuracy of machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.
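A minimal sketch of flux-linkage-based torque estimation of the kind outlined above, using the standard dq-frame torque expression T = 1.5 p (psi_d i_q - psi_q i_d); the operating-point values are invented, and the flux estimator and low-pass filter of the paper are not reproduced:

```python
def estimated_torque(psi_d, psi_q, i_d, i_q, pole_pairs):
    """Electromagnetic torque from estimated d/q flux linkages and currents."""
    return 1.5 * pole_pairs * (psi_d * i_q - psi_q * i_d)

# Example operating point (hypothetical IPMSM values)
psi_d, psi_q = 0.12, -0.03     # flux linkages, Wb
i_d, i_q = -40.0, 120.0        # stator currents, A
torque = estimated_torque(psi_d, psi_q, i_d, i_q, pole_pairs=4)
print(f"estimated torque: {torque:.1f} Nm")  # → 79.2 Nm
```

The appeal of this formulation is that it needs no inductance or magnet-flux parameters directly; their uncertainty is absorbed into the estimated flux linkages.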

  15. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique......-perturbation method. The new method proposed is validated using experimental results on two different permanent magnet machines....

  16. Improved machine learning method for analysis of gas phase chemistry of peptides

    Directory of Open Access Journals (Sweden)

    Ahn Natalie

    2008-12-01

    Full Text Available Abstract Background Accurate peptide identification is important to high-throughput proteomics analyses that use mass spectrometry. Search programs compare fragmentation spectra (MS/MS of peptides from complex digests with theoretically derived spectra from a database of protein sequences. Improved discrimination is achieved with theoretical spectra that are based on simulating gas phase chemistry of the peptides, but the limited understanding of those processes affects the accuracy of predictions from theoretical spectra. Results We employed a robust data mining strategy using new feature annotation functions of MAE software, which revealed under-prediction of the frequency of occurrence in fragmentation of the second peptide bond. We applied methods of exploratory data analysis to pre-process the information in the MS/MS spectra, including data normalization and attribute selection, to reduce the attributes to a smaller, less correlated set for machine learning studies. We then compared our rule building machine learning program, DataSqueezer, with commonly used association rules and decision tree algorithms. All used machine learning algorithms produced similar results that were consistent with expected properties for a second gas phase mechanism at the second peptide bond. Conclusion The results provide compelling evidence that we have identified underlying chemical properties in the data that suggest the existence of an additional gas phase mechanism for the second peptide bond. Thus, the methods described in this study provide a valuable approach for analyses of this kind in the future.

  17. Application of PROMETHEE-GAIA method for non-traditional machining processes selection

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2012-10-01

    Full Text Available With ever increasing demand for manufactured products of hard alloys and metals with high surface finish and complex shape geometry, more interest is now being paid to non-traditional machining (NTM) processes, in which energy in its direct form is used to remove material from the workpiece surface. Compared to conventional machining processes, NTM processes possess almost unlimited capabilities, and there is a strong belief that the use of NTM processes will keep increasing in a diverse range of applications. The presence of a large number of NTM processes with complex characteristics and capabilities, along with a lack of experts in the NTM process selection domain, calls for the development of a structured approach to NTM process selection for a given machining application. Past researchers have attempted to solve NTM process selection problems using various complex mathematical approaches which often require profound knowledge of mathematics/artificial intelligence on the part of process engineers. In this paper, four NTM process selection problems are solved using an integrated PROMETHEE (preference ranking organization method for enrichment evaluation) and GAIA (geometrical analysis for interactive aid) method which acts as a visual decision aid for process engineers. The observed results are quite satisfactory and exactly match the expected solutions.
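A condensed sketch of a PROMETHEE II ranking (net outranking flows) for a hypothetical NTM selection problem; the criteria scores, weights, and the usual-criterion preference function are all assumptions for illustration, not the paper's case studies:

```python
import numpy as np

# rows: candidate processes (e.g. EDM, ECM, USM); columns: criteria,
# all normalized so that larger is better
scores = np.array([[0.8, 0.4, 0.6],
                   [0.5, 0.9, 0.3],
                   [0.6, 0.6, 0.9]])
weights = np.array([0.4, 0.35, 0.25])

n = scores.shape[0]
# Usual-criterion preference function: P(a,b) = 1 if a beats b on a criterion
pref = (scores[:, None, :] > scores[None, :, :]).astype(float)
pi = (pref * weights).sum(axis=2)        # aggregated preference indices
phi_plus = pi.sum(axis=1) / (n - 1)      # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)     # negative (entering) flow
phi = phi_plus - phi_minus               # net flow: higher ranks first
print(phi.argsort()[::-1])               # → [2 0 1]
```

GAIA then projects the same preference information onto a plane for visual analysis; the net flows above are the numerical half of that decision aid.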

  18. Development of a test method for sowing machines concerning the drift of dust abrasion

    Directory of Open Access Journals (Sweden)

    Bahmer, Roland

    2014-02-01

    Full Text Available For a long time seed treatment was regarded as the most effective, and in terms of its impact on the natural environment the safest, form of plant protection. Since the serious damage to bees caused by dust abraded from treated seed in the Rhine Valley five years ago, the approval of seed treatment products containing insecticides has been under discussion. To evaluate the emission characteristics of sowing machines during sowing on the basis of reliable data, the technical basis for a test method for measuring the drift of abraded seed dust from sowing machines was developed at the Centre for Agricultural Technology Augustenberg. An indoor test stand was created and a standardized test method was developed by which it is possible to evaluate and compare the drift behavior of sowing machines. A granulate stained with a fluorescent tracer is used as test seed. The tracer technique allows a reproducible, rapid and inexpensive assessment of the drift behavior of commonly used sowing technology. To put the drift values obtained on the test stand into context, comparative measurements were carried out in the field. The drift volumes determined in those measurements were at a similar level to the values measured on the test stand. The standardized measurement of drift on the test stand is therefore suitable for the calculation of exposure scenarios for the sowing of treated seeds.

  19. Survey of methods for integrated sequence analysis with emphasis on man-machine interaction

    Energy Technology Data Exchange (ETDEWEB)

    Kahlbom, U; Holmgren, P [RELCON, Stockholm (Sweden)]

    1995-05-01

    This report presents a literature study concerning recently developed monotonic methodologies in the human reliability area. The work was performed by RELCON AB on commission by NKS/RAK-1, subproject 3. The topic of subproject 3 is 'Integrated Sequence Analysis with Emphasis on Man-Machine Interaction'. The purpose of the study was to compile recently developed methodologies and to propose some of them for use in the sequence analysis task. The report describes mainly non-dynamic (monotonic) methodologies. One exception is HITLINE, which is a semi-dynamic method. Reference provides a summary of approaches to dynamic analysis of man-machine interaction, and explains the differences between monotonic and dynamic methodologies. (au) 21 refs.

  20. Machine Learning methods in fitting first-principles total energies for substitutionally disordered solid

    Science.gov (United States)

    Gao, Qin; Yao, Sanxi; Widom, Michael

    2015-03-01

    Density functional theory (DFT) provides an accurate, first-principles description of solid structures and total energies. However, it is highly time-consuming to calculate structures with hundreds of atoms in the unit cell, and practically impossible for thousands of atoms. We apply and adapt machine learning algorithms, including compressive sensing, support vector regression and artificial neural networks, to fit the DFT total energies of substitutionally disordered boron carbide. A nonparametric kernel method is also included in our models. Our fitted total energy model reproduces the DFT energies with a prediction error of around 1 meV/atom. The assumptions of these machine learning models and applications of the fitted total energies will also be discussed. Financial support from the McWilliams Fellowship and the ONR-MURI under Grant No. N00014-11-1-0678 is gratefully acknowledged.
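A sketch of the compressive-sensing-style fitting mentioned above: a sparse linear model (Lasso) recovering a few important interaction coefficients from synthetic "total energy" data. The descriptors and coefficients are invented, not the boron carbide dataset:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 50))               # correlation-function descriptors
true_coef = np.zeros(50)
true_coef[[3, 17, 42]] = [1.0, -0.5, 0.8]    # only a few interactions matter
y = X @ true_coef + rng.normal(0, 0.01, 200)  # "DFT energies" + small noise

fit = Lasso(alpha=0.01).fit(X, y)
support = np.flatnonzero(np.abs(fit.coef_) > 0.1)
print(f"recovered interaction indices: {sorted(support)}")
```

The L1 penalty drives most coefficients to exactly zero, which is the compressive-sensing premise: the energy is governed by a sparse set of effective interactions.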

  1. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Science.gov (United States)

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  2. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Science.gov (United States)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  3. Survey of methods for integrated sequence analysis with emphasis on man-machine interaction

    International Nuclear Information System (INIS)

    Kahlbom, U.; Holmgren, P.

    1995-05-01

    This report presents a literature study concerning recently developed monotonic methodologies in the human reliability area. The work was performed by RELCON AB on commission from NKS/RAK-1, subproject 3. The topic of subproject 3 is 'Integrated Sequence Analysis with Emphasis on Man-Machine Interaction'. The purpose of the study was to compile recently developed methodologies and to propose some of them for use in the sequence analysis task. The report describes mainly non-dynamic (monotonic) methodologies. One exception is HITLINE, which is a semi-dynamic method. A cited reference provides a summary of approaches to dynamic analysis of man-machine interaction and explains the differences between monotonic and dynamic methodologies. (au) 21 refs

  4. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  5. Method for providing slip energy control in permanent magnet electrical machines

    Science.gov (United States)

    Hsu, John S.

    2006-11-14

    An electric machine (40) has a stator (43), a permanent magnet rotor (38) with permanent magnets (39) and a magnetic coupling uncluttered rotor (46) for inducing a slip energy current in secondary coils (47). A dc flux can be produced in the uncluttered rotor when the secondary coils are fed with dc currents. The magnetic coupling uncluttered rotor (46) has magnetic brushes (A, B, C, D) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments and is applicable to the hybrid electric vehicle. A method of providing a slip energy controller is also disclosed.

  6. Machine Learning Methods to Extract Documentation of Breast Cancer Symptoms From Electronic Health Records.

    Science.gov (United States)

    Forsyth, Alexander W; Barzilay, Regina; Hughes, Kevin S; Lui, Dickson; Lorenz, Karl A; Enzinger, Andrea; Tulsky, James A; Lindvall, Charlotta

    2018-02-27

    Clinicians document cancer patients' symptoms in free-text format within electronic health record visit notes. Although symptoms are critically important to quality of life and often herald clinical status changes, computational methods to assess the trajectory of symptoms over time are woefully underdeveloped. To create machine learning algorithms capable of extracting patient-reported symptoms from free-text electronic health record notes. The data set included 103,564 sentences obtained from the electronic clinical notes of 2695 breast cancer patients receiving paclitaxel-containing chemotherapy at two academic cancer centers between May 1996 and May 2015. We manually annotated 10,000 sentences and trained a conditional random field model to predict words indicating an active symptom (positive label), absence of a symptom (negative label), or no symptom at all (neutral label). Sentences labeled by human coders were divided into training, validation, and test data sets. Final model performance was determined on 20% test data unused in model development or tuning. The final model achieved precision of 0.82, 0.86, and 0.99 and recall of 0.56, 0.69, and 1.00 for positive, negative, and neutral symptom labels, respectively. The most common positive symptoms were pain, fatigue, and nausea. Machine-based labeling of 103,564 sentences took two minutes. We demonstrate the potential of machine learning to gather, track, and analyze symptoms experienced by cancer patients during chemotherapy. Although our initial model requires further optimization to improve the performance, further model building may yield machine learning methods suitable to be deployed in routine clinical care, quality improvement, and research applications. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
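The per-label precision and recall figures reported above are standard token-level metrics; a minimal sketch of how they are computed (on toy labels, not the paper's data; "P"/"N"/"O" here are hypothetical stand-ins for the positive, negative, and neutral symptom labels):

```python
def precision_recall(gold, pred, target):
    """Token-level precision and recall for one label class."""
    tp = sum(1 for g, p in zip(gold, pred) if p == target and g == target)
    fp = sum(1 for g, p in zip(gold, pred) if p == target and g != target)
    fn = sum(1 for g, p in zip(gold, pred) if p != target and g == target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy word labels: P = active symptom, N = negated symptom, O = neutral.
gold = ["P", "O", "N", "P", "O", "O"]
pred = ["P", "O", "N", "O", "O", "P"]
p, r = precision_recall(gold, pred, "P")
```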

  7. Optimization of Coolant Technique Conditions for Machining A319 Aluminium Alloy Using Response Surface Method (RSM)

    Science.gov (United States)

    Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.

    2018-03-01

    Background/Objectives: The paper discusses the optimum cutting parameters under three coolant technique conditions (1.0 mm nozzle orifice, wet and dry) to optimize surface roughness, temperature and tool wear in the machining process, based on the selected setting parameters. The cutting parameters selected for this study were cutting speed, feed rate, depth of cut and coolant technique condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on a Design of Experiment (DOE) with the Response Surface Method. The research on aggressive machining of aluminium alloy A319 for automotive applications is an effort to understand the machining concept, which is widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show that surface roughness, temperature and tool wear, the dominant failure modes, increase during machining, and that the 1.0 mm nozzle orifice condition can also help minimize built-up edge on A319. The exploration of surface roughness, productivity and the optimization of cutting speed in the technical and commercial aspects of the manufacturing processes of A319 in the automotive components industry is discussed for further work. Applications/Improvements: The results are also beneficial in minimizing incurred costs and improving the productivity of manufacturing firms. Experiments were performed according to the mathematical model and equations generated by CCD-based RSM, and a coolant condition technique using the nozzle size that reduces tool wear, surface roughness and temperature was obtained. The results have been analyzed and optimization has been carried out for the selection of cutting parameters, showing that the effectiveness and efficiency of the system can be identified, which helps to solve potential problems.

  8. rFerns: An Implementation of the Random Ferns Method for General-Purpose Machine Learning

    Directory of Open Access Journals (Sweden)

    Miron B. Kursa

    2014-11-01

    Full Text Available Random ferns is a very simple yet powerful classification method originally introduced for specific computer vision tasks. In this paper, I show that this algorithm may be considered as a constrained decision tree ensemble and use this interpretation to introduce a series of modifications which enable the use of random ferns in general machine learning problems. Moreover, I extend the method with an internal error approximation and an attribute importance measure based on corresponding features of the random forest algorithm. I also present the R package rFerns containing an efficient implementation of this modified version of random ferns.
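The core of a fern is easy to sketch: a fixed set of binary tests whose outcomes index a per-class frequency table. The toy single-fern classifier below (illustrative only; the rFerns package randomizes the tests and multiplies scores across an ensemble of ferns) uses add-one smoothing, with all names being this sketch's own:

```python
from collections import defaultdict

class RandomFern:
    """One fern: a fixed set of binary threshold tests; their joint outcome
    indexes a per-class frequency table (add-one smoothed)."""

    def __init__(self, feature_indices, thresholds):
        self.idx = feature_indices
        self.thr = thresholds
        self.counts = defaultdict(lambda: defaultdict(int))

    def _key(self, x):
        # The tuple of test outcomes is the table index.
        return tuple(x[i] > t for i, t in zip(self.idx, self.thr))

    def fit(self, X, y):
        for xi, yi in zip(X, y):
            self.counts[self._key(xi)][yi] += 1

    def score(self, x, label, n_labels=2):
        # Smoothed class probability for this fern's bucket.
        bucket = self.counts[self._key(x)]
        return (bucket[label] + 1) / (sum(bucket.values()) + n_labels)

fern = RandomFern(feature_indices=[0, 1], thresholds=[0.5, 0.5])
fern.fit([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 1, 1])
```

An ensemble would train many such ferns on random feature subsets and classify by the product (or log-sum) of their per-fern scores.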

  9. Identifying Structural Flow Defects in Disordered Solids Using Machine-Learning Methods

    Science.gov (United States)

    Cubuk, E. D.; Schoenholz, S. S.; Rieser, J. M.; Malone, B. D.; Rottler, J.; Durian, D. J.; Kaxiras, E.; Liu, A. J.

    2015-03-01

    We use machine-learning methods on local structure to identify flow defects—or particles susceptible to rearrangement—in jammed and glassy systems. We apply this method successfully to two very different systems: a two-dimensional experimental realization of a granular pillar under compression and a Lennard-Jones glass in both two and three dimensions above and below its glass transition temperature. We also identify characteristics of flow defects that differentiate them from the rest of the sample. Our results show it is possible to discern subtle structural features responsible for heterogeneous dynamics observed across a broad range of disordered materials.

  10. Neutron–gamma discrimination based on the support vector machine method

    International Nuclear Information System (INIS)

    Yu, Xunzhen; Zhu, Jingjun; Lin, ShinTed; Wang, Li; Xing, Haoyang; Zhang, Caixun; Xia, Yuxi; Liu, Shukui; Yue, Qian; Wei, Weiwei; Du, Qiang; Tang, Changjian

    2015-01-01

    In this study, the combination of the support vector machine (SVM) method with the moment analysis method (MAM) is proposed and utilized to perform neutron/gamma (n/γ) discrimination of the pulses from an organic liquid scintillator (OLS). Neutron and gamma events, which can be firmly separated on the scatter plot drawn by the charge comparison method (CCM), are selected to form the training data set and the test data set for the SVM, and the MAM is used to create the feature vectors for individual events in the data sets. Compared to the traditional methods, such as CCM, the proposed method can not only discriminate the neutron and gamma signals, even at lower energy levels, but also provide the corresponding classification accuracy for each event, which is useful in validating the discrimination. Meanwhile, the proposed method can also offer a prediction of the classification for the under-energy-limit events
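The moment-analysis step can be sketched as computing normalized time moments of each digitized pulse and using them as the SVM feature vector (a simplified stand-in for the paper's MAM features; the pulse shapes below are invented):

```python
def pulse_moments(samples, n_moments=3):
    """Normalize a digitized pulse to unit area and return its first raw
    time moments, usable as a feature vector for an SVM classifier."""
    total = float(sum(samples))
    probs = [s / total for s in samples]
    return [sum(p * (t ** k) for t, p in enumerate(probs))
            for k in range(1, n_moments + 1)]

# A slow-decaying pulse (neutron-like tail) vs. a fast one (gamma-like):
slow = pulse_moments([0, 10, 8, 6, 4, 2])
fast = pulse_moments([0, 10, 3, 1, 0, 0])
```

The longer tail shifts the first moment (mean arrival time) to later samples, which is the kind of separation the SVM then learns to exploit.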

  11. 100 Area soil washing treatability test plan

    International Nuclear Information System (INIS)

    1993-03-01

    This test plan describes specifications, responsibilities, and general methodology for conducting a soil washing treatability study as applied to source unit contamination in the 100 Area. The objective of this treatability study is to evaluate the use of physical separation systems and chemical extraction methods as a means of separating chemically and radioactively contaminated soil fractions from uncontaminated soil fractions. The purpose of separating these fractions is to minimize the volume of soil requiring permanent disposal. It is anticipated that this treatability study will be performed in two phases of testing, a remedy screening phase and a remedy selection phase. The remedy screening phase consists of laboratory- and bench-scale studies performed by Battelle Pacific Northwest Laboratories (PNL) under a work order issued by Westinghouse Hanford Company (Westinghouse Hanford). This phase will be used to provide qualitative evaluation of the potential effectiveness of the soil washing technology. The remedy selection phase consists of pilot-scale testing performed under a separate service contract to be competitively bid under Westinghouse Hanford direction. The remedy selection phase will provide data to support evaluation of the soil washing technology in future feasibility studies for Interim Remedial Measures (IRMs) or final operable unit (OU) remedies. Performance data from these tests will indicate whether applicable or relevant and appropriate requirements (ARARs) or cleanup goals can be met at the site(s) by application of soil washing. The remedy selection tests will also allow estimation of costs associated with implementation to the accuracy required for the Feasibility Study

  12. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patient's health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally perform slightly better than both in terms of mean squared error, when a bias-based analysis is used.

  13. The reduction methods of operator's radiation dose for portable dental X-ray machines.

    Science.gov (United States)

    Cho, Jeong-Yeon; Han, Won-Jeong

    2012-08-01

    This study aimed to investigate methods to reduce the operator's radiation dose when taking intraoral radiographs with portable dental X-ray machines. Two kinds of portable dental X-ray machines (DX3000, Dexcowin and Rextar, Posdion) were used. The operator's radiation dose was measured with an 1,800 cc ionization chamber (RadCal Corp.) at the hand level of the X-ray tubehead and at the operator's chest and waist levels, with and without the backscatter shield. The operator's radiation dose at the hand level was measured with and without lead gloves and with long and short cones. The backscatter shield reduced the operator's radiation dose at the hand level of the X-ray tubehead to 23 - 32%, the lead gloves to 26 - 31%, and the long cone to 48 - 52%. The backscatter shield also reduced the operator's radiation dose at the chest and waist levels to 0.1 - 37%. When portable dental X-ray systems are used, it is recommended to select an X-ray machine fitted with a backscatter shield and a long cone and to wear lead gloves.

  14. Apparatus for washing out halogens

    Energy Technology Data Exchange (ETDEWEB)

    Pier, M; Hahn, J; Kroenig, W

    1941-03-26

    An apparatus is described for washing out halogens and the like, or liquid halogen compounds, from the products formed on pressure hydrogenation or splitting of carbon-containing material in the presence of halogens or halogen compounds. It consists of a washing apparatus installed between the reaction vessel and the hot separator, which is inclined within a relatively small space for steam regulation and contains baffles, especially spirals, arranged in the steam path.

  15. Modulation transfer function (MTF) measurement method based on support vector machine (SVM)

    Science.gov (United States)

    Zhang, Zheng; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2016-03-01

    An imaging system's spatial quality can be expressed by the system's modulation transfer function (MTF) as a function of spatial frequency in terms of linear response theory. Methods have been proposed to assess the MTF of an imaging system using point, slit or edge techniques. The edge method is widely used because it places few requirements on the target. However, the traditional edge methods are limited by the edge angle. Besides, image noise will impair the measurement accuracy, making the measurement result unstable. In this paper, a novel measurement method based on the support vector machine (SVM) is proposed. Image patches with different edge angles and MTF levels are generated as the training set. Parameters related to MTF and image structure are extracted from the edge images. Trained with image parameters and the corresponding MTF, the SVM classifier can assess the MTF of any edge image. The result shows that the proposed method has excellent performance in measurement accuracy and stability.
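A bare-bones version of the edge technique the paper builds on: differentiate the edge spread function into a line spread function, then take the normalized magnitude of its discrete Fourier transform. This is only the classical edge method, not the SVM layer the paper adds on top of it:

```python
import cmath

def mtf_from_edge(edge_profile):
    """Edge method: ESF -> LSF by finite difference; MTF = normalized |DFT(LSF)|."""
    lsf = [b - a for a, b in zip(edge_profile, edge_profile[1:])]
    n = len(lsf)
    spectrum = [abs(sum(lsf[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))) for k in range(n)]
    # Normalize by the zero-frequency component.
    return [m / spectrum[0] for m in spectrum]

# An ideal step edge has a flat MTF (no blur); a blurred edge rolls off.
ideal = mtf_from_edge([0, 0, 0, 1, 1, 1])
blurred = mtf_from_edge([0, 0, 0.25, 0.75, 1, 1])
```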

  16. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electric machines, ranging from generators to motors: the motor as a power source of the machine tool, and electrical equipment for machine tools such as main-circuit switches, automatic machines, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part handles wiring diagrams, covering the basic electrical circuits of machine tools and the wiring diagrams of machines such as milling machines, planers and grinding machines. The third part introduces fault diagnosis of machines, giving practical solutions according to the fault diagnosed and the diagnostic method using voltage and resistance measurements with a tester.

  17. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  18. Effects of shampoo and water washing on hair cortisol concentrations.

    Science.gov (United States)

    Hamel, Amanda F; Meyer, Jerrold S; Henchey, Elizabeth; Dettmer, Amanda M; Suomi, Stephen J; Novak, Melinda A

    2011-01-30

    Measurement of cortisol in hair is an emerging biomarker for chronic stress in human and nonhuman primates. Currently unknown, however, is the extent of potential cortisol loss from hair that has been repeatedly exposed to shampoo and/or water. Pooled hair samples from 20 rhesus monkeys were subjected to five treatment conditions: 10, 20, or 30 shampoo washes, 20 water-only washes, or a no-wash control. For each wash, hair was exposed to a dilute shampoo solution or tap water for 45 s, rinsed 4 times with tap water, and rapidly dried. Samples were then processed for cortisol extraction and analysis using previously published methods. Hair cortisol levels were significantly reduced by washing, with an inverse relationship between number of shampoo washes and the cortisol concentration. This effect was mainly due to water exposure, as cortisol levels following 20 water-only washes were similar to those following 20 shampoo treatments. Repeated exposure to water with or without shampoo appears to leach cortisol from hair, yielding values that underestimate the amount of chronic hormone deposition within the shaft. Collecting samples proximal to the scalp and obtaining hair washing frequency data may be valuable when conducting human hair cortisol studies. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances.

    Science.gov (United States)

    Abut, Fatih; Akay, Mehmet Fatih

    2015-01-01

    Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers and cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely, the multiple correlation coefficient (R) and the standard error of estimate. The survey results reveal that, with respect to the regression methods used to develop prediction models, support vector machine in general shows better performance than other methods, whereas multiple linear regression exhibits the worst performance.
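The two survey metrics, the correlation coefficient R and the standard error of estimate (SEE), can be illustrated with a single-predictor least-squares fit. The surveyed models use many predictors and methods such as SVM; this sketch and its numbers are purely illustrative:

```python
import math

def fit_and_score(xs, ys):
    """Least-squares line y = a + b*x, plus R and the standard error of estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    sse = sum(r ** 2 for r in residuals)           # sum of squared errors
    sst = sum((y - my) ** 2 for y in ys)           # total sum of squares
    r_coef = math.sqrt(max(0.0, 1.0 - sse / sst))  # correlation coefficient R
    see = math.sqrt(sse / (n - 2))                 # standard error of estimate
    return a, b, r_coef, see

# Hypothetical data: VO2max (mL/kg/min) vs. running-test speed (km/h).
a, b, R, SEE = fit_and_score([10, 12, 14, 16], [35, 42, 50, 57])
```

A lower SEE and an R closer to 1 are exactly the criteria the survey uses to rank the published prediction models.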

  20. Active damage detection method based on support vector machine and impulse response

    International Nuclear Information System (INIS)

    Taniguchi, Ryuta; Mita, Akira

    2004-01-01

    An active damage detection method was proposed to characterize damage in bolted joints. The purpose of this study is to propose a damage detection method that can obtain detailed information about the damage by creating feature vectors for pattern recognition. In the proposed method, the wavelet transform is applied to the sensor signals, and the feature vectors are defined by the second power average of the amplitude. The feature vectors generated by experiments were successfully used as the training data for a Support Vector Machine (SVM). By applying the wavelet transform to time-frequency analysis, the accuracy of pattern recognition was raised in both correlation coefficient and SVM applications. Moreover, the SVM could identify the damage with much stronger discernment capability than the other methods. Applicability of the proposed method was successfully demonstrated. (author)

  1. Detection of License Plate using Sliding Window, Histogram of Oriented Gradient, and Support Vector Machines Method

    Science.gov (United States)

    Astawa, INGA; Gusti Ngurah Bagus Caturbawa, I.; Made Sajayasa, I.; Dwi Suta Atmaja, I. Made Ari

    2018-01-01

    License plate recognition is usually used as part of a system such as a parking system. License plate detection is considered the most important step in a license plate recognition system. We propose methods that can be used to detect the vehicle plate on a mobile phone. In this paper, we used the Sliding Window, Histogram of Oriented Gradients (HOG), and Support Vector Machine (SVM) methods for license plate detection, to increase the detection rate even when the image is not of good quality. The image is processed by the Sliding Window method in order to find the plate position. Feature extraction at every window position is done by the HOG and SVM methods. Good results were obtained in this research: 96% accuracy.
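The scanning stage of such a detector can be sketched as a window generator; every window would then be described by HOG features and scored by a trained SVM, with accepted windows becoming candidate plate locations. This is a generic sliding-window sketch, not the authors' implementation:

```python
def sliding_windows(img_w, img_h, win_w, win_h, step):
    """Yield the top-left corner of every window position scanned over the image."""
    for y in range(0, img_h - win_h + 1, step):
        for x in range(0, img_w - win_w + 1, step):
            yield x, y

# Scan a 100x50 image with a 20x10 window and a stride of 10 pixels.
# In a full detector each (x, y) crop is converted to HOG features and
# passed to the SVM; positives are candidate plate regions.
windows = list(sliding_windows(100, 50, 20, 10, 10))
```

Real detectors repeat this over an image pyramid so plates of different sizes still fit the fixed window.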

  2. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    Science.gov (United States)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.
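Point 2 of the algorithm (transmit only when a reading could change the network state) can be sketched as a change-threshold filter at each node. This is a deliberate simplification of the paper's approach, with invented readings:

```python
def filter_transmissions(readings, threshold):
    """Keep only readings that differ from the last transmitted value by
    more than the threshold; all other communication sessions are suppressed."""
    sent = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            sent.append(r)
            last = r
    return sent

# A slowly drifting sensor stream: most sessions are suppressed, and only
# the initial value and the genuine jump are transmitted.
stream = [20.0, 20.1, 20.05, 20.2, 25.0, 25.1, 24.9]
sent = filter_transmissions(stream, 1.0)
```

The suppression ratio on such streams is what drives the reported reduction in communication sessions.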

  3. A method for classification of network traffic based on C5.0 Machine Learning Algorithm

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup

    2012-01-01

    current network traffic. To overcome the drawbacks of existing methods for traffic classification, usage of C5.0 Machine Learning Algorithm (MLA) was proposed. On the basis of statistical traffic information received from volunteers and C5.0 algorithm we constructed a boosted classifier, which was shown...... and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. We performed subsequent tries using different sets of parameters and both training and classification options...

  4. MU-LOC: A Machine-Learning Method for Predicting Mitochondrially Localized Proteins in Plants

    DEFF Research Database (Denmark)

    Zhang, Ning; Rao, R Shyama Prasad; Salvato, Fernanda

    2018-01-01

    -sequence or a multitude of internal signals. Compared with experimental approaches, computational predictions provide an efficient way to infer subcellular localization of a protein. However, it is still challenging to predict plant mitochondrially localized proteins accurately due to various limitations. Consequently......, the performance of current tools can be improved with new data and new machine-learning methods. We present MU-LOC, a novel computational approach for large-scale prediction of plant mitochondrial proteins. We collected a comprehensive dataset of plant subcellular localization, extracted features including amino...

  5. A Shellcode Detection Method Based on Full Native API Sequence and Support Vector Machine

    Science.gov (United States)

    Cheng, Yixuan; Fan, Wenqing; Huang, Wei; An, Jing

    2017-09-01

    Dynamically monitoring the behavior of a program is widely used to discriminate between benign programs and malware. It is usually based on the dynamic characteristics of a program, such as the API call sequence or API call frequency. The key innovation of this paper is to consider the full Native API sequence and use a support vector machine to detect the shellcode. We also use a Markov chain to extract and digitize Native API sequence features. Our experimental results show that the method proposed in this paper has high accuracy and a low detection rate.

  6. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area of about 60 km², obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the

  7. Machine learning methods for credibility assessment of interviewees based on posturographic data.

    Science.gov (United States)

    Saripalle, Sashi K; Vemulapalli, Spandana; King, Gregory W; Burgoon, Judee K; Derakhshani, Reza

    2015-01-01

    This paper discusses the advantages of using posturographic signals from force plates for non-invasive credibility assessment. The contributions of our work are twofold: first, the proposed method is highly efficient and non-invasive; second, the feasibility of creating an autonomous credibility assessment system using machine-learning algorithms is studied. This study employs an interview paradigm that includes subjects responding with truthful and deceptive intent while their center of pressure (COP) signal is being recorded. Classification models utilizing sets of COP features for deceptive responses are derived, and a best accuracy of 93.5% on the test interval is reported.

  8. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which System Normalization, PCA, and Multilevel Grid Search methods are comprehensively considered for data preprocessing and parameters optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, Specificity, Precision, and ROC curve, and so forth, are adopted to appraise the performances of PMSVM. Experimental results show that PMSVM has relatively better accuracy and remarkable higher efficiency compared with traditional SVM algorithms.

  9. In Silico Prediction of Chemicals Binding to Aromatase with Machine Learning Methods.

    Science.gov (United States)

    Du, Hanwen; Cai, Yingchun; Yang, Hongbin; Zhang, Hongxiao; Xue, Yuhan; Liu, Guixia; Tang, Yun; Li, Weihua

    2017-05-15

    Environmental chemicals may affect endocrine systems through multiple mechanisms, one of which is via effects on aromatase (also known as CYP19A1), an enzyme critical for maintaining the normal balance of estrogens and androgens in the body. Therefore, rapid and efficient identification of aromatase-related endocrine disrupting chemicals (EDCs) is important for toxicology and environment risk assessment. In this study, on the basis of the Tox21 10K compound library, in silico classification models for predicting aromatase binders/nonbinders were constructed by machine learning methods. To improve the prediction ability of the models, a combined classifier (CC) strategy that combines different independent machine learning methods was adopted. Performances of the models were measured by test and external validation sets containing 1336 and 216 chemicals, respectively. The best model was obtained with the MACCS (Molecular Access System) fingerprint and CC method, which exhibited an accuracy of 0.84 for the test set and 0.91 for the external validation set. Additionally, several representative substructures for characterizing aromatase binders, such as ketone, lactone, and nitrogen-containing derivatives, were identified using information gain and substructure frequency analysis. Our study provided a systematic assessment of chemicals binding to aromatase. The built models can be helpful to rapidly identify potential EDCs targeting aromatase.
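
A combined classifier in the sense described (several independent learners whose outputs are merged) can be sketched with soft voting. The random features stand in for MACCS fingerprints, and the three base learners are illustrative choices, not necessarily those combined by the authors.

```python
# Hedged sketch of a combined-classifier (CC) strategy: average the class
# probabilities of several independent learners via soft voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# 166 features, mimicking the length of a MACCS fingerprint,
# with a binary binder/nonbinder label
X, y = make_classification(n_samples=400, n_features=166, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

cc = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=1)),
                ("rf", RandomForestClassifier(random_state=1)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
cc.fit(X_tr, y_tr)
print("combined accuracy:", round(cc.score(X_te, y_te), 3))
```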

  10. An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.

    Science.gov (United States)

    Liu, Yuzhe; Gopalakrishnan, Vanathi

    2017-03-01

    Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
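
Two of the four imputation methods compared (mean and k-nearest neighbours) can be demonstrated on synthetic data with values deleted at random. This uses scikit-learn's generic imputers rather than the study's code, and the 20% missingness rate is arbitrary.

```python
# Illustrative comparison of mean and kNN imputation on synthetic data
# with values removed completely at random.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(size=(100, 5))
X_miss = X_true.copy()
mask = rng.random(X_miss.shape) < 0.2   # ~20% missing values
X_miss[mask] = np.nan

for name, imp in [("mean", SimpleImputer(strategy="mean")),
                  ("kNN", KNNImputer(n_neighbors=5))]:
    X_hat = imp.fit_transform(X_miss)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"{name} imputation RMSE: {rmse:.3f}")
```

Because the true values behind the mask are known here, the reconstruction error of each imputer can be measured directly, which is not possible on real clinical data.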

  11. Process signal selection method to improve the impact mitigation of sensor broken for diagnosis using machine learning

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio

    2014-01-01

    Accidents at industrial plants cause large human, economic, and social losses. Recently, diagnostic methods using machine learning techniques have been studied in the expectation that they can detect abnormalities occurring in a plant early and correctly. However, diagnostic machines are generally built to require all process signals (hereafter, signals) for plant diagnosis; thus, if trouble occurs, such as a process sensor breaking, the diagnostic machine cannot diagnose or its diagnostic performance may decrease. We therefore propose a method for selecting important process signals that improves impact mitigation without reducing diagnostic performance, by reducing the adverse effect of noise on a multi-agent diagnostic system. The advantage of our method is its general-purpose property, which allows it to be applied to various supervised machine learning methods and to set various parameters that decide the termination of the search. The experimental evaluation revealed that diagnostic machines generated by our method using SVM improved impact mitigation and did not reduce performance in terms of diagnostic accuracy, speed of diagnosis, and prediction of the plant state near accident occurrence, in comparison with a basic diagnostic machine that diagnoses using all signals. This paper reports our proposed method and the results of applying it to simulated abnormalities of the fast-breeder reactor Monju. (author)

  12. Asset Analysis Method for the Cyber Security of Man Machine Interface System

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung Kon; Kim, Hun Hee; Shin, Yeong Cheol [Korea Hydro and Nuclear Power, Daejeon (Korea, Republic of)

    2010-10-15

    As digital MMIS (Man Machine Interface System) is applied in Nuclear Power Plants (NPPs), cyber security is becoming more and more important. The regulatory guide (KINS/GT-N27) requires that an implementation plan for cyber security be prepared in NPPs. The regulatory guide recommends the following four processes: 1) an asset analysis of the MMIS, 2) a vulnerability analysis of the MMIS, 3) establishment of countermeasures, and 4) establishment of an operational guideline for cyber security. The conventional method for the asset analysis is mainly performed with a table form for each asset and requires a lot of effort due to the duplication of information. This paper presents an asset analysis method using an object oriented approach for the NPP

  13. Investigation of Unbalanced Magnetic Force in Magnetic Geared Machine Using Analytical Methods

    DEFF Research Database (Denmark)

    Zhang, Xiaoxu; Liu, Xiao; Chen, Zhe

    2016-01-01

    The electromagnetic structure of the magnetic geared machine (MGM) may induce a significant unbalanced magnetic force (UMF). However, few methods have been developed to theoretically reveal the essential reasons for this issue in the MGM. In this paper, an analytical method based on an air-gap relative permeance theory is first developed to qualitatively study the origins of the UMF in the MGM. By means of formula derivations, three kinds of magnetic field behaviors in the air gaps are found to be the potential sources of UMF. Second, the magnetic field distribution in the MGM is modeled by an exact subdomain method, which allows the magnetic forces to be calculated quantitatively. The magnetic forces in two MGMs are then studied under no-load and full-load conditions. Finally, the finite-element calculation confirms... It is also proved that the UMF is possible to avoid by design choices...

  14. Asset Analysis Method for the Cyber Security of Man Machine Interface System

    International Nuclear Information System (INIS)

    Kang, Sung Kon; Kim, Hun Hee; Shin, Yeong Cheol

    2010-01-01

    As digital MMIS (Man Machine Interface System) is applied in Nuclear Power Plants (NPPs), cyber security is becoming more and more important. The regulatory guide (KINS/GT-N27) requires that an implementation plan for cyber security be prepared in NPPs. The regulatory guide recommends the following four processes: 1) an asset analysis of the MMIS, 2) a vulnerability analysis of the MMIS, 3) establishment of countermeasures, and 4) establishment of an operational guideline for cyber security. The conventional method for the asset analysis is mainly performed with a table form for each asset and requires a lot of effort due to the duplication of information. This paper presents an asset analysis method using an object oriented approach for the NPP

  15. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  16. Aqueous treatment of water-sensitive paper objects: capillary unit, blotter wash or paraprint wash?

    NARCIS (Netherlands)

    Schalkx, H.; Iedema, P.; Reissland, B.; van Velzen, B.

    2011-01-01

    Blotter washing and washing with the capillary unit are both methods used for aqueous treatment of water-sensitive paper objects. The challenge of this treatment is to remove water-soluble products while keeping the water-sensitive medium in its place. In this article the two methods are compared, along

  17. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, electronics fundamentals, information technology, quality assurance, AutoCAD and FA terms, and important formulas of mechanical engineering.

  18. Enhanced sludge washing evaluation plan

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, R.D.

    1994-09-01

    The Tank Waste Remediation System (TWRS) Program mission is to store, treat, and immobilize highly radioactive Hanford Site waste (current and future tank waste and the strontium/cesium capsules) in an environmentally sound, safe, and cost-effective manner. The scope of the TWRS Waste Pretreatment Program is to treat tank waste and separate that waste into HLW and LLW fractions and provide additional treatment as required to feed LLW and HLW immobilization facilities. Enhanced sludge washing was chosen as the baseline process for separating Hanford tank waste sludge. Section 1.0 briefly discusses the purpose of the evaluation plan and provides the background that led to the choice of enhanced sludge washing as the baseline process. Section 2.0 provides a brief summary of the evaluation plan details. Section 3.0 discusses, in some detail, the technical work planned to support the evaluation of enhanced sludge washing. Section 4.0 briefly discusses the potential importance of policy issues to the evaluation. Section 5.0 discusses the methodology to be used in the evaluation process. Section 6.0 summarizes the milestones that have been defined to complete the enhanced sludge washing evaluation and provides a summary schedule to evaluate the performance of enhanced sludge washing. References are identified in Section 7.0, and additional schedule and milestone information is provided in the appendices.

  19. Enhanced sludge washing evaluation plan

    International Nuclear Information System (INIS)

    Jensen, R.D.

    1994-09-01

    The Tank Waste Remediation System (TWRS) Program mission is to store, treat, and immobilize highly radioactive Hanford Site waste (current and future tank waste and the strontium/cesium capsules) in an environmentally sound, safe, and cost-effective manner. The scope of the TWRS Waste Pretreatment Program is to treat tank waste and separate that waste into HLW and LLW fractions and provide additional treatment as required to feed LLW and HLW immobilization facilities. Enhanced sludge washing was chosen as the baseline process for separating Hanford tank waste sludge. Section 1.0 briefly discusses the purpose of the evaluation plan and provides the background that led to the choice of enhanced sludge washing as the baseline process. Section 2.0 provides a brief summary of the evaluation plan details. Section 3.0 discusses, in some detail, the technical work planned to support the evaluation of enhanced sludge washing. Section 4.0 briefly discusses the potential importance of policy issues to the evaluation. Section 5.0 discusses the methodology to be used in the evaluation process. Section 6.0 summarizes the milestones that have been defined to complete the enhanced sludge washing evaluation and provides a summary schedule to evaluate the performance of enhanced sludge washing. References are identified in Section 7.0, and additional schedule and milestone information is provided in the appendices.

  20. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate, and a proposed crash costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and in more severe crashes. RF and SVM had the next best performance, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost exactly the opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
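
The idea of a crash-costs-based accuracy measure can be illustrated as follows. The severity levels and unit costs below are invented for the example, and the exact weighting the paper proposes may differ.

```python
# Sketch of a crash-costs-based accuracy measure: weight each prediction
# by the economic cost of its true severity level, so that errors on
# severe crashes count more. Cost figures are hypothetical.
import numpy as np

# Severity levels: 0 = property damage only, 1 = injury, 2 = fatal
cost = np.array([10_000, 200_000, 1_500_000])   # hypothetical unit costs

y_true = np.array([0, 0, 1, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 1, 1, 1, 0, 2, 0])

# Share of total true crash cost attached to correctly predicted crashes
correct = y_true == y_pred
cost_based_accuracy = cost[y_true][correct].sum() / cost[y_true].sum()
plain_accuracy = correct.mean()
print(f"plain accuracy: {plain_accuracy:.3f}")       # 0.625
print(f"cost-based accuracy: {cost_based_accuracy:.3f}")
```

In this toy case the model misses one fatal crash, so the cost-based score drops well below the plain accuracy, which is exactly the kind of divergence the paper reports between the two measures.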

  1. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    Science.gov (United States)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compared the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.
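
Of the four peak callers compared, local maxima search is simple enough to sketch directly. The window size and threshold below are illustrative defaults, not the settings of the evaluated tools.

```python
# Toy local maxima search: a point is a peak if it exceeds a noise
# threshold, its immediate neighbours, and every value in a small window.
def local_maxima(signal, threshold=0.0, half_window=1):
    peaks = []
    n = len(signal)
    for i in range(half_window, n - half_window):
        window = signal[i - half_window:i + half_window + 1]
        if (signal[i] >= threshold and signal[i] == max(window)
                and signal[i] > signal[i - 1] and signal[i] > signal[i + 1]):
            peaks.append(i)
    return peaks

spectrum = [0.1, 0.2, 1.5, 0.3, 0.2, 0.9, 2.8, 0.7, 0.1]
print(local_maxima(spectrum, threshold=0.5))  # → [2, 6]
```

Real IMS data is two-dimensional (retention time by drift time), so production peak callers extend this idea to 2-D neighbourhoods and add merging and model-fitting steps.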

  2. 33 CFR 157.150 - Crude Oil Washing Operations and Equipment Manual: Recording information after inspections.

    Science.gov (United States)

    2010-07-01

    ... Equipment Manual: Recording information after inspections. 157.150 Section 157.150 Navigation and Navigable... Vessels Inspections § 157.150 Crude Oil Washing Operations and Equipment Manual: Recording information... machines. (2) Revolutions, number of cycles, and length of cycles of each COW machine. (3) Pressure and...

  3. Effect of abiotic and biotic stress factors analysis using machine learning methods in zebrafish.

    Science.gov (United States)

    Gutha, Rajasekar; Yarrappagaari, Suresh; Thopireddy, Lavanya; Reddy, Kesireddy Sathyavelu; Saddala, Rajeswara Reddy

    2018-03-01

    In order to understand the mechanisms underlying stress responses, a meta-analysis of transcriptomes was performed to identify differentially expressed genes (DEGs) and their biological, molecular and cellular mechanisms in response to stressors. The present study is aimed at identifying the effects of abiotic and biotic stress factors, and it was found that several stress-responsive genes are common to both abiotic and biotic stress factors in zebrafish. The meta-analysis of microarray studies revealed that almost 4.7%, i.e., 108 common DEGs, are differentially regulated between abiotic and biotic stresses. This shows that there is global coordination and fine-tuning of gene regulation in response to these two types of challenges. We also performed the dimension reduction methods principal component analysis and partial least squares discriminant analysis, which are able to segregate abiotic and biotic stresses into separate entities. The supervised machine learning model, recursive-support vector machine, could classify abiotic and biotic stresses with 100% accuracy using a subset of DEGs. Besides these methods, the random forests decision tree model classified five out of 8 stress conditions with high accuracy. Finally, functional enrichment analysis revealed the different gene ontology terms, transcription factors, and miRNAs involved in the regulation of stress responses. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which could lessen the prevalence and increase the probability of cure. In this study, several popular machine learning algorithms were used to train on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle box, and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which were the potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by growth hormone suppression test. The best of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
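
The four reported figures are standard confusion-matrix statistics. The sketch below shows how they are computed; the counts are invented for illustration, chosen so that all four metrics come out near the reported values, and are not the study's data.

```python
# How PPV, NPV, sensitivity, and specificity derive from a confusion matrix.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "PPV": tp / (tp + fp),          # positive predictive value
        "NPV": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts: 48 of 50 patients and 48 of 50 controls correct
m = diagnostic_metrics(tp=48, fp=2, tn=48, fn=2)
print(m)  # all four metrics equal 0.96 for this symmetric example
```

Unlike sensitivity and specificity, PPV and NPV depend on the class balance of the evaluation set, which is why the study's roughly balanced test set makes all four figures comparable.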

  5. Advances in industrial biopharmaceutical batch process monitoring: Machine-learning methods for small data problems.

    Science.gov (United States)

    Tulsyan, Aditya; Garvin, Christopher; Ündey, Cenk

    2018-04-06

    Biopharmaceutical manufacturing comprises multiple distinct processing steps that require effective and efficient monitoring of many variables simultaneously in real-time. The state-of-the-art real-time multivariate statistical batch process monitoring (BPM) platforms have been in use in recent years to ensure comprehensive monitoring is in place as a complementary tool for continued process verification to detect weak signals. This article addresses a longstanding, industry-wide problem in BPM, referred to as the "Low-N" problem, wherein a product has a limited production history. The current best industrial practice to address the Low-N problem is to switch from a multivariate to a univariate BPM, until sufficient product history is available to build and deploy a multivariate BPM platform. Every batch run without a robust multivariate BPM platform poses a risk of not detecting potential weak signals developing in the process that might have an impact on process and product performance. In this article, we propose an approach to solve the Low-N problem by generating an arbitrarily large number of in silico batches through a combination of hardware exploitation and machine-learning methods. To the best of the authors' knowledge, this is the first article to provide a solution to the Low-N problem in biopharmaceutical manufacturing using machine-learning methods. Several industrial case studies from bulk drug substance manufacturing are presented to demonstrate the efficacy of the proposed approach for BPM under various Low-N scenarios. © 2018 Wiley Periodicals, Inc.

  6. BacHbpred: Support Vector Machine Methods for the Prediction of Bacterial Hemoglobin-Like Proteins

    Directory of Open Access Journals (Sweden)

    MuthuKrishnan Selvaraj

    2016-01-01

    Full Text Available The recent upsurge in microbial genome data has revealed that hemoglobin-like (HbL) proteins may be widely distributed among bacteria and that some organisms may carry more than one HbL encoding gene. However, the discovery of HbL proteins has been limited to a small number of bacteria only. This study describes the prediction of HbL proteins and their domain classification using a machine learning approach. Support vector machine (SVM) models were developed for predicting HbL proteins based upon amino acid composition (AC), dipeptide composition (DC), a hybrid method (AC + DC), and position specific scoring matrix (PSSM). In addition, we introduce for the first time a new prediction method based on max to min amino acid residue (MM) profiles. The average accuracy, standard deviation (SD), false positive rate (FPR), confusion matrix, and receiver operating characteristic (ROC) were analyzed. We also compared the performance of our proposed models in homology detection databases. The performance of the different approaches was estimated using fivefold cross-validation techniques. Prediction accuracy was further investigated through confusion matrix and ROC curve analysis. All experimental results indicate that the proposed BacHbpred can be a promising predictor for determination of HbL related proteins. BacHbpred, a web tool, has been developed for HbL prediction.
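
The amino acid composition (AC) feature set is straightforward to reproduce: each sequence maps to a 20-dimensional vector of residue frequencies, which is then fed to the SVM. The example sequence below is arbitrary, and a full pipeline would add the DC, hybrid, and PSSM features as well.

```python
# Amino acid composition (AC): fraction of each of the 20 standard
# residues in a protein sequence, a classic fixed-length SVM input.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def amino_acid_composition(sequence):
    sequence = sequence.upper()
    n = len(sequence)
    return [sequence.count(aa) / n for aa in AMINO_ACIDS]

# Arbitrary example sequence (not from the study's dataset)
features = amino_acid_composition("MLDQQTINIIKATVPVLKEHGVTITTTFYKNL")
print(len(features))            # 20 features, one per residue type
print(round(sum(features), 6))  # fractions sum to 1 for standard residues
```

Dipeptide composition (DC) extends the same idea to the 400 ordered residue pairs, trading a larger feature space for local-order information.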

  7. Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.

    Science.gov (United States)

    Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C

    2018-03-01

    Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation in addition to genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across different data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; and high correspondence is suggestive of association. A composite kernel is constructed to measure the similarities in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach can correctly control type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events using data that are collected from the Grady Trauma Project. Within the kernel machine testing framework, our methods allow for heterogeneity in effect sizes, nonlinear, and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.
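
The composite-kernel idea (pairwise similarity matrices from each data type, blended with a mixing weight) can be sketched as below. The linear kernels and the fixed weight rho are simplifying assumptions, and the variance-component score test that the method embeds this kernel in is omitted.

```python
# Rough sketch of a composite kernel over genotype and methylation data
# for one gene: per-data-type similarity matrices are combined with a
# mixing weight rho. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 30
G = rng.integers(0, 3, size=(n, 8)).astype(float)  # SNP genotypes (0/1/2)
M = rng.random(size=(n, 5))                        # methylation beta values

def linear_kernel(Z):
    """Pairwise similarity via a standardized linear kernel."""
    Z = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)
    return Z @ Z.T / Z.shape[1]

K_geno = linear_kernel(G)
K_meth = linear_kernel(M)

rho = 0.5  # relative contribution of the two data types
K_composite = rho * K_geno + (1 - rho) * K_meth
print(K_composite.shape)  # (30, 30) pairwise-similarity matrix
```

In the full method, rho is not fixed but searched over, and the resulting kernel is compared against phenotype similarity within a kernel machine regression test.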

  8. Methods and means for the in-house training of mining machine operators

    Directory of Open Access Journals (Sweden)

    Velikanov Vladimir

    2017-01-01

    Full Text Available This study investigates the quality of the in-house training process for mining machine operators. The authors demonstrate the urgency of this problem. Changes in modern society, as well as the development of science and technology, have a direct impact on the vocational education system. This paper describes the main aspects of the in-house training process for mining machine operators and defines its essence, structure, contents, and the main directions of its revitalization. The following solutions are proposed in order to improve the quality of the in-house training process: to use an original method based on a rating system for evaluating operator knowledge, and to use active and interactive forms of modern training technologies. The authors tested these techniques in mining enterprises with the aim of confirming the adequacy of the suggested approaches. The results are given in the work. It is proposed that the integration of these methods and tools has a positive impact on the professional training system.

  9. 27 CFR 19.328 - Wash water.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Wash water. 19.328 Section... THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Production Chemical By-Products § 19.328 Wash water. Water used in washing chemicals to remove spirits therefrom may be run into a wash tank or a distilling...

  10. Bladder wash cytology, quantitative cytology, and the qualitative BTA test in patients with superficial bladder cancer

    NARCIS (Netherlands)

    van der Poel, H. G.; van Balken, M. R.; Schamhart, D. H.; Peelen, P.; de Reijke, T.; Debruyne, F. M.; Schalken, J. A.; Witjes, J. A.

    1998-01-01

    Two new methods for the detection of transitional tumor cells in bladder wash (karyometry: QUANTICYT) and voided urine material (BARD BTA test) were compared with bladder wash cytology for the prediction of histology and tumor recurrence. Bladder wash material and voided urine were sampled from 138

  11. Remediation of cadmium-contaminated paddy soils by washing with calcium chloride: Verification of on-site washing

    International Nuclear Information System (INIS)

    Makino, Tomoyuki; Kamiya, Takashi; Takano, Hiroyuki; Itou, Tadashi; Sekiya, Naoki; Sasaki, Kouta; Maejima, Yuji; Sugahara, Kazuo

    2007-01-01

    We developed a new, three-step soil-wash method to remediate Cd-contaminated paddy fields. The method comprises (1) chemically washing the field soil with a CaCl₂ solution; (2) washing the treated soil with water to eliminate residual Cd and CaCl₂; and (3) on-site treatment of wastewater using a portable wastewater treatment system. Cd concentrations in the treated water were below Japan's environmental quality standard (0.01 mg Cd L⁻¹), and the removal of Cd from the exchangeable fraction was 55% and from the acid-soluble fraction 15%. While soil fertility properties were affected by the soil washing, adverse effects were not crucial and could be corrected. The washing had no effect on rice growth, and reduced the average Cd concentration in rice grains by about two-thirds compared to a control plot. These results confirmed the effectiveness of the soil-wash method in remediating Cd-contaminated paddy fields. - In situ soil washing in a paddy field using an on-site wastewater treatment system resulted in an effective decrease of Cd in soil and rice grains without affecting rice yield

  12. Hand washing promotion for preventing diarrhoea.

    Science.gov (United States)

    Ejemot-Nwadiaro, Regina I; Ehiri, John E; Arikpo, Dachi; Meremikwu, Martin M; Critchley, Julia A

    2015-09-03

    Diarrhoea accounts for 1.8 million deaths in children in low- and middle-income countries (LMICs). One of the identified strategies to prevent diarrhoea is hand washing. To assess the effects of hand washing promotion interventions on diarrhoeal episodes in children and adults. We searched the Cochrane Infectious Diseases Group Specialized Register (27 May 2015); CENTRAL (published in the Cochrane Library 2015, Issue 5); MEDLINE (1966 to 27 May 2015); EMBASE (1974 to 27 May 2015); LILACS (1982 to 27 May 2015); PsycINFO (1967 to 27 May 2015); Science Citation Index and Social Science Citation Index (1981 to 27 May 2015); ERIC (1966 to 27 May 2015); SPECTR (2000 to 27 May 2015); Bibliomap (1990 to 27 May 2015); RoRe, The Grey Literature (2002 to 27 May 2015); World Health Organization (WHO) International Clinical Trial Registry Platform (ICTRP), metaRegister of Controlled Trials (mRCT), and reference lists of articles up to 27 May 2015. We also contacted researchers and organizations in the field. Individually randomized controlled trials (RCTs) and cluster-RCTs that compared the effects of hand washing interventions on diarrhoea episodes in children and adults with no intervention. Three review authors independently assessed trial eligibility, extracted data, and assessed risk of bias. We stratified the analyses for child day-care centres or schools, community, and hospital-based settings. Where appropriate, incidence rate ratios (IRR) were pooled using the generic inverse variance method and random-effects model with 95% confidence intervals (CIs). We used the GRADE approach to assess the quality of evidence. We included 22 RCTs: 12 trials from child day-care centres or schools in mainly high-income countries (54,006 participants), nine community-based trials in LMICs (15,303 participants), and one hospital-based trial among people with acquired immune deficiency syndrome (AIDS) (148 participants).Hand washing promotion (education activities, sometimes with

  13. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.

  14. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    Science.gov (United States)

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
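The final curve-fitting stage of the five-stage procedure can be illustrated with a toy example. The exponential decay form and the log-linear fit below are assumptions for illustration; the paper does not commit to this functional form:

```python
import math

def fit_exp_decay(times, fractions):
    """Fit y(t) = A * exp(-k * t) by log-linear least squares.

    A stand-in for the curve-fitting stage applied to, e.g., the fraction
    of open receptors as a function of time since transmitter release.
    """
    ys = [math.log(y) for y in fractions]      # linearise: ln y = ln A - k t
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    sxx = sum((t - mx) ** 2 for t in times)
    sxy = sum((t - mx) * (y - my) for t, y in zip(times, ys))
    slope = sxy / sxx
    return math.exp(my - slope * mx), -slope   # amplitude A, decay rate k
```

Once fitted on a corpus of Monte Carlo runs, such a parametric curve can be evaluated at negligible cost compared with re-running the stochastic simulation.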

  15. A machine learning method for the prediction of receptor activation in the simulation of synapses.

    Directory of Open Access Journals (Sweden)

    Jesus Montes

    Full Text Available Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of

  16. NetiNeti: discovery of scientific names from text using machine learning methods

    Directory of Open Access Journals (Sweden)

    Akella Lakshmi

    2012-08-01

Full Text Available Abstract Background A scientific name for an organism can be associated with almost all biological data. Name identification is an important step in many text mining tasks aiming to extract useful information from biological, biomedical and biodiversity text sources. A scientific name acts as an important metadata element to link biological information. Results We present NetiNeti (Name Extraction from Textual Information-Name Extraction for Taxonomic Indexing), a machine learning based approach for recognition of scientific names including the discovery of new species names from text that will also handle misspellings, OCR errors and other variations in names. The system generates candidate names using rules for scientific names and applies probabilistic machine learning methods to classify names based on structural features of candidate names and features derived from their contexts. NetiNeti can also disambiguate scientific names from other names using the contextual information. We evaluated NetiNeti on legacy biodiversity texts and biomedical literature (MEDLINE). NetiNeti performs better (precision = 98.9% and recall = 70.5%) compared to a popular dictionary-based approach (precision = 97.5% and recall = 54.3%) on a 600-page biodiversity book that was manually marked by an annotator. On a small set of PubMed Central’s full text articles annotated with scientific names, the precision and recall values are 98.5% and 96.2% respectively. NetiNeti found more than 190,000 unique binomial and trinomial names in more than 1,880,000 PubMed records when used on the full MEDLINE database. NetiNeti also successfully identifies almost all of the new species names mentioned within web pages. Conclusions We present NetiNeti, a machine learning based approach for identification and discovery of scientific names. The system implementing the approach can be accessed at http://namefinding.ubio.org.
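The candidate-generation step can be sketched with a deliberately naive rule; the pattern below is an illustrative assumption, much looser than NetiNeti's actual rules, and shows why a downstream classifier is needed:

```python
import re

# Naive candidate rule: a capitalised "genus" token followed by a
# lower-case "epithet". NetiNeti's real rules are considerably richer.
BINOMIAL = re.compile(r"\b([A-Z][a-z]+) ([a-z]{3,})\b")

def candidate_names(text):
    """Return candidate binomials; a classifier must then reject false hits."""
    return [" ".join(m) for m in BINOMIAL.findall(text)]
```

Because capitalised sentence openers such as "The quick" also match, the probabilistic classifier has to reject such false candidates using structural and contextual features.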

  17. Tank 4 Characterization, Settling, And Washing Studies

    International Nuclear Information System (INIS)

    Bannochie, C.; Pareizs, J.; Click, D.; Zamecnik, J.

    2009-01-01

(3) There is 19% more S than can be accounted for by IC sulfate measurement. This additional soluble S is detected by ICP-AES analysis of the supernate. (4) Total supernate and slurry sulfur by ICP-AES should be monitored during washing in addition to supernate sulfate in order to avoid underestimating the amount of sulfur species removed or remaining in the supernate. (5) OLI simulation calculations show that the presence of undissolved Burkeite in the Tank 4 sample is reasonable, assuming a small difference in the Na concentration that is well within the analytical uncertainties of the reported value. The following conclusions were drawn from the blend studies of Tank 4 and decanted Tank 51-E1: (1) The addition of Tank 4 slurry to a decanted Tank 51-E1 sample significantly improved the degree and time of settling. (2) The addition of Tank 4 slurry to a decanted Tank 51-E1 sample significantly improved the plastic viscosity and yield stress. (3) The SRNL washing test, where nearly all of the wash solution was decanted from the solids, indicates that approximately 96% or more of the total S was removed from the blend in these tests, and that the removal of the sulfur tracks closely with that of Na. Insoluble (undissolved) S remaining in the washed sludge was calculated from an estimate of the final slurry liquid fraction, the S result in the slurry digestion, and the S in the final decant (which was very close to the method detection limit). Based on this calculated result, about 4% of the initial total S remained after these washes; this amount is equivalent to about 18% of the initially undissolved S

  18. Comparison of four machine learning methods for object-oriented change detection in high-resolution satellite imagery

    Science.gov (United States)

    Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan

    2018-03-01

High-resolution image change detection is one of the key technologies of remote sensing application, of great significance for resource survey, environmental monitoring, precision agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the applicability of different machine learning methods to change detection. To compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM has higher overall accuracy than RF, Adaboost, and DBN at small sample sizes, for both binary and from-to change detection. As the number of samples increases, RF achieves higher overall accuracy than Adaboost, SVM and DBN.
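The overall accuracy used to rank the four classifiers can be computed directly from predicted and reference labels; the helpers below (with Cohen's kappa, a common companion metric in change detection) are a generic sketch, not the authors' code:

```python
def overall_accuracy(pred, truth):
    """Fraction of correctly labelled pixels/objects (binary or from-to)."""
    assert len(pred) == len(truth)
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def cohens_kappa(pred, truth):
    """Agreement corrected for chance; often reported alongside overall
    accuracy in change-detection studies."""
    n = len(truth)
    po = overall_accuracy(pred, truth)
    labels = set(truth) | set(pred)
    pe = sum((pred.count(c) / n) * (truth.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe)
```

The same two functions can score each classifier's map against the reference change map, reproducing the kind of comparison the abstract describes.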

  19. Cost Optimization on Energy Consumption of Punching Machine Based on Green Manufacturing Method at PT Buana Intan Gemilang

    Directory of Open Access Journals (Sweden)

    Prillia Ayudia

    2017-01-01

Full Text Available PT Buana Intan Gemilang is a company engaged in the textile industry. Curtain textile production requires a punching machine to control the fabric process. Because the operator still works manually, the cost of electrical energy consumption is high. To solve this problem, green manufacturing can be implemented on the punching machine. The method first identifies the company's colour category (black, brown, grey or green) using a questionnaire. Secondly, the improvement area to be optimized is analysed; the improvement plan at this stage focuses on the energy and technology areas. Thirdly, the process is applied by modifying the technology, implementing an automation system on the punching machine so that the green level of the machine process increases. After implementing the method, the cost of electrical energy consumption can be reduced by Rp 1.068.159/day.

  20. A novel washing algorithm for underarm stain removal

    Science.gov (United States)

    Acikgoz Tufan, H.; Gocek, I.; Sahin, U. K.; Erdem, I.

    2017-10-01

After contact with human sweat, which comprises around 27% sebum, anti-perspirants containing aluminium chloride or its compounds form a gel-like structure whose solubility in water is very poor. In daily use, this gel-like structure closes sweat pores and hinders wetting of the skin by sweat. However, in contact with garments it forms yellowish stains at the underarms of the garments. These stains are very hard to remove with regular machine washing. In this study, first of all, we focused on understanding and simulating such stain formation on garments. Two alternative procedures are offered to form the gel-like structures. In both procedures, commercially available spray or deo-stick type anti-perspirants, standard acidic and basic sweat solutions and artificial sebum are used to form gel-like structures, which are applied to fabric in order to obtain hard stains. Secondly, after simulating the stain on the fabric, we put our efforts into developing a washing algorithm specifically designed for removal of underarm stains. Eight alternative washing algorithms are offered, with varying washing temperature, amount of detergent, and pre-wash stain removal procedures. The best algorithm is selected by comparing Tristimulus Y values after washing.
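The abstract only states that algorithms were ranked by comparing Tristimulus Y values after washing; one hypothetical way to turn three Y readings (stained, washed, clean reference) into a ranking score is:

```python
def stain_removal_index(y_stained, y_washed, y_clean):
    """Hypothetical figure of merit from Tristimulus Y readings:
    0 = no removal, 1 = stain fully removed. The paper does not define
    this index; it is assumed here purely for illustration."""
    return (y_washed - y_stained) / (y_clean - y_stained)

def best_algorithm(results, y_stained, y_clean):
    """Pick the washing algorithm whose post-wash Y gives the highest index.

    results: dict mapping algorithm name -> Y value after washing."""
    return max(results,
               key=lambda a: stain_removal_index(y_stained, results[a], y_clean))
```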

  1. Water saving in IC wafer washing process; IC wafer senjo deno sessui taisaku

    Energy Technology Data Exchange (ETDEWEB)

    Harada, H. [Mitsubishi Corp., Tokyo (Japan); Araki, M.; Nakazawa, T.

    1997-11-30

This paper reports features of a wafer washing technology, a new IC wafer washing process, its pure water saving effect, and a 'QC washing' which has a pure water saving effect in wafer washing. Wafer washing processes generally include the SC1 process (using ammonia + hydrogen peroxide aqueous solution), for removing contamination by ultrafine particles; the SC2 process (using hydrochloric acid + hydrogen peroxide aqueous solution), for removing heavy-metal contamination; the piranha washing process (using hot sulfuric acid + hydrogen peroxide aqueous solution), for removing organic contamination; and the DHF process (using dilute hydrofluoric acid), for removing natural oxide films. Natural oxide films are now retained as surface protection films, by which surface contamination has been reduced remarkably. A high-temperature washing-chemical circulating and filtering technology developed in Japan has brought about a reform of the wafer washing processes used previously. Spin washing is used as a water saving measure, in which washing chemicals or pure water are sprayed onto each wafer as it is spin-rotated, allowing washing and rinsing with small amounts of washing chemicals and pure water. The QC washing is a method of replacing the tank interior with pure water as quickly as possible in order to increase the rinsing effect. 7 refs., 5 figs.

  2. Real-time cytometric assay of nitric oxide and superoxide interaction in peripheral blood monocytes: A no-wash, no-lyse kinetic method.

    Science.gov (United States)

    Balaguer, Susana; Diaz, Laura; Gomes, Angela; Herrera, Guadalupe; O'Connor, José-Enrique; Urios, Amparo; Felipo, Vicente; Montoliu, Carmina

    2017-05-01

Nitric oxide (NO) and its related reactive nitrogen species (RNS) and reactive oxygen species (ROS) are crucial in monocyte responses against pathogens and also in inflammatory conditions. Central to both processes is the generation of the strong oxidant peroxynitrite (ONOO) by a fast reaction between NO and superoxide anion. ONOO is a biochemical junction for ROS and RNS cytotoxicity and causes protein nitrosylation. Circulating by-products of protein nitrosylation are early biomarkers of inflammation-based conditions, including minimal hepatic encephalopathy in cirrhotic patients (Montoliu et al., Am J Gastroenterol 2011; 106:1629-1637). In this context, we have designed a novel no-wash, no-lyse real-time flow cytometry assay to detect and follow up the NO- and superoxide-driven generation of ONOO in peripheral blood monocytes. Whole blood samples were stained with CD45 and CD14 antibodies plus one of a series of fluorescent probes sensitive to RNS, ROS, or glutathione, namely 4-amino-5-methylamino-2',7'-difluorofluorescein diacetate, dihydrorhodamine 123, MitoSOX Red, dihydroethidium, and 5-chloromethylfluorescein diacetate. Samples were exposed sequentially to a NO donor and three different superoxide donors, and analyzed in real time by kinetic flow cytometry. Relevant kinetic descriptors, such as the rate of fluorescence change, were calculated from the kinetic plot. The generation of ONOO, which consumes both NO and superoxide, led to a decrease in the intensity of the cellular fluorescence of the probes sensitive to these molecules. This is a fast and simple assay that may be used to monitor the intracellular generation of ONOO in physiological, pathological, and pharmacological contexts. © 2015 International Clinical Cytometry Society.

  3. Method of Automatic Ontology Mapping through Machine Learning and Logic Mining

    Institute of Scientific and Technical Information of China (English)

    王英林

    2004-01-01

Ontology mapping is the bottleneck of handling conflicts among heterogeneous ontologies and of implementing reconfiguration or interoperability of legacy systems. We propose an ontology mapping method using machine learning, type constraints and logic mining techniques. This method is able to find concept correspondences through instances, with the result optimized by an error function; it is able to find attribute correspondences between two equivalent concepts, and the mapping accuracy is enhanced by combining instance learning, type constraints and the logic relations that are embedded in instances; moreover, it solves the most common kind of categorization conflict. We then propose a merging algorithm to generate the shared ontology and a reconfigurable architecture for interoperation based on multiple agents. The legacy systems are encapsulated as information agents to participate in the integration system. Finally, we give a simplified case study.

  4. An Evaluation of Machine Learning Methods to Detect Malicious SCADA Communications

    Energy Technology Data Exchange (ETDEWEB)

    Beaver, Justin M [ORNL; Borges, Raymond Charles [ORNL; Buckner, Mark A [ORNL

    2013-01-01

    Critical infrastructure Supervisory Control and Data Acquisition (SCADA) systems were designed to operate on closed, proprietary networks where a malicious insider posed the greatest threat potential. The centralization of control and the movement towards open systems and standards has improved the efficiency of industrial control, but has also exposed legacy SCADA systems to security threats that they were not designed to mitigate. This work explores the viability of machine learning methods in detecting the new threat scenarios of command and data injection. Similar to network intrusion detection systems in the cyber security domain, the command and control communications in a critical infrastructure setting are monitored, and vetted against examples of benign and malicious command traffic, in order to identify potential attack events. Multiple learning methods are evaluated using a dataset of Remote Terminal Unit communications, which included both normal operations and instances of command and data injection attack scenarios.

  5. Classification of ECG signal with Support Vector Machine Method for Arrhythmia Detection

    Science.gov (United States)

    Turnip, Arjon; Ilham Rizqywan, M.; Kusumandari, Dwi E.; Turnip, Mardi; Sihombing, Poltak

    2018-03-01

An electrocardiogram is a bioelectric potential record that occurs as a result of cardiac activity. QRS detection with zero-crossing calculation is one method that can precisely determine the R peak of the QRS wave as part of arrhythmia detection. In this paper, two experimental schemes (2 minutes duration with different activities: relaxed and typing) were conducted. The two experiments yielded accuracy, sensitivity, and positive predictivity of about 100% each for the first experiment, and about 79%, 93%, and 83%, respectively, for the second. Furthermore, the feature set of the MIT-BIH arrhythmia database is evaluated using the support vector machine (SVM) method in the WEKA software. Combining the available attributes in the WEKA algorithm, the result is constant, since all SVM classes go to the normal class, with an average accuracy of 88.49%.
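A minimal sketch of R-peak picking via zero crossings of the first difference (the threshold and the synthetic signal are illustrative assumptions, not the authors' pipeline):

```python
def r_peaks(signal, threshold):
    """Indices where the first difference crosses zero from + to - (a local
    maximum) while the sample exceeds `threshold`.

    A minimal sketch of zero-crossing R-peak detection; real pipelines add
    band-pass filtering and refractory-period logic.
    """
    peaks = []
    for i in range(1, len(signal) - 1):
        rising = signal[i] - signal[i - 1] > 0      # slope positive before i
        falling = signal[i + 1] - signal[i] <= 0    # slope non-positive after i
        if rising and falling and signal[i] >= threshold:
            peaks.append(i)
    return peaks
```

The threshold suppresses the smaller P and T waves so that only R peaks produce qualifying zero crossings.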

  6. Process for washing electromagnetic filters

    International Nuclear Information System (INIS)

    Guittet, Maurice; Treille, Pierre.

    1980-01-01

This process concerns the washing of an electromagnetic filter used, inter alia, for filtering the drain-off waters of nuclear power station steam generators, by means of washing water used in a closed circuit and freed, after each cleaning, of the solids it holds in suspension by settling them out. This invention divides by 50 the volume of water to be evaporated, thereby ensuring better safety in addition to a very significant saving [fr

  7. Cost Optimization on Energy Consumption of Punching Machine Based on Green Manufacturing Method at PT Buana Intan Gemilang

    OpenAIRE

    Prillia Ayudia; Rachmat Haris; Mulyana Tatang

    2017-01-01

    PT Buana Intan Gemilang is a company engaged in textile industry. The curtain textile production need punching machine to control the fabric process. The operator still works manually so it takes high cost of electrical energy consumption. So to solve the problem can implement green manufacturing on punching machine. The method include firstly to identify the color by classifying the company into the black, brown, gray or green color categories using questionnaire. Secondly is improvement are...

  8. Methods and means for improving the man-machine systems for NPP control

    International Nuclear Information System (INIS)

    Konstantinov, L.V.; Rakitin, I.D.

    1984-01-01

Consideration is given to the role of ''human factors'' and ways of improving man-machine interaction in NPP control and safety systems (CSS). Simulators and training equipment based on dynamic power unit models, used for training and improving the skill of NPP operators as well as for mastering collective actions of personnel under accident conditions, are considered in detail. The most advanced program complexes for fast NPP diagnostics and their realization in the Federal Republic of Germany, Japan, Canada, the USA and other countries are described. Special attention is paid to the means and methods of video-terminal dialogue interaction between the operator and the plant, both in normal and extreme situations. It is noted that the problems of man-machine interaction became a subject of study only at the end of the 1970s, after analysis of the causes of the Three Mile Island accident (USA). Publications dealing with the development of prospective control rooms for NPPs are analyzed. It is concluded that radical changes both in equipment and in the principles of organizing personnel activity will take place in the near future, based on progress in creating dialogue means and computers of the fourth and fifth generations, as well as on the engineering-psychological and technical aspects of design.

  9. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable level of uncertainty. In this work, the structural complexity index is estimated using machine learning methods on raw geo-data. More precisely, the complexity metric is approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to simulate the actual 3D model at a given precision without error using machine learning algorithms.

  10. An illustration of new methods in machine condition monitoring, Part I: stochastic resonance

    International Nuclear Information System (INIS)

    Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.

    2017-01-01

    There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach. (paper)
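As a toy illustration of the resonator itself, a discrete overdamped bistable system driven by an input plus noise can be simulated in a few lines; the parameters here are fixed by assumption, whereas the paper optimises them:

```python
import random

def bistable_resonator(signal, dt=0.01, noise=0.5, seed=0):
    """Discrete overdamped bistable system x' = x - x**3 + s(t), plus
    additive Gaussian noise (Euler-Maruyama step).

    Illustrative only: dt and the noise level are arbitrary assumptions,
    whereas the paper designs the resonator via principled optimisation.
    """
    rng = random.Random(seed)
    x, out = 0.0, []
    for s in signal:
        drift = x - x ** 3 + s                    # double-well restoring force
        x += dt * drift + (dt ** 0.5) * noise * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

With a weak periodic input, the noise helps the state hop between the two wells in step with the input, which is the stochastic-resonance amplification effect exploited for feature extraction.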

  11. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution over the whole workspace was discussed, and the maximum-error position of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position, and accuracy synthesis was conducted using the Monte Carlo method. Taking the error sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability of the maximal volume error being less than 0.05 mm was improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was improved markedly. The model can be used for error analysis and accuracy synthesis of complex multi-embranchment motion chain systems, and to improve the manufacturing precision of such systems.
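The Monte Carlo accuracy-synthesis step can be sketched generically. The Gaussian per-part error model below is a hypothetical stand-in for the machine's actual kinematic error chain:

```python
import random

def prob_within_tolerance(part_sigmas, tol=0.05, trials=20000, seed=1):
    """Monte Carlo estimate of P(|total error| < tol) when each part
    contributes an independent zero-mean Gaussian error.

    The additive Gaussian model is an assumption for illustration; the
    paper propagates errors through the machine's kinematic chain.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        err = sum(rng.gauss(0.0, s) for s in part_sigmas)
        if abs(err) < tol:
            hits += 1
    return hits / trials
```

Re-running the estimate while varying one part's sigma shows which tolerances the overall probability is most sensitive to, which is how accuracy can be allocated to the main parts.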

  12. MU-LOC: A Machine-Learning Method for Predicting Mitochondrially Localized Proteins in Plants

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2018-05-01

    Full Text Available Targeting and translocation of proteins to the appropriate subcellular compartments are crucial for cell organization and function. Newly synthesized proteins are transported to mitochondria with the assistance of complex targeting sequences containing either an N-terminal pre-sequence or a multitude of internal signals. Compared with experimental approaches, computational predictions provide an efficient way to infer subcellular localization of a protein. However, it is still challenging to predict plant mitochondrially localized proteins accurately due to various limitations. Consequently, the performance of current tools can be improved with new data and new machine-learning methods. We present MU-LOC, a novel computational approach for large-scale prediction of plant mitochondrial proteins. We collected a comprehensive dataset of plant subcellular localization, extracted features including amino acid composition, protein position weight matrix, and gene co-expression information, and trained predictors using deep neural network and support vector machine. Benchmarked on two independent datasets, MU-LOC achieved substantial improvements over six state-of-the-art tools for plant mitochondrial targeting prediction. In addition, MU-LOC has the advantage of predicting plant mitochondrial proteins either possessing or lacking N-terminal pre-sequences. We applied MU-LOC to predict candidate mitochondrial proteins for the whole proteome of Arabidopsis and potato. MU-LOC is publicly available at http://mu-loc.org.
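Of the feature groups listed above, amino acid composition is simple enough to sketch (the position weight matrix and co-expression features are not shown):

```python
# The 20 standard amino acids in a fixed order, so feature indices are stable.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Amino acid composition: the fraction of each of the 20 residues.

    One of the MU-LOC feature groups; this sketch ignores non-standard
    residue codes such as B, Z, or X.
    """
    seq = seq.upper()
    n = len(seq) or 1          # avoid division by zero on empty input
    return [seq.count(a) / n for a in AMINO_ACIDS]
```

The resulting 20-dimensional vector would be concatenated with the other feature groups before being fed to the deep neural network or SVM predictor.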

  13. Logic Learning Machine and standard supervised methods for Hodgkin's lymphoma prognosis using gene expression data and clinical variables.

    Science.gov (United States)

    Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco

    2018-03-01

This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.

  14. Can Machines Learn Respiratory Virus Epidemiology?: A Comparative Study of Likelihood-Free Methods for the Estimation of Epidemiological Dynamics

    Directory of Open Access Journals (Sweden)

    Heidi L. Tessmer

    2018-03-01

Full Text Available To estimate and predict the transmission dynamics of respiratory viruses, the estimation of the basic reproduction number, R0, is essential. Recently, approximate Bayesian computation methods have been used as likelihood-free methods to estimate epidemiological model parameters, particularly R0. In this paper, we explore various machine learning approaches, the multi-layer perceptron, convolutional neural network, and long short-term memory, to learn and estimate the parameters. Further, we compare the accuracy of the estimates and the time requirements for the machine learning and approximate Bayesian computation methods on both simulated and real-world epidemiological data from outbreaks of influenza A(H1N1)pdm09, mumps, and measles. We find that the machine learning approaches can be verified and tested faster than the approximate Bayesian computation method, but that the approximate Bayesian computation method is more robust across different datasets.
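A toy ABC rejection sampler shows the likelihood-free idea on a deterministic SIR final-size model; the prior, tolerance, and summary statistic are illustrative assumptions, far simpler than the paper's epidemic models:

```python
import math
import random

def final_size(r0, iters=200):
    """Deterministic SIR final epidemic size z solving z = 1 - exp(-r0*z),
    found by fixed-point iteration."""
    z = 0.5
    for _ in range(iters):
        z = 1.0 - math.exp(-r0 * z)
    return z

def abc_estimate_r0(observed_size, eps=0.02, draws=2000, seed=0):
    """ABC rejection: draw R0 from a Uniform(1, 4) prior, keep draws whose
    simulated final size lies within eps of the observation, and return
    the posterior mean."""
    rng = random.Random(seed)
    accepted = [r for r in (rng.uniform(1.0, 4.0) for _ in range(draws))
                if abs(final_size(r) - observed_size) < eps]
    return sum(accepted) / len(accepted)
```

No likelihood is ever evaluated: the simulator plus an acceptance tolerance stand in for it, which is exactly the property that makes ABC attractive when the epidemic model's likelihood is intractable.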

  15. Quantitative Diagnosis of Rotor Vibration Fault Using Process Power Spectrum Entropy and Support Vector Machine Method

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Fei

    2014-01-01

Full Text Available To improve the diagnosis capacity of rotor vibration fault in stochastic processes, an effective fault diagnosis method, named the Process Power Spectrum Entropy (PPSE) and Support Vector Machine (SVM) method (PPSE-SVM for short), was proposed. The fault diagnosis model of PPSE-SVM was established by fusing the PPSE method and SVM theory. Based on a simulation experiment of rotor vibration fault, process data for four typical vibration faults (rotor imbalance, shaft misalignment, rotor-stator rubbing, and pedestal looseness) were collected at multiple points (multiple channels) and multiple speeds. By using the PPSE method, the PPSE values of these data were extracted as fault feature vectors to establish the SVM model of rotor vibration fault diagnosis. The rotor vibration fault diagnosis results demonstrate that the proposed method possesses high precision, good learning ability, good generalization ability, and strong fault tolerance (robustness) in the four aspects of distinguishing fault type, fault severity, fault location, and noise immunity of rotor stochastic vibration. This paper presents a novel method (PPSE-SVM) for rotor vibration fault diagnosis and real-time vibration monitoring. The presented effort is promising for improving the fault diagnosis precision of rotating machinery such as gas turbines.
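One common reading of power spectrum entropy, assumed here since the abstract does not reproduce the exact definition, is the Shannon entropy of the normalised DFT power spectrum:

```python
import cmath
import math

def power_spectrum_entropy(signal):
    """Shannon entropy of the normalised DFT power spectrum.

    Low for a pure tone (energy concentrated in one bin), high for
    broadband content (energy spread over many bins). A naive O(n^2) DFT
    is used to keep the sketch dependency-free.
    """
    n = len(signal)
    power = []
    for k in range(1, n // 2):      # skip the DC bin, use positive freqs
        x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        power.append(abs(x) ** 2)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in probs)
```

A clean periodic vibration signal concentrates its energy in a few bins (low entropy), while rubbing or looseness spreads energy across the spectrum (high entropy), which is the intuition behind using this value as a fault feature.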

  16. Using multivariate machine learning methods and structural MRI to classify childhood onset schizophrenia and healthy controls

    Directory of Open Access Journals (Sweden)

    Deanna eGreenstein

    2012-06-01

Full Text Available Introduction: Multivariate machine learning methods can be used to classify groups of schizophrenia patients and controls using structural magnetic resonance imaging (MRI). However, machine learning methods to date have not been extended beyond classification and contemporaneously applied in a meaningful way to clinical measures. We hypothesized that brain measures would classify groups, and that increased likelihood of being classified as a patient using regional brain measures would be positively related to illness severity, developmental delays, and genetic risk. Methods: Using 74 anatomic brain MRI subregions and Random Forest, we classified 98 COS patients and 99 age-, sex-, and ethnicity-matched healthy controls. We also used Random Forest to determine the likelihood of being classified as a schizophrenia patient based on MRI measures. We then explored relationships between brain-based probability of illness and symptoms, premorbid development, and presence of copy number variation associated with schizophrenia. Results: Brain regions jointly classified COS and control groups with 73.7% accuracy. Greater brain-based probability of illness was associated with worse functioning (p = 0.0004) and fewer developmental delays (p = 0.02). Presence of copy number variation (CNV) was associated with lower probability of being classified as schizophrenia (p = 0.001). The regions that were most important in classifying groups included left temporal lobes, bilateral dorsolateral prefrontal regions, and left medial parietal lobes. Conclusions: Schizophrenia and control groups can be well classified using Random Forest and anatomic brain measures, and brain-based probability of illness has a positive relationship with illness severity and a negative relationship with developmental delays/problems and CNV-based risk.
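A hedged sketch of the classification-plus-probability step: Random Forest both classifies groups and, via its vote fractions, yields the per-subject "probability of illness" the study correlates with clinical measures. Random synthetic features stand in for the 74 anatomic MRI measures:

```python
# Sketch: Random Forest classification plus per-subject probability of illness.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 200, 74
y = np.repeat([0, 1], n // 2)                       # 0 = control, 1 = patient
X = rng.standard_normal((n, d)) + 0.6 * y[:, None]  # small per-feature group shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

prob_illness = clf.predict_proba(X_te)[:, 1]        # P(patient | brain features)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

`prob_illness` is the quantity that would then be correlated with symptoms or CNV status; with real MRI data the separation is far weaker than in this toy example.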

  17. Entropy method combined with extreme learning machine method for the short-term photovoltaic power generation forecasting

    International Nuclear Information System (INIS)

    Tang, Pingzhou; Chen, Di; Hou, Yushuo

    2016-01-01

As the world's energy problem becomes more severe day by day, photovoltaic power generation has no doubt opened a new door for us. It will provide an effective solution to this severe energy problem and meet humanity's needs for energy if we can apply photovoltaic power generation in real life. Like wind power generation, photovoltaic power generation is uncertain. Therefore, the forecast of photovoltaic power generation is very crucial. In this paper, the entropy method and the extreme learning machine (ELM) method were combined to forecast short-term photovoltaic power generation. First, the entropy method is used to process the initial data; the network is then trained on the normalized data and used to forecast electricity generation. Finally, the results obtained through the entropy method with ELM were compared with those generated through the generalized regression neural network (GRNN) and radial basis function neural network (RBF) methods. We found that the entropy method combined with the ELM method possesses higher accuracy and faster calculation.
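The ELM at the core of the method can be sketched in a few lines: hidden-layer weights are random and never trained, and the output weights come from a single least-squares solve, which is why training is fast. Toy data only; the entropy preprocessing and PV data are not reproduced:

```python
# Minimal extreme learning machine (ELM) sketch for regression.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, (300, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]              # smooth target function
model = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((y - elm_predict(model, X)) ** 2)))
print(round(rmse, 3))                                # small training RMSE
```

The single `lstsq` call replaces iterative backpropagation, which is the speed advantage the abstract reports over GRNN/RBF-style training.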

  18. Predicting metabolic syndrome using decision tree and support vector machine methods

    Directory of Open Access Journals (Sweden)

    Farzaneh Karimi-Alavijeh

    2016-06-01

Full Text Available BACKGROUND: Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered as a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems are highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. METHODS: This study aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This research is a practical one in which data from 2107 participants of the Isfahan Cohort Study have been utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria, and the two methods of decision tree and SVM were selected to predict metabolic syndrome. The criteria of sensitivity, specificity and accuracy were used for validation. RESULTS: SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) in the SVM (decision tree) method. CONCLUSION: The results show that the SVM method is more efficient than the decision tree in sensitivity, specificity and accuracy. The results of the decision tree method show that TG is the most important feature in predicting metabolic syndrome.
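A sketch of the validation comparison described above: derive sensitivity, specificity and accuracy from a confusion matrix for both classifiers. The synthetic risk-factor features are assumptions; the cohort data is not reproduced:

```python
# Sketch: SVM vs decision tree compared on sensitivity/specificity/accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 400
y = np.repeat([0, 1], n // 2)                       # 1 = develops the syndrome
X = rng.standard_normal((n, 6)) + 1.0 * y[:, None]  # toy risk-factor features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def report(clf):
    """Sensitivity, specificity, accuracy on the held-out set."""
    tn, fp, fn, tp = confusion_matrix(
        y_te, clf.fit(X_tr, y_tr).predict(X_te)).ravel()
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_te)

for name, clf in [("SVM", SVC()), ("tree", DecisionTreeClassifier(random_state=0))]:
    print(name, [round(v, 2) for v in report(clf)])
```

Sensitivity here is the fraction of true incident cases caught, specificity the fraction of healthy subjects correctly passed, mirroring the study's three validation criteria.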

  19. Abdominopelvic washings: A comprehensive review

    Directory of Open Access Journals (Sweden)

    Erika F Rodriguez

    2013-01-01

Full Text Available Intraperitoneal spread may occur with gynecological epithelial neoplasms, as well as with non-gynecological malignancies, which may result in serosal involvement with or without concomitant effusion. Therefore, washings in patients with abdominopelvic tumors represent important specimens for cytologic examination. They are primarily utilized for staging ovarian cancers, although their role has decreased in staging of endometrial and cervical carcinoma. Abdominopelvic washings can be positive in a variety of pathologic conditions, including benign conditions, borderline neoplastic tumors, locally invasive tumors, or distant metastases. In a subset of cases, washings can be diagnostically challenging due to the presence of co-existing benign cells (e.g., mesothelial hyperplasia, endosalpingiosis, or endometriosis), lesions in which there is only minimal atypia (e.g., serous borderline tumors) or scant atypical cells, and the rarity of specific tumor types (e.g., mesothelioma). Ancillary studies including immunocytochemistry and fluorescence in situ hybridization may be required in difficult cases to resolve the diagnosis. This article provides a comprehensive and contemporary review of abdominopelvic washings in the evaluation of gynecologic and non-gynecologic tumors, including primary peritoneal and mesothelial entities.

  20. WASH-1400: quantifying the uncertainties

    International Nuclear Information System (INIS)

    Erdmann, R.C.; Leverenz, F.L. Jr.; Lellouche, G.S.

    1981-01-01

    The purpose of this paper is to focus on the limitations of the WASH-1400 analysis in estimating the risk from light water reactors (LWRs). This assessment attempts to modify the quantification of the uncertainty in and estimate of risk as presented by the RSS (reactor safety study). 8 refs

  1. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    Science.gov (United States)

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.

  2. Influence of Industrial Washing and Cyclic Fatigue on Slippage of Linen Fabric Threads along the Seam

    Directory of Open Access Journals (Sweden)

    Irina KORUNČAK

    2014-04-01

Full Text Available All seams of garments shall withstand the established force effect in the longitudinal and transverse directions. Resistance to thread slippage along the seam is a major property of fabrics that is regulated by strict guidelines. In many research works, lining fabrics are chosen as the object of research, as thread slippage is most typical of them. As for reports exploring slippage of linen fabric threads along the seam, only very few papers are available. Studies dealing with the influence of industrial washing and cyclic load on the defect under investigation, thereby taking into account the operational properties of garments, are not readily available at all. The objective of the paper is to define the influence of industrial washing and cyclic tensile loading on slippage of linen fabric threads along the seam. For the research, five 100 % linen fabrics of plain weave have been selected. Control test specimens, unwashed and processed with different washing methods, have been analysed. Cyclic tensile loading of the test specimens has been carried out by a tensile machine “Tinius Olsen” at a tensile force P = 20 N, a tensile speed of 12.55 mm/s, and 100 cycles. The testing carried out has demonstrated that industrial washing decreased the resistance of linen fabrics to thread slippage along the seam in most cases. Analysis of the results obtained has shown that cyclic tensile loading led to a particularly significant increase in the seam gap. DOI: http://dx.doi.org/10.5755/j01.ms.20.1.2486

  3. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images

    Directory of Open Access Journals (Sweden)

    Jianfeng Zhang

    2017-01-01

Full Text Available Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and chrominance-threshold method. With extracted color and texture features of the tongue image as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing parameters of tongue images, the accuracy rate of diabetes prediction was increased from 77.83% to 78.77%. The accuracy rate and area under curve (AUC) were not reduced after reducing the dimensions of tongue features with principal component analysis (PCA), while substantially saving the training time. During the training for selecting SVM parameters by genetic algorithm (GA), the accuracy rate of cross-validation increased from about 72% to 83.06%. Finally, we compare with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis.
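The PCA-before-SVM step reported above (fewer dimensions, accuracy preserved, training time saved) can be sketched as a pipeline; the synthetic features below merely stand in for the color/texture descriptors and are an assumption:

```python
# Sketch: PCA dimensionality reduction feeding an SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 300, 60
y = np.repeat([0, 1], n // 2)              # 0 = nondiabetic, 1 = diabetic
X = rng.standard_normal((n, d))
X[:, :5] += 1.5 * y[:, None]               # signal concentrated in 5 features

pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(C=1.0))
scores = cross_val_score(pipe, X, y, cv=5)
print(round(float(scores.mean()), 2))      # cross-validated accuracy
```

Because the discriminative variance survives the projection, accuracy holds up while the SVM now trains on 10 components instead of 60 raw features; kernel parameters (here just `C`) could then be tuned by a GA or grid search.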

  4. Filling machine preventive maintenance using age replacement method in PT Lucas Djaja

    Directory of Open Access Journals (Sweden)

    Mustofa Fifi Herni

    2018-01-01

Full Text Available PT Lucas Djaja is a company engaged in the pharmaceutical industry which produces sterile and non-sterile drugs. The filling machine has a high failure rate and expensive corrective maintenance costs. PT Lucas Djaja has a policy of performing machine maintenance by way of corrective maintenance. The study focused on the critical components, namely bearing R2, bearing 625 and bearing 626. The replacement times currently used by the company, computed with the mean-time-to-failure formula, are 165 days for bearing R2, 205 days for bearing 625, and 182 days for bearing 626. The solution generated by the age replacement method, with minimization of total maintenance cost, instead gives preventive replacement at 60 days for bearing R2, 80 days for bearing 625, and 40 days for bearing 626.
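The age-replacement trade-off can be sketched numerically: choose the replacement age T that minimizes expected cost per unit time, given a lifetime distribution and preventive vs corrective costs. All numbers below (Weibull parameters, costs) are illustrative assumptions, not the company's bearing data:

```python
# Sketch: age replacement -- minimize expected cost per day over age T.
import numpy as np

def cost_rate(T, shape=2.0, scale=100.0, cp=1.0, cf=5.0):
    """Expected cost/day replacing preventively at age T (Weibull lifetimes).
    cp = preventive replacement cost, cf = failure (corrective) cost."""
    t = np.linspace(0.0, T, 2000)
    R = np.exp(-(t / scale) ** shape)                        # survival function
    F_T = 1.0 - R[-1]                                        # P(failure before T)
    cycle_len = np.sum(0.5 * (R[1:] + R[:-1]) * np.diff(t))  # E[min(life, T)]
    return (cp * (1.0 - F_T) + cf * F_T) / cycle_len

ages = np.arange(10, 301)
best = int(ages[np.argmin([cost_rate(T) for T in ages])])
print(best)   # optimal preventive-replacement age, in days
```

Because failures cost several times more than planned swaps, the optimum lands well below the mean life, which is the same qualitative result the study reports (e.g. 60 days vs an MTTF of 165 days for bearing R2).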

  5. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.

    Science.gov (United States)

    Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E

    2016-10-26

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
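The optimization step described above (maximizing the modeled sensor output over operating conditions) can be sketched with simulated annealing. The peaked function below is a made-up stand-in for the learned GOB response model, and the optima at temp = 35, pH = 7, glucose = 5 are assumptions for illustration:

```python
# Sketch: maximizing a modeled biosensor response with simulated annealing.
import numpy as np
from scipy.optimize import dual_annealing

def response(x):
    """Toy amperometric response, peaked at temp=35, pH=7, glucose=5."""
    temp, ph, glucose = x
    return np.exp(-((temp - 35) / 10) ** 2
                  - ((ph - 7) / 1.5) ** 2
                  - ((glucose - 5) / 3) ** 2)

# Minimize the negative response to maximize the output signal
bounds = [(20, 50), (4, 10), (0, 20)]
result = dual_annealing(lambda x: -response(x), bounds, seed=0)
print(np.round(result.x, 1))   # operating conditions near the peak
```

A genetic algorithm would play the same role; both are global optimizers that need only function evaluations of the fitted model, not gradients.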

  6. Prediction of Student Dropout in E-Learning Program Through the Use of Machine Learning Method

    Directory of Open Access Journals (Sweden)

    Mingjie Tan

    2015-02-01

Full Text Available The high rate of dropout is a serious problem in E-learning programs. Thus it has received extensive concern from education administrators and researchers. Predicting the potential dropout students is a workable solution to prevent dropout. Based on the analysis of related literature, this study selected students' personal characteristics and academic performance as input attributes. Prediction models were developed using Artificial Neural Network (ANN), Decision Tree (DT) and Bayesian Networks (BNs). A large sample of 62375 students was utilized in the procedures of model training and testing. The results of each model were presented in a confusion matrix, and analyzed by calculating the rates of accuracy, precision, recall, and F-measure. The results suggested all three machine learning methods were effective for student dropout prediction, and DT presented a better performance. Finally, some suggestions were made for considerable future research.
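A sketch of the evaluation described above: train one of the models (here an ANN) and compute accuracy, precision, recall and F-measure directly from the confusion matrix. The synthetic student features are assumptions; the 62375-student dataset is not reproduced:

```python
# Sketch: ANN dropout predictor scored via confusion-matrix rates.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 600
y = (rng.random(n) < 0.3).astype(int)               # 1 = dropout
X = rng.standard_normal((n, 5)) + 1.0 * y[:, None]  # performance-like features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
print([round(v, 2) for v in (accuracy, precision, recall, f_measure)])
```

With imbalanced dropout rates, precision and recall on the dropout class are more informative than raw accuracy, which is why the study reports all four rates.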

  7. Android Used in the Learning Innovation of Atwood Machines with the Lagrange Mechanics Method

    Directory of Open Access Journals (Sweden)

    Shabrina Shabrina

    2017-12-01

Full Text Available Android is one of the smartphone operating system platforms that is now widely developed for learning media. Android allows the learning process to be more flexible and not teacher-centered, but student-centered. The Atwood machine is an experimental tool often used to observe mechanical laws in uniformly accelerated motion, which can also be described by the Lagrange mechanics method. As an innovative and alternative learning activity, the Android-based Atwood learning app runs two experimental variations: the load mass in the cart and the hanging load mass. The cart-load experiment found that the larger the load mass in the cart, the smaller the acceleration experienced by the system. Meanwhile, the experiment on the variation of the hanging mass found that the larger the hanging mass, the greater the acceleration experienced by the system.
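For reference, the Lagrange treatment of the setup described (cart of mass M on a frictionless track pulled by a hanging mass m; an idealized assumption about the app's model) is compact. With a single coordinate x (cart displacement equals the fall of the hanging mass):

```latex
L = T - V = \tfrac{1}{2}(M + m)\dot{x}^{2} + m g x
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\;\Rightarrow\; (M + m)\ddot{x} = m g
\;\Rightarrow\; a = \frac{m g}{M + m}
```

This matches both reported observations: increasing the cart mass M decreases a, while increasing the hanging mass m increases it.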

  8. Real Time Monitoring System of Pollution Waste on Musi River Using Support Vector Machine (SVM) Method

    Science.gov (United States)

    Fachrurrozi, Muhammad; Saparudin; Erwin

    2017-04-01

Real-time monitoring and early detection system which measures the quality standard of waste in the Musi River, Palembang, Indonesia, is a system for determining air and water pollution levels. This system was designed in order to create an integrated monitoring system and provide real-time information that can be read. It is designed to measure the acidity and turbidity of water polluted by industrial waste, as well as to show and provide conditional data integrated in one system. This system consists of inputting and processing the data, and giving output based on the processed data. Turbidity, substance, and pH sensors are used as detectors that produce an analog direct current (DC) voltage. The early detection system works by determining the threshold values of ammonia, acidity, and water turbidity in the Musi River. The results are then presented by pollution-level group using the Support Vector Machine classification method.

  9. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Felix F. Gonzalez-Navarro

    2016-10-01

Full Text Available Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.

  10. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

    Directory of Open Access Journals (Sweden)

    Zhaoxuan Li

    2016-01-01

Full Text Available We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy production from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work correspond to 15 min averaged power measurements collected from 2014. The accuracy of the model is determined using error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters will improve the total solar power generation forecast of the PV system.
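The error statistics named above have standard definitions that can be sketched directly (the toy power values are illustrative; the paper's PV measurements are not reproduced):

```python
# Sketch: the six forecast error statistics used to score the models.
import numpy as np

def forecast_errors(y_true, y_pred):
    e = y_pred - y_true
    mbe   = e.mean()                      # mean bias error
    mae   = np.abs(e).mean()              # mean absolute error
    rmse  = np.sqrt((e ** 2).mean())      # root mean square error
    rmbe  = mbe / y_true.mean()           # relative MBE
    mpe   = (e / y_true).mean()           # mean percentage error
    rrmse = rmse / y_true.mean()          # relative RMSE
    return dict(MBE=mbe, MAE=mae, RMSE=rmse, rMBE=rmbe, MPE=mpe, rRMSE=rrmse)

y_true = np.array([100.0, 120.0, 80.0, 90.0])   # measured power (e.g. kW)
y_pred = np.array([110.0, 115.0, 85.0, 95.0])   # forecast power
stats = forecast_errors(y_true, y_pred)
print({k: round(v, 3) for k, v in stats.items()})
```

MBE exposes systematic over- or under-forecasting (it can cancel), while MAE/RMSE measure magnitude; the relative variants make inverters of different capacities comparable, which matters for the hierarchical per-inverter aggregation.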

  11. Detection of needle to nerve contact based on electric bioimpedance and machine learning methods.

    Science.gov (United States)

    Kalvoy, Havard; Tronstad, Christian; Ullensvang, Kyrre; Steinfeldt, Thorsten; Sauter, Axel R

    2017-07-01

In an ongoing project for electrical impedance-based needle guidance we have previously shown in an animal model that intraneural needle positions can be detected with bioimpedance measurement. To enhance the power of this method, in this study we investigated whether early detection of the needle merely touching the nerve is also feasible. Measurement of complex impedance during needle-to-nerve contact was compared with needle positions in surrounding tissues in a volunteer study on 32 subjects. Classification analysis using Support Vector Machines demonstrated that discrimination is possible, but that the sensitivity and specificity of the nerve-touch algorithm are not yet at the same level of performance as for intraneural detection.

  12. A control system for and a method of controlling a superconductive rotating electrical machine

    DEFF Research Database (Denmark)

    2014-01-01

This invention relates to a method of controlling and a control system (100) for a superconductive rotating electric machine (200) comprising at least one superconductive winding (102; 103), where the control system (100) is adapted to control a power unit (101) supplying during use the at least...... or more actual values (110, 111) of one or more parameters for a given superconductive winding (102; 103), each parameter representing a physical condition of the given superconductive winding (102; 103), and to dynamically derive one or more electrical current values to be maintained in the given...... superconductive winding (102; 103) by the power unit (101) where the one or more electrical current values is/are derived taking into account the received one or more actual values (110, 111). In this way, greater flexibility and more precise control of the performance of the superconducting rotating electrical...

  13. Predicting metabolic syndrome using decision tree and support vector machine methods.

    Science.gov (United States)

    Karimi-Alavijeh, Farzaneh; Jalili, Saeed; Sadeghi, Masoumeh

    2016-05-01

Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered as a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems are highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. This study aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This research is a practical one in which data from 2107 participants of the Isfahan Cohort Study have been utilized. The subjects without metabolic syndrome according to the ATPIII criteria were selected. The features used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria, and the two methods of decision tree and SVM were selected to predict metabolic syndrome. The criteria of sensitivity, specificity and accuracy were used for validation. SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) in the SVM (decision tree) method. The results show that the SVM method is more efficient than the decision tree in sensitivity, specificity and accuracy. The results of the decision tree method show that TG is the most important feature in predicting metabolic syndrome.

  14. Effects of soap-water wash on human epidermal penetration.

    Science.gov (United States)

    Zhu, Hanjiang; Jung, Eui-Chang; Phuong, Christina; Hui, Xiaoying; Maibach, Howard

    2016-08-01

Skin decontamination is a primary interventional method used to decrease dermal absorption of hazardous contaminants, including chemical warfare agents, pesticides and industrial pollutants. Soap and water wash, the most common and readily available decontamination system, may enhance percutaneous absorption through the "wash-in effect." To better understand the effect of soap-water wash on percutaneous penetration, and provide insight into improving skin decontamination methods, in vitro human epidermal penetration rates of four C(14)-labeled model chemicals (hydroquinone, clonidine, benzoic acid and paraoxon) were assayed using flow-through diffusion cells. Stratum corneum (SC) absorption rates of these chemicals at various hydration levels (0-295% of the dry SC weights) were determined and compared with the results of the epidermal penetration study to clarify the effect of SC hydration on skin permeability. Results showed accelerated penetration curves of benzoic acid and paraoxon after surface wash at 30 min postdosing. Thirty minutes after washing (60 min postdosing), penetration rates of hydroquinone and benzoic acid decreased due to reduced amounts of chemical on the skin surface and in the SC. At the end of the experiment (90 min postdosing), a soap-water wash resulted in lower hydroquinone penetration, greater paraoxon penetration and similar levels of benzoic acid and clonidine penetration compared to penetration levels in the non-wash groups. The observed wash-in effect agrees with the enhancement effect of SC hydration on the SC chemical absorption rate. These results suggest SC hydration derived from surface wash to be one cause of the wash-in effect. Further, the occurrence of a wash-in effect is dependent on chemical identity and elapsed time between exposure and onset of decontamination. By reducing chemical residue quantity on skin surface and in the SC reservoir, the soap-water wash may decrease the total quantity of chemical absorbed in the

  15. Ideology of a multiparametric system for estimating the insulation system of electric machines on the basis of absorption testing methods

    Science.gov (United States)

    Kislyakov, M. A.; Chernov, V. A.; Maksimkin, V. L.; Bozhin, Yu. M.

    2017-12-01

The article deals with modern methods of monitoring the state and predicting the life of electric machines. About 50% of electric machine failures are associated with insulation damage. As promising nondestructive control methods, techniques based on investigating the polarization processes occurring in insulating materials are proposed. To improve the accuracy of determining the state of insulation, a multiparametric approach is considered, which forms the basis for developing an expert system for estimating the insulation's state of health.

  16. Hand washing frequencies and procedures used in retail food services.

    Science.gov (United States)

    Strohbehn, Catherine; Sneed, Jeannie; Paez, Paola; Meyer, Janell

    2008-08-01

    Transmission of viruses, bacteria, and parasites to food by way of improperly washed hands is a major contributing factor in the spread of foodborne illnesses. Field observers have assessed compliance with hand washing regulations, yet few studies have included consideration of frequency and methods used by sectors of the food service industry or have included benchmarks for hand washing. Five 3-h observation periods of employee (n = 80) hand washing behaviors during menu production, service, and cleaning were conducted in 16 food service operations for a total of 240 h of direct observation. Four operations from each of four sectors of the retail food service industry participated in the study: assisted living for the elderly, childcare, restaurants, and schools. A validated observation form, based on 2005 Food Code guidelines, was used by two trained researchers. Researchers noted when hands should have been washed, when hands were washed, and how hands were washed. Overall compliance with Food Code recommendations for frequency during production, service, and cleaning phases ranged from 5% in restaurants to 33% in assisted living facilities. Procedural compliance rates also were low. Proposed benchmarks for the number of times hand washing should occur by each employee for each sector of food service during each phase of operation are seven times per hour for assisted living, nine times per hour for childcare, 29 times per hour for restaurants, and 11 times per hour for schools. These benchmarks are high, especially for restaurant employees. Implementation would mean lost productivity and potential for dermatitis; thus, active managerial control over work assignments is needed. These benchmarks can be used for training and to guide employee hand washing behaviors.

  17. A Study of Applications of Machine Learning Based Classification Methods for Virtual Screening of Lead Molecules.

    Science.gov (United States)

    Vyas, Renu; Bapat, Sanket; Jain, Esha; Tambe, Sanjeev S; Karthikeyan, Muthukumarasamy; Kulkarni, Bhaskar D

    2015-01-01

The ligand-based virtual screening of combinatorial libraries employs a number of statistical modeling and machine learning methods. A comprehensive analysis of the application of these methods for the diversity oriented virtual screening of biological targets/drug classes is presented here. A number of classification models have been built using three types of inputs, namely structure based descriptors, molecular fingerprints and therapeutic category, for performing virtual screening. The activity and affinity descriptors of a set of inhibitors of four target classes DHFR, COX, LOX and NMDA have been utilized to train a total of six classifiers viz. Artificial Neural Network (ANN), k nearest neighbor (k-NN), Support Vector Machine (SVM), Naïve Bayes (NB), Decision Tree (DT) and Random Forest (RF). Among these classifiers, the ANN was found to be the best classifier, with an AUC of 0.9 irrespective of the target. New molecular fingerprints based on pharmacophore, toxicophore and chemophore (PTC) were used to build the ANN models for each dataset. A good accuracy of 87.27% was obtained using 296 chemophoric binary fingerprints for the COX-LOX inhibitors compared to pharmacophoric (67.82%) and toxicophoric (70.64%). The methodology was validated on the classical Ames mutagenicity dataset of 4337 molecules. To evaluate it further, the selectivity and promiscuity of molecules from five drug classes, viz. anti-anginal, anti-convulsant, anti-depressant, anti-arrhythmic and anti-diabetic, were studied. The PTC fingerprints computed for each category were able to capture the drug-class specific features using the k-NN classifier. These models can be useful for selecting optimal molecules for drug design.
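The k-NN-over-fingerprints step can be sketched with binary bit vectors and the Jaccard (Tanimoto) distance, a common choice for this kind of input. The random bit vectors below are stand-ins for the PTC fingerprints, and the bit densities are assumptions:

```python
# Sketch: k-NN classification of binary fingerprints with Jaccard distance.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_bits = 296                                   # matching the chemophore fingerprint length
actives   = rng.random((50, n_bits)) < 0.30    # actives set more bits (assumption)
inactives = rng.random((50, n_bits)) < 0.10
X = np.vstack([actives, inactives])
y = np.repeat([1, 0], 50)                      # 1 = active, 0 = inactive

clf = KNeighborsClassifier(n_neighbors=3, metric="jaccard").fit(X, y)
query = rng.random((1, n_bits)) < 0.30         # an active-like query molecule
print(clf.predict(query))
```

Jaccard distance compares only the shared and differing *set* bits, so it behaves sensibly for sparse binary fingerprints where Euclidean distance would be dominated by common zeros.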

  18. Evaluation of auto-assessment method for C-D analysis based on support vector machine

    International Nuclear Information System (INIS)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Kamihira, Hiroaki; Kishimoto, Tomonari; Goto, Hiroya

    2010-01-01

    Contrast-Detail (C-D) analysis is one of the visual quality assessment methods in medical imaging, and many auto-assessment methods for C-D analysis have been developed in recent years. However, the effects of nonlinear image processing on such auto-assessment methods are not clear. We have therefore developed an auto-assessment method for C-D analysis using a support vector machine (SVM), and have evaluated its performance on images processed with a noise reduction method. The feature indexes used in the SVM were the normalized cross-correlation (NCC) coefficient of each signal between the noise-free and noisy images, the contrast-to-noise ratio (CNR) of each signal, the radius of each signal, and the Student's t-test statistic for the mean difference between the signal and background pixel values. The results showed that the auto-assessment method based on the Student's t-test statistic alone agreed well with the visual assessment for the non-processed images, but disagreed for the images processed with the noise reduction method. In contrast, the auto-assessment method based on an SVM built from NCC and CNR agreed well with the visual assessment for both the non-processed and the noise-reduced images. The SVM-based auto-assessment method for C-D analysis is therefore expected to be robust to nonlinear image processing. (author)
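
    Two of the feature indexes named above are straightforward to compute. The sketch below implements the normalized cross-correlation (NCC) coefficient and a contrast-to-noise ratio (CNR) in pure Python; the pixel patches are illustrative numbers, and the authors' exact CNR definition may differ in detail.

```python
# NCC between a noise-free and a noisy patch, and CNR of a signal region
# against background. Patch values are toy numbers.
import math

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-length patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def cnr(signal, background):
    """Contrast-to-noise ratio: mean difference over background std dev."""
    ms = sum(signal) / len(signal)
    mb = sum(background) / len(background)
    sd = math.sqrt(sum((x - mb) ** 2 for x in background) / len(background))
    return (ms - mb) / sd

noise_free = [10, 12, 14, 16, 18]
noisy = [11, 11, 15, 15, 19]
```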

  19. Distinguishing butchery cut marks from crocodile bite marks through machine learning methods.

    Science.gov (United States)

    Domínguez-Rodrigo, Manuel; Baquedano, Enrique

    2018-04-10

    All models of the evolution of human behaviour depend on the correct identification and interpretation of bone surface modifications (BSM) on archaeofaunal assemblages. Crucial evolutionary features, such as the origin of stone tool use, meat-eating, food-sharing, cooperation and sociality, can only be addressed through confident identification and interpretation of BSM, and more specifically, cut marks. Recently, it has been argued that linear marks with the same properties as cut marks can be created by crocodiles, thereby questioning whether secure cut mark identifications can be made in the Early Pleistocene fossil record. Powerful classification methods based on multivariate statistics and machine learning (ML) algorithms have previously discriminated cut marks from most other potentially confounding BSM. However, crocodile-made marks were marginal to, or played no role in, these comparative analyses. Here, for the first time, we apply state-of-the-art ML methods to crocodile linear BSM and experimental butchery cut marks, showing that the combination of multivariate taphonomy and ML methods provides accurate identification of BSM, including cut and crocodile bite marks. This enables empirically supported hominin behavioural modelling, provided that these methods are applied to fossil assemblages.

  20. A New Application of Support Vector Machine Method: Condition Monitoring and Analysis of Reactor Coolant Pump

    International Nuclear Information System (INIS)

    Meng Qinghu; Meng Qingfeng; Feng Wuwei

    2012-01-01

    The Fukushima nuclear power plant accident caused huge losses and pollution, and it showed that the reactor coolant pump is very important in a nuclear power plant. Therefore, to maintain safety and reliability, the condition of the coolant pump needs to be monitored online and its faults analyzed. In this paper, condition monitoring and analysis based on the support vector machine (SVM) is proposed; the method is well suited to small-sample problems such as the reactor coolant pump. Both experiment data and field data are analyzed. These data are first passed through a multi-band FIR filter to eliminate noise and irrelevant frequency bands. After that, a fault feature selection method based on principal component analysis is proposed: correlated variables are transformed into uncorrelated ones, and the dimensionality is reduced. The SVM method is then used to separate different fault characteristics. It is first applied as a binary classifier to separate each pair of running conditions, and then as a multi-class classifier to separate all of the different condition types; the SVM separates these conditions successfully. On this basis, software based on the SVM was designed for reactor coolant pump condition analysis. This software is installed on the reactor plant control system of the Qinshan nuclear power plant in China, where it monitors online data and detects pump mechanical faults automatically.
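
    The record describes training one binary classifier per pair of running conditions and then combining them into a multi-class classifier. The sketch below shows that one-vs-one voting scheme; a simple nearest-mean rule stands in for the SVM, and the "vibration features" are invented toy values, not pump data.

```python
# One-vs-one multi-class classification by pairwise majority vote.
# A nearest-mean rule replaces the SVM for this self-contained sketch.
from itertools import combinations

def train_pairwise(data):
    """data: {condition: [feature vectors]} -> {(a, b): (mean_a, mean_b)}"""
    def mean(vs):
        return [sum(col) / len(vs) for col in zip(*vs)]
    return {(a, b): (mean(data[a]), mean(data[b]))
            for a, b in combinations(sorted(data), 2)}

def classify(models, x):
    """Majority vote over all pairwise nearest-mean decisions."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    votes = {}
    for (a, b), (ma, mb) in models.items():
        winner = a if dist2(x, ma) <= dist2(x, mb) else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# Toy two-dimensional features for three hypothetical pump conditions.
data = {
    "normal":       [[0.1, 0.1], [0.2, 0.1]],
    "imbalance":    [[1.0, 0.1], [1.1, 0.2]],
    "misalignment": [[0.1, 1.0], [0.2, 1.1]],
}
models = train_pairwise(data)
```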

  1. Machine learning plus optical flow: a simple and sensitive method to detect cardioactive drugs

    Science.gov (United States)

    Lee, Eugene K.; Kurokawa, Yosuke K.; Tu, Robin; George, Steven C.; Khine, Michelle

    2015-07-01

    Current preclinical screening methods do not adequately detect cardiotoxicity. Using human induced pluripotent stem cell-derived cardiomyocytes (iPS-CMs), more physiologically relevant preclinical or patient-specific screening to detect potential cardiotoxic effects of drug candidates may be possible. However, one of the persistent challenges for developing a high-throughput drug screening platform using iPS-CMs is the need to develop a simple and reliable method to measure key electrophysiological and contractile parameters. To address this need, we have developed a platform that combines machine learning paired with brightfield optical flow as a simple and robust tool that can automate the detection of cardiomyocyte drug effects. Using three cardioactive drugs of different mechanisms, including those with primarily electrophysiological effects, we demonstrate the general applicability of this screening method to detect subtle changes in cardiomyocyte contraction. Requiring only brightfield images of cardiomyocyte contractions, we detect changes in cardiomyocyte contraction comparable to - and even superior to - fluorescence readouts. This automated method serves as a widely applicable screening tool to characterize the effects of drugs on cardiomyocyte function.
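
    The platform above extracts a contraction signal from brightfield video via optical flow. As a rough, library-free analogue (not the authors' method), the sketch below sums absolute frame-to-frame pixel differences into a motion-magnitude trace and counts its peaks to estimate beat count; the synthetic one-pixel "frames" stand in for real video.

```python
# Frame-difference motion trace and peak counting for beat detection.
# The synthetic video has three brief contraction events.

def motion_trace(frames):
    """Sum of absolute pixel differences between consecutive frames."""
    return [sum(abs(a - b) for a, b in zip(f1, f2))
            for f1, f2 in zip(frames, frames[1:])]

def count_peaks(trace, threshold):
    """Count local maxima above threshold (plateaus counted once)."""
    return sum(1 for i in range(1, len(trace) - 1)
               if trace[i] > threshold
               and trace[i] >= trace[i - 1] and trace[i] > trace[i + 1])

# Synthetic 1-pixel "video": baseline 100, brief contractions at t = 10, 25, 40.
frames = [[130.0] if t in (10, 25, 40) else [100.0] for t in range(50)]
trace = motion_trace(frames)
beats = count_peaks(trace, threshold=5)
```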

  2. A multilevel-ROI-features-based machine learning method for detection of morphometric biomarkers in Parkinson's disease.

    Science.gov (United States)

    Peng, Bo; Wang, Suhong; Zhou, Zhiyong; Liu, Yan; Tong, Baotong; Zhang, Tao; Dai, Yakang

    2017-06-09

    Machine learning methods have been widely used in recent years for detection of neuroimaging biomarkers in regions of interest (ROIs) and assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use a multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, the low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper-based feature selection methods and a multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The machine learning method performs well in classification between PD patients and normal controls with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in the frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly improved compared with other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming the potentiality of our method in assisting diagnosis of the disease. Copyright © 2017 Elsevier B.V. All rights reserved.
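
    The filter stage of a filter-and-wrapper feature selection pipeline like the one above can be sketched simply: rank each ROI feature by the absolute Welch t-statistic between patients and controls and keep the top k. The feature matrices below are toy numbers, not PPMI data.

```python
# Filter-based feature selection by absolute Welch t-statistic ranking.
import math

def welch_t(xs, ys):
    """Welch's t-statistic for the mean difference of two samples."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def select_top_k(patients, controls, k):
    """patients/controls: lists of feature vectors; returns top-k feature indices."""
    n_feat = len(patients[0])
    scores = [abs(welch_t([p[j] for p in patients], [c[j] for c in controls]))
              for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: scores[j], reverse=True)[:k]

# Feature 0 separates the groups; feature 1 is noise.
patients = [[5.1, 0.2], [5.3, 0.9], [5.2, 0.4]]
controls = [[3.0, 0.5], [3.2, 0.3], [3.1, 0.8]]
top = select_top_k(patients, controls, k=1)
```

    A wrapper stage would then refine this candidate set by re-training the classifier on feature subsets, as the record describes.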

  3. Laser Induced Damage of Potassium Dihydrogen Phosphate (KDP) Optical Crystal Machined by Water Dissolution Ultra-Precision Polishing Method

    Directory of Open Access Journals (Sweden)

    Yuchuan Chen

    2018-03-01

    Full Text Available Laser induced damage threshold (LIDT) is an important optical indicator for the nonlinear Potassium Dihydrogen Phosphate (KDP) crystal used in high power laser systems. In this study, KDP optical crystals are initially machined with single point diamond turning (SPDT), followed by water dissolution ultra-precision polishing (WDUP), and then tested with 355 nm nanosecond pulsed lasers. Power spectral density (PSD) analysis shows that the WDUP process eliminates the laser-detrimental spatial-frequency band of micro-waviness on the SPDT-machined surface and consequently decreases its modulation effect on the laser beams. The laser test results show that the LIDT of the WDUP-machined crystal improves, and its stability increases significantly, by 72.1%, compared with that of SPDT. Moreover, a subsequent ultrasonic-assisted solvent cleaning process is suggested to have a positive effect on the laser performance of the machined KDP crystal. Damage crater investigation indicates that the damage morphologies exhibit strongly thermal-explosion features of melted cores and brittle fractures of periphery material, which can be described with the classic thermal explosion model. The comparison demonstrates that the damage mechanisms for SPDT- and WDUP-machined crystals are the same, and that the WDUP process reveals the real bulk laser resistance of the KDP optical crystal by removing the micro-waviness and subsurface damage on the SPDT-machined surface. This improvement of the WDUP method makes the LIDT more accurate and will be beneficial to the laser performance of KDP crystal.

  4. Usage of I++ Simulator to Program Coordinate Measuring Machines when Common Programming Methods are difficult to apply

    Directory of Open Access Journals (Sweden)

    Gąska A.

    2014-02-01

    Full Text Available Nowadays, simulators facilitate tasks performed daily by engineers of different branches, including coordinate metrologists. Sometimes it is difficult or almost impossible to program a Coordinate Measuring Machine (CMM) using standard methods. This happens, for example, during measurements of nano elements, or when measurements are performed on high-precision (accurate) measuring machines which work in strictly air-conditioned spaces: the presence of the operator in such a room during the programming of the CMM could cause an increase in temperature, which in turn could make it necessary to wait some time until conditions stabilize. This article describes the functioning of a simulator and its usage for Coordinate Measuring Machine programming in the latter situation. The article also describes the general process of programming CMMs which ensures correct machine performance after starting the program on a real machine. As an example supporting the presented considerations, the measurement of an exemplary workpiece, performed on a machine working in a strictly air-conditioned room, is described.

  5. Seeing It All: Evaluating Supervised Machine Learning Methods for the Classification of Diverse Otariid Behaviours.

    Directory of Open Access Journals (Sweden)

    Monique A Ladds

    Full Text Available Constructing activity budgets for marine animals when they are at sea and cannot be directly observed is challenging, but recent advances in bio-logging technology offer solutions to this problem. Accelerometers can potentially identify a wide range of behaviours for animals based on unique patterns of acceleration. However, when analysing data derived from accelerometers, there are many statistical techniques available, which, when applied to different data sets, produce different classification accuracies. We investigated a selection of supervised machine learning methods for interpreting behavioural data from captive otariids (fur seals and sea lions). We conducted controlled experiments with 12 seals, whose behaviours were filmed while they were wearing 3-axis accelerometers. From video we identified 26 behaviours that could be grouped into one of four categories (foraging, resting, travelling and grooming) representing key behaviour states for wild seals. We used data from 10 seals to train four predictive classification models: stochastic gradient boosting (GBM), random forests, a support vector machine (SVM) using four different kernels, and a baseline model: penalised logistic regression. We then took the best parameters from each model and cross-validated the results on the two seals unseen so far. We also investigated the influence of feature statistics (describing some characteristic of the seal), testing the models both with and without these. Cross-validation accuracies were lower than training accuracy, but the SVM with a polynomial kernel was still able to classify seal behaviour with high accuracy (>70%). Adding feature statistics improved accuracies across all models tested. Most categories of behaviour (resting, grooming and feeding) were predicted with reasonable accuracy (52-81%) by the SVM, while travelling was poorly categorised (31-41%). These results show that model selection is important when classifying behaviour and that by using

  6. Washing of Cloth Contaminated with Radionuclides Using a Detergent-free Laundry System

    Energy Technology Data Exchange (ETDEWEB)

    Yim, Sung Paal

    2005-07-01

    In this study, a new laundry system that washes clothes without using detergent (detergent-free) was applied to wash clothes contaminated with radionuclides at the RWTF of KAERI. If clothes contaminated with radionuclides and soil could be decontaminated and cleaned by washing without detergent, the problems caused by the detergent would be avoided. The experiment was performed in two stages. In the first stage, the washability of the processed water from the detergent-free laundry system was investigated with regard to its decontamination efficiency for the radionuclides and its detergency for the soil, using test cloth specimens. In the second stage, real working clothes contaminated with radionuclides from the RWTF were washed using a laundry machine equipped with the detergent-free system. The decontamination and detergency of the clothes were estimated after washing, and the wastewater was also analyzed for its properties.

  7. On Plant Detection of Intact Tomato Fruits Using Image Analysis and Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Kyosuke Yamamoto

    2014-07-01

    Full Text Available Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.
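
    The reported detection scores (recall 0.80, precision 0.88) follow directly from counts of true positives, false positives and false negatives. This sketch computes them from hypothetical counts chosen only to reproduce the reported figures; they are not the study's actual counts.

```python
# Precision and recall from detection counts.

def precision_recall(tp, fp, fn):
    """Standard object-detection precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical: 88 fruits detected correctly, 12 spurious detections,
# 22 fruits missed -> precision 0.88, recall 0.80.
p, r = precision_recall(tp=88, fp=12, fn=22)
```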

  8. Spectral methods in machine learning and new strategies for very large datasets

    Science.gov (United States)

    Belabbas, Mohamed-Ali; Wolfe, Patrick J.

    2009-01-01

    Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here 2 new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm whereupon the kernel induces a probability distribution on its set of partitions, whereas the latter approach—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID:19129490
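
    The Nyström idea discussed above can be shown in a few lines: pick a small set of landmark points, and approximate the full kernel matrix K by C W^{-1} C^T, where C holds kernel values against the landmarks and W is the landmark-landmark block. This pure-Python sketch uses two landmarks (so W can be inverted explicitly); the data points and kernel width are illustrative.

```python
# Nystrom low-rank approximation of an RBF kernel matrix with 2 landmarks.
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel for scalar inputs."""
    return math.exp(-gamma * (x - y) ** 2)

def nystrom_approx(points, landmarks, gamma=1.0):
    """Return a function approx(i, j) ~ K[i][j] via C W^{-1} C^T."""
    C = [[rbf(x, l, gamma) for l in landmarks] for x in points]
    W = [[rbf(a, b, gamma) for b in landmarks] for a in landmarks]
    # Explicit inverse of the 2x2 landmark block.
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    Winv = [[W[1][1] / det, -W[0][1] / det],
            [-W[1][0] / det, W[0][0] / det]]
    def approx(i, j):
        return sum(C[i][a] * Winv[a][b] * C[j][b]
                   for a in range(2) for b in range(2))
    return approx

# Two tight clusters; one landmark per cluster captures the structure.
points = [0.0, 0.1, 5.0, 5.1]
k_hat = nystrom_approx(points, landmarks=[0.05, 5.05])
```

    The paper's contribution is precisely the choice of which columns (landmarks) to use, by sampling or by sorting, rather than this fixed choice.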

  9. Comparison of Machine Learning Methods for the Purpose Of Human Fall Detection

    Directory of Open Access Journals (Sweden)

    Strémy Maximilián

    2014-12-01

    Full Text Available According to several studies, the European population has been aging rapidly over recent years. It is therefore important to ensure that the aging population is able to live independently without the support of the working-age population. According to these studies, falls are the most dangerous and frequent accidents in the everyday life of the aging population. In our paper, we present a system to track human falls by visual detection, i.e. using no wearable equipment. For this purpose, we used a Kinect sensor, which provides the human body position in Cartesian coordinates. It is possible to capture a human body directly because the Kinect sensor has a depth camera as well as an infrared camera. The first step in our research was to detect postures and classify the fall accident. We experimented with and compared selected machine learning methods, including Naive Bayes, decision trees and the SVM method, to compare their performance in recognizing human postures (standing, sitting and lying). The highest classification accuracy, over 93.3%, was achieved by the decision tree method.
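
    In the spirit of the decision-tree classifier above, posture recognition can be sketched as a couple of hand-written decision rules on skeleton geometry. The thresholds and features below are invented for illustration; a real system would learn them from labelled Kinect data.

```python
# Decision-rule posture classifier on two toy skeleton features:
# head height above the floor and vertical spread of the skeleton.
# Thresholds are hypothetical, not learned from data.

def classify_posture(head_height_m, vertical_spread_m):
    if head_height_m > 1.2:
        return "standing"
    if vertical_spread_m > 0.6:
        return "sitting"
    return "lying"

samples = [
    ((1.65, 1.6), "standing"),
    ((0.95, 0.8), "sitting"),
    ((0.30, 0.3), "lying"),
]
predictions = [classify_posture(*x) for x, _ in samples]
```

    A fall event would then be flagged as a rapid transition into the "lying" class.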

  10. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    Science.gov (United States)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and the diagnosis of reciprocating compressors are among the most active research topics in the field of reciprocating machinery fault diagnosis at present. A large number of feature extraction and classification methods have been widely applied in the related research, but the practical fault alarm and the accuracy of diagnosis have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarm and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of the reciprocating compressor are presented in the paper, and the existing data of the online monitoring system are used to extract a total of 15 types of fault feature parameters; the sensitive connections between faults and feature parameters have been clarified by using the distance evaluation technique, and the sensitive characteristic parameters of different faults have been obtained. On this basis, a method based on fault feature parameters and the support vector machine (SVM) is developed and applied to practical fault diagnosis. A better ability of early fault warning has been proved by the experiment and the practical fault cases. Automatic classification of the fault alarm data by the SVM has obtained better diagnostic accuracy.
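
    The distance evaluation idea used above to pick sensitive parameters can be sketched as a simple separability score: a feature is ranked by the ratio of its between-class distance to its average within-class scatter. The fault data below are toy values, and this is only one common form of the distance evaluation criterion.

```python
# Distance-evaluation-style feature sensitivity: between-class distance
# divided by the summed within-class scatter. Toy normal/fault data.

def sensitivity(class_a, class_b):
    ma = sum(class_a) / len(class_a)
    mb = sum(class_b) / len(class_b)
    sa = sum(abs(x - ma) for x in class_a) / len(class_a)
    sb = sum(abs(x - mb) for x in class_b) / len(class_b)
    return abs(ma - mb) / (sa + sb)

# Feature 1 shifts strongly between normal and faulty; feature 2 barely moves.
f1_normal, f1_fault = [1.0, 1.1, 0.9], [3.0, 3.2, 2.8]
f2_normal, f2_fault = [5.0, 5.4, 4.6], [5.1, 5.5, 4.7]
s1 = sensitivity(f1_normal, f1_fault)
s2 = sensitivity(f2_normal, f2_fault)
```

    Features with the highest scores would then be fed to the SVM classifier.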

  11. Teamwork: improved eQTL mapping using combinations of machine learning methods.

    Directory of Open Access Journals (Sweden)

    Marit Ackermann

    Full Text Available Expression quantitative trait loci (eQTL) mapping is a widely used technique to uncover regulatory relationships between genes. A range of methodologies have been developed to map links between expression traits and genotypes. The DREAM (Dialogue on Reverse Engineering Assessments and Methods) initiative is a community project to objectively assess the relative performance of different computational approaches for solving specific systems biology problems. The goal of one of the DREAM5 challenges was to reverse-engineer genetic interaction networks from synthetic genetic variation and gene expression data, which simulates the problem of eQTL mapping. In this framework, we proposed an approach whose originality resides in the use of a combination (committee) of existing machine learning algorithms. Although it was not the best performer, this method was by far the most precise on average. After the competition, we continued in this direction by evaluating other committees using the DREAM5 data and developed a method that relies on Random Forests and LASSO. It achieved a much higher average precision than the DREAM best performer at the cost of slightly lower average sensitivity.

  12. Study on HRA-based method for assessing digital man-machine interface

    International Nuclear Information System (INIS)

    Li Pengcheng; Dai Licao; Zhang Li; Zhao Ming; Hu Hong

    2014-01-01

    In order to identify the design flaws of a digital man-machine interface (MMI) that may trigger human errors or weaken the performance of operators, an HRA-based method (namely HCR + CREAM + HEC) for assessing digital MMIs was established. Firstly, the HCR method was used to identify, from a whole-event perspective, the risk scenarios with a high human error probability. Then, for the identified high-risk scenarios, CREAM was adopted to determine the various error modes and their error probabilities, and the failure probabilities were ranked. Finally, a human factors engineering checklist for digital MMIs was established according to the characteristics of digital MMIs; it was used to check the digital MMI elements with high error probability in order to identify design flaws, and suggestions for optimization were provided. The results show that the proposed assessment method can quickly and efficiently identify the design flaws of a digital MMI which easily trigger human errors, and that the safety of operation of the digital control system of nuclear power plants can be enhanced by design optimization. (authors)

  13. High Frequency Voltage Injection Methods and Observer Design for Initial Position Detection of Permanent Magnet Synchronous Machines

    DEFF Research Database (Denmark)

    Jin, Xinhai; Ni, Ronggang; Chen, Wei

    2018-01-01

    The information of the initial rotor position is essential for smooth start-up and robust control of Permanent Magnet Synchronous Machines (PMSMs). RoTating Voltage Injection (RTVI) methods in the stationary reference frame have been commonly adopted to detect the initial rotor position at standstill.

  14. Machine learning methods to predict child posttraumatic stress: a proof of concept study.

    Science.gov (United States)

    Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin

    2017-07-10

    The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not previously been applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: can ML methods, as applied in other fields, produce predictive classification models for childhood PTSD? Additionally, we seek to determine whether specific variables with putative causal relations to PTSD can be identified from these predictive classification models. ML predictive classification methods, with causal discovery feature selection, were applied to a data set of 163 children hospitalized with an injury; PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed from subsets of potentially causally relevant features achieved similar predictivity to the best predictive model constructed with all variables. Causal discovery feature selection methods identified 58 variables, of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predicting childhood posttraumatic stress, we were able both to determine predictive classification models for childhood PTSD and to identify several causal variables.
This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to

  15. A novel selection method of seismic attributes based on gray relational degree and support vector machine.

    Directory of Open Access Journals (Sweden)

    Yaping Huang

    Full Text Available The selection of seismic attributes is a key process in reservoir prediction because the prediction accuracy relies on the reliability and credibility of the seismic attributes. However, an effective selection method for useful seismic attributes is still a challenge. This paper presents a novel selection method of seismic attributes for reservoir prediction based on the gray relational degree (GRD) and support vector machine (SVM). The proposed method has a two-hierarchical structure. In the first hierarchy, the primary selection of seismic attributes is achieved by calculating the GRD between seismic attributes and reservoir parameters, and the GRD between the seismic attributes themselves. The principle of the primary selection is that seismic attributes with higher GRD to the reservoir parameters will have smaller GRD between themselves than those with lower GRD to the reservoir parameters. The SVM is then employed in the second hierarchy to perform an interactive error verification using training samples for the purpose of determining the final seismic attributes. A real-world case study was conducted to evaluate the proposed GRD-SVM method. Reliable seismic attributes were selected to predict the coalbed methane (CBM) content in the southern Qinshui basin, China. In the analysis, the instantaneous amplitude, instantaneous bandwidth, instantaneous frequency, and minimum negative curvature were selected, and the predicted CBM content was fundamentally consistent with the measured CBM content. This real-world case study demonstrates that the proposed method is able to effectively select seismic attributes and improve the prediction accuracy. Thus, the proposed GRD-SVM method can be used for the selection of seismic attributes in practice.
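
    The gray relational degree in the first hierarchy can be computed with the standard gray relational coefficient (distinguishing coefficient rho = 0.5), taking the global minimum and maximum deviations over all candidate attributes. The sequences below are toy values, assumed already normalized to comparable scales.

```python
# Gray relational degree of candidate attribute sequences against a
# reference (reservoir parameter) sequence. Toy normalized data.

def gray_relational_degrees(reference, comparisons, rho=0.5):
    """Gray relational grade of each comparison sequence vs. the reference."""
    all_deltas = [[abs(r - c) for r, c in zip(reference, comp)]
                  for comp in comparisons]
    d_min = min(min(row) for row in all_deltas)
    d_max = max(max(row) for row in all_deltas)
    grades = []
    for row in all_deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

reference  = [0.2, 0.5, 0.8, 0.4]       # normalized reservoir parameter
attr_close = [0.25, 0.48, 0.70, 0.45]   # attribute tracking the reference
attr_far   = [0.9, 0.1, 0.2, 0.9]       # unrelated attribute
grades = gray_relational_degrees(reference, [attr_close, attr_far])
```

    Attributes with the highest grades would pass to the SVM verification stage.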

  16. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, an SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
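
    Four of the eleven kernels named above have compact textbook forms, shown below; the parameter defaults (degree, gamma, alpha) are illustrative choices, not values from the study.

```python
# Common textbook forms of four SVM kernel functions.
import math

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, degree=2, c=1.0):
    return (linear(x, y) + c) ** degree

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid(x, y, alpha=0.1, c=0.0):
    return math.tanh(alpha * linear(x, y) + c)

x, y = [1.0, 2.0], [2.0, 1.0]
```

    A recommendation system like the one above maps data-set characteristics to the subset of such kernels expected to perform well.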

  17. Soil washing for brine removal

    International Nuclear Information System (INIS)

    Ayyachamy, J.S.; Atalay, A.; Zaman, M.

    1992-01-01

    During the exploration for oil and thereafter, brine transfer lines get ruptured, releasing brine which contaminates the surrounding soil. The salinity level in brine is very high, sometimes approaching or exceeding that of sea water. Soils contaminated with brine are unproductive and unsuitable for plant growth. Several investigators have documented the pollution of surface water and groundwater due to brine disposal from oil operations and the need to clean up such sites. The objective of this study is to develop a soil washing technique that can be used to remove brine; soils from contaminated sites were collected and used in the study. This paper reports on results which indicate that soil washing using various surface active agents is effective in removing the brine.

  18. Digital signal processing control of induction machine's torque and stator flux utilizing the direct stator flux field orientation method

    Energy Technology Data Exchange (ETDEWEB)

    Seiz, Julie Burger [Union College, Schenectady, NY (United States)

    1997-04-01

    This paper presents a review of the Direct Stator Flux Field Orientation control method. This method can be used to control an induction motor's torque and flux directly and is the application of interest for this thesis. The control method is implemented without the traditional feedback loops and associated hardware. The stator voltage vector is predicted by mathematical calculation and is determined twice per switching period; the switching period is fixed throughout the analysis. The three-phase inverter duty cycle necessary to control the torque and flux of the induction machine is determined by the voltage space vector Pulse Width Modulation (PWM) technique. Transient performance of either the flux or torque requires an alternate modulation scheme, which is also addressed in this thesis. A block diagram of this closed-loop system is provided. 22 figs., 7 tabs.
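    The voltage space vector PWM step mentioned above can be sketched as a textbook dwell-time computation. The notation below (reference vector in polar form, DC-link voltage v_dc, switching period t_s) is an assumption for illustration, not code from the thesis.

```python
import math

def svpwm_times(v_mag, v_angle, v_dc, t_s):
    """Dwell times for the two active vectors and the zero vectors in
    space vector PWM.  v_mag/v_angle: reference voltage vector (polar,
    angle in radians), v_dc: DC-link voltage, t_s: switching period."""
    m = math.sqrt(3) * v_mag / v_dc           # modulation index
    sector = int(v_angle // (math.pi / 3))    # sectors 0..5, each 60 degrees
    theta = v_angle - sector * (math.pi / 3)  # angle inside the sector
    t1 = t_s * m * math.sin(math.pi / 3 - theta)  # first active vector
    t2 = t_s * m * math.sin(theta)                # second active vector
    t0 = t_s - t1 - t2                            # remainder: zero vectors
    return sector, t1, t2, t0
```

    At the midpoint of a sector the two active-vector times are equal, and the three times always sum to the switching period.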

  19. Leaching Behavior of Circulating Fluidised Bed MSWI Air Pollution Control Residue in Washing Process

    Directory of Open Access Journals (Sweden)

    Zhiliang Chen

    2016-09-01

    Full Text Available In this study, air pollution control (APC) residue was subjected to a water washing process to reduce its chloride content. A novel electrical conductivity (EC) measurement method is proposed to monitor the dynamic change of chloride concentration in the leachate as well as the chloride content of the residue. The method applies equally to washing processes with different washing times, liquid/solid ratios and washing frequencies. The results show that washing effectively extracts chloride salts from APC residues, including those from circulating fluidized bed (CFB) municipal solid waste incineration (MSWI). The most appropriate liquid/solid ratio and washing time in the first washing are found to be around 4 L of water per kg of APC residue and 30 min, respectively, and washing twice is required to obtain maximum dissolution. The pH value is the major controlling factor of heavy metal speciation in the leachate, while chloride concentration also affects the speciation of Cd. Water washing causes no perceptible transfer of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) from the APC residue to the leachate. The chloride concentration is strongly correlated with EC, as well as with the concentrations of calcium, sodium and potassium in the washing water. Regression analyses indicate that soluble chloride salts and EC can act as an indirect indicator to monitor the change of chloride concentration and the remaining chloride content, thus contributing to the selection of optimal washing conditions.
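    The EC-based monitoring idea reduces, in its simplest form, to a linear regression of chloride concentration on measured conductivity. The sketch below uses hypothetical calibration numbers, not the paper's data.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration points: EC (mS/cm) vs chloride (g/L).
ec = [2.0, 4.0, 6.0, 8.0]
cl = [1.1, 2.0, 3.1, 3.9]
slope, intercept = linear_fit(ec, cl)
predicted = slope * 5.0 + intercept   # chloride estimated from a new EC reading
```

    Once calibrated, a single EC probe reading gives an immediate chloride estimate, which is what makes EC attractive as an indirect on-line indicator.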

  20. A new method for grain refinement in magnesium alloy: High speed extrusion machining

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yao, E-mail: liuyao@ustb.edu.cn [School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083 (China); Cai, Songlin [China Electric Power Research Institute, State Grid Corporation of China, Beijing 100192 (China); Dai, Lanhong [State Key Laboratory of Nonlinear Mechanics, Institute of Mechanics, Chinese Academy of Science, Beijing 100190 (China)

    2016-01-10

    Magnesium alloys have received broad attention in industry due to their competitive strength-to-density ratio, but poor ductility and limited strength restrict their wide application as engineering materials. A novel severe plastic deformation (SPD) technique, high speed extrusion machining (HSEM), is used here; this method can remedy the aforementioned disadvantages of magnesium alloys in a single processing step. In this work, systematic HSEM experiments with different chip thickness ratios were conducted for magnesium alloy AZ31B. The microstructure of the chips reveals that HSEM is an effective SPD method for producing magnesium alloys with different grain sizes and textures. The magnesium alloy with a bimodal grain size distribution shows improved mechanical properties compared with the initial sample. Electron backscatter diffraction (EBSD) analysis shows that dynamic recrystallization (DRX) governs the grain refinement and resulting hardness in AZ31B. Based on the experimental observations, a new theoretical model is put forward to describe the effect of DRX on materials during HSEM. Compared with the experimental measurements, the theoretical model effectively predicts the mechanical properties of materials after HSEM.

  1. The development of vector based 2.5D print methods for a painting machine

    Science.gov (United States)

    Parraman, Carinna

    2013-02-01

    Recent trends in digitally printed decorative finishes for products, CAD, 3D additive layer manufacturing and research in material perception [1, 2] have produced a growing interest in the accurate rendering of materials and tangible displays. Although current advances in colour management and inkjet printing mean that users can take high-quality colour and resolution in their printed images for granted, digital methods for transferring a photographic coloured image from screen to paper are constrained by pixel count, file size, colorimetric conversion between colour spaces and the gamut limits of input and output devices. This paper considers new approaches to applying alternative colour palettes using a vector-based approach and the application of paint mixtures, towards what could be described as a 2.5D printing method. The objective is not to apply an image to a textured surface, but to make texture and colour integral to the mark that, like a brush, delineates the contours of the image. The paper describes the difference between the ways inks and paints are mixed and applied. When transcribing the fluid appearance of a brush stroke, there is a difference between a halftone printed mark and a painted mark, and surface quality is significant to the subjective qualities perceived when studying the appearance of ink or paint on paper. The paper provides examples of a range of vector marks that are then transcribed into brush strokes by the painting machine.

  2. Combined measurement system for double shield tunnel boring machine guidance based on optical and visual methods.

    Science.gov (United States)

    Lin, Jiarui; Gao, Kai; Gao, Yang; Wang, Zheng

    2017-10-01

    In order to detect the position of the cutting shield at the head of a double shield tunnel boring machine (TBM) during excavation, this paper develops a combined measurement system composed mainly of several optical feature points, a monocular vision sensor, a laser target sensor, and a total station. The elements of the combined system are mounted on the TBM in a suitable sequence, and the position of the cutting shield in the reference total station frame is determined by coordinate transformations. Subsequently, the structure of the feature points and the technique for matching them are expounded, the position measurement method based on monocular vision is presented, and calibration methods for the unknown relationships among the different parts of the system are proposed. Finally, an experimental platform simulating the double shield TBM is established, and accuracy verification experiments are conducted. Experimental results show that the mean deviation of the system is 6.8 mm, which satisfies the requirements of double shield TBM guidance.
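    The chain of coordinate transformations described above can be sketched with homogeneous 4x4 matrices. The poses below are hypothetical, and a planar (z-axis) rotation stands in for the full 3-D calibration used in the real system.

```python
import numpy as np

def transform(rz_deg, t):
    """Homogeneous 4x4 transform: rotation about z (degrees) plus a
    translation t.  A simplification of the full 3-D poses obtained from
    the total station, laser target and vision sensor calibrations."""
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# Hypothetical chain: total station -> laser target -> vision sensor -> shield.
T_station_target = transform(30.0, [100.0, 50.0, 10.0])
T_target_vision = transform(0.0, [2.0, 0.0, 0.5])
T_vision_shield = transform(0.0, [1.0, 0.0, 0.0])

p_shield = np.array([0.0, 0.0, 0.0, 1.0])   # shield origin, homogeneous
p_station = T_station_target @ T_target_vision @ T_vision_shield @ p_shield
```

    Composing the three poses by matrix multiplication expresses the cutting shield position directly in the total station frame, which is the quantity the guidance system needs.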

  3. [Customization of hemodialysis therapy: dialysis is not a washing machine].

    Science.gov (United States)

    Santoro, Antonio

    2018-02-01

    In recent years, the chronic dialysis population has grown not only in number but also in age and in the frequency of co-morbidities such as cardiac disease, vascular pathologies and diabetes. The majority of patients on chronic hemodialysis are over 70 years old and, given the high number of comorbidities, they often exhibit poor tolerance to dialysis treatment. A poorly tolerated dialytic treatment can have side effects that require intensification of the dialysis sessions and many hospitalizations. Consequently, problematic dialysis treatments, besides being harmful for the patient, become economically more detrimental than treatments that appear more expensive but are better tolerated. Nowadays, thanks to the huge developments in dialysis technology, we have powerful tools to ensure effective and minimally symptomatic dialysis treatment for the majority of HD patients. New, highly biocompatible membranes with defined and modular cut-off and/or absorption capacity allow us to provide adequate purification. Moreover, monitoring and biofeedback systems such as blood volume tracking, body temperature monitoring (BTM) and blood pressure monitoring (BPM) can be very useful in reducing the risk of intra-dialytic hypotension and symptoms. Dialytic therapy, like any pharmacological therapy for the chronic patient, must therefore consider the specificity of the patient, based on his metabolic problems, cardiovascular tolerance, residual renal function, and dietary and general compliance. The central aim of the nephrologist is to formulate the best prescription for the individual patient, considering the dialysis modality, the membrane type, the dry weight (ideal post-dialysis body weight), the frequency and duration of the weekly sessions and the technological tools that can optimize the treatment. Copyright by Società Italiana di Nefrologia SIN, Rome, Italy.

  4. Extraction of Plant Physiological Status from Hyperspectral Signatures Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Daniel Doktor

    2014-12-01

    Full Text Available The machine learning method random forest (RF) is applied in order to derive biophysical and structural vegetation parameters from hyperspectral signatures. Hyperspectral data are, among other things, characterized by their high dimensionality and autocorrelation. Common multivariate regression approaches, which usually include only a limited number of spectral indices as predictors, do not make full use of the available information. In contrast, machine learning methods such as RF are supposed to be better suited to extract information on vegetation status. First, vegetation parameters are extracted from hyperspectral signatures simulated with the radiative transfer model PROSAIL. Second, the transferability of these results with respect to laboratory and field measurements is investigated. In situ observations of plant physiological parameters and corresponding spectra are gathered in the laboratory for summer barley (Hordeum vulgare). Field in situ measurements focus on winter crops over several growing seasons. Chlorophyll content (C_ab), Leaf Area Index (LAI) and phenological growth stages (BBCH, for mono- and dicotyledonous plants) are derived from simulated and measured spectra. RF performs very robustly and with a very high accuracy on PROSAIL-simulated data. Furthermore, it is almost unaffected by introduced noise and bias in the data. When applied to laboratory data, the prediction accuracy is still good (C_ab: R^2 = 0.94 / LAI: R^2 = 0.80 / BBCH: R^2 = 0.91), but not as high as for simulated spectra. Transferability to field measurements is given, with prediction levels as high as for laboratory data (C_ab: R^2 = 0.89 / LAI: R^2 = 0.89 / BBCH: R^2 ≈ 0.8). Wavelengths for deriving plant physiological status based on simulated and measured hyperspectral signatures are mostly selected from appropriate spectral regions (both field and laboratory: 700–800 nm regressing on C_ab and 800–1300

  5. Washing technology development for gravel contaminated with uranium

    Energy Technology Data Exchange (ETDEWEB)

    Park, Uk Ryang; Kim, Gye Nam; Kim, Seung Soo; Kim, Wan Suk; Moon, Jai Kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The soil washing method has a short decontamination time and is economical. Other methods include phytoremediation, solidification/stabilization and bioremediation. Phytoremediation and bioremediation are economical but have low remedial efficiency; bioremediation also generates washing wastewater because it requires a washing process to separate microorganisms from the soil. Solidification/stabilization is a commonly used method, but it ultimately increases the volume of waste. As mentioned above, much research on the decontamination of radioactively contaminated soil has been actively pursued. On the other hand, research on decontaminating radioactively contaminated gravel is currently lacking. In this study, we performed basic decontamination experiments on radioactively contaminated gravel. First, we measured the concentration of uranium in gravel contained in uranium-contaminated soil and performed a washing experiment to monitor the tendency of uranium removal. In addition, for gravel with a low uranium decontamination rate, we tried to satisfy the radioactivity concentration criterion for self-disposal of the waste (0.4 Bq/g or less) by performing a washing experiment after only a physical crushing process. We performed washing experiments on gravel whose initial average uranium concentration was 1.3 Bq/g; after washing, the average concentration of uranium was 0.8 Bq/g. To increase the decontamination rate, we crushed the gravel with a jaw crusher and repeated the washing experiments; the results were similar to those without crushing. In addition, it was determined that the smaller the gravel particles, the more efficient the uranium decontamination

  6. Parameter Identification of Ship Maneuvering Models Using Recursive Least Square Method Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Man Zhu

    2017-03-01

    Full Text Available Determination of ship maneuvering models is a tough task in ship maneuverability prediction. Among the principal approaches to estimating ship maneuvering models, system identification combined with full-scale or free-running model tests is preferred. In this contribution, real-time system identification programs using recursive identification methods, such as the recursive least squares method (RLS), are applied to on-line identification of ship maneuvering models. However, this method depends strongly on the object of study and on the initial values of the identified parameters. To overcome this, an intelligent technique, support vector machines (SVM), is first used to estimate initial values of the identified parameters from finite samples. As real measured motion data of the Mariner class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. The wavelet method and empirical mode decomposition (EMD) are used, respectively, to filter the noise-corrupted data. The choice of the sample number for SVM to decide initial values of the identified parameters is extensively discussed and analyzed. With de-noised motion data as input-output training samples, parameters of ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between identification results and true parameter values demonstrates that the ship maneuvering models identified by both RLS and SVM-RLS agree reasonably with the simulated motions of the ship, and that increasing the sample size for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and best convergence.
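    The RLS core of such identification programs can be sketched in a few lines. The example below identifies a synthetic linear model rather than real ship maneuvering data; the true parameter vector [2, -3] and the forgetting factor are made up for illustration.

```python
import numpy as np

def rls(phi_rows, y, lam=0.99, delta=1e3):
    """Recursive least squares: estimate theta in y_k = phi_k . theta + noise.
    lam is the forgetting factor, delta scales the initial covariance."""
    n = len(phi_rows[0])
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for phi, yk in zip(phi_rows, y):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (yk - phi @ theta)    # update estimate
        P = (P - np.outer(k, phi) @ P) / lam      # update covariance
    return theta

# Synthetic regression data: y = 2*x1 - 3*x2 (true parameters [2, -3]).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0])
theta_hat = rls(X, y)
```

    The sensitivity to initial values mentioned in the abstract enters through theta and P at the start of the recursion, which is exactly what the SVM-based initialization is meant to improve.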

  7. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    Science.gov (United States)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from spherical surfaces, are widely used in various optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages: they improve image quality, correct aberrations, expand the field of view, increase the effective distance and make the optical system compact and lightweight. In particular, with the rapid development of space optics, space sensors require higher resolution and larger viewing angles, so aspheric surfaces have become essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. To achieve the final surface accuracy requirement, the aspheric surface must be modified quickly, and high precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2]: geometric ray detection, Hartmann testing, the Ronchi test, the knife-edge method, direct profile testing and interferometry, but all have their disadvantages [6]. In recent years, measurement of aspheric surfaces has become one of the important factors restricting the development of aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection error and low repeatability in measuring aspheric surfaces during coarse grinding, which seriously affects the convergence efficiency of aspherical mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position, probe correction, and selection of the measurement mode and measurement-point distribution. Verified with real engineering examples, this method increases the original industrial

  8. A Bayesian least-squares support vector machine method for predicting the remaining useful life of a microwave component

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-01-01

    Full Text Available Rapid and accurate lifetime prediction of critical components in a system is important to maintaining the system's reliable operation. To this end, many lifetime prediction methods have been developed to handle the various failure-related data collected in different situations. Among these methods, machine learning and Bayesian updating are the most popular. In this article, a Bayesian least-squares support vector machine method that combines least-squares support vector machine with Bayesian inference is developed for predicting the remaining useful life of a microwave component. A degradation model describing the change in the component's power gain over time is developed, and point and interval remaining useful life estimates are obtained with respect to a predefined failure threshold. In our case study, the radial basis function neural network approach is also implemented for comparison purposes. The results indicate that the Bayesian least-squares support vector machine method is more precise and stable in predicting the remaining useful life of this type of component.
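    The least-squares SVM at the heart of the method reduces training to a single linear system. The sketch below implements plain (non-Bayesian) LS-SVM regression with an RBF kernel on a made-up power-gain degradation curve; the hyperparameters gamma and sigma are illustrative, not from the paper.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: solve the linear system
        [ 0   1^T           ] [b    ]   [0]
        [ 1   K + I / gamma ] [alpha] = [y]
    with an RBF kernel K, then predict via f(x) = sum_i alpha_i k(x_i, x) + b."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(x):
        k = np.exp(-((X - x) ** 2).sum(-1) / (2 * sigma ** 2))
        return alpha @ k + b

    return predict

# Toy degradation curve: power gain drifting down linearly over time.
t = np.linspace(0.0, 5.0, 30).reshape(-1, 1)
g = 10.0 - 0.8 * t.ravel()
predict = lssvm_fit(t, g, gamma=100.0)
```

    Because training is a linear solve rather than a quadratic program, LS-SVM pairs naturally with Bayesian inference over its few hyperparameters, which is the combination the paper exploits.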

  9. A method for the automatic separation of the images of galaxies and stars from measurements made with the COSMOS machine

    International Nuclear Information System (INIS)

    MacGillivray, H.T.; Martin, R.; Pratt, N.M.; Reddish, V.C.; Seddon, H.; Alexander, L.W.G.; Walker, G.S.; Williams, P.R.

    1976-01-01

    A method has been developed which allows the computer to distinguish automatically between the images of galaxies and those of stars from measurements made with the COSMOS automatic plate-measuring machine at the Royal Observatory, Edinburgh. Results have indicated that a 90 to 95 per cent separation between galaxies and stars is possible. (author)

  10. Bibliography of papers, reports, and presentations related to point-sample dimensional measurement methods for machined part evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems

    1996-04-01

    The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.

  11. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...

  12. Neuroanatomical heterogeneity of schizophrenia revealed by semi-supervised machine learning methods.

    Science.gov (United States)

    Honnorat, Nicolas; Dong, Aoyan; Meisenzahl-Lechner, Eva; Koutsouleris, Nikolaos; Davatzikos, Christos

    2017-12-20

    Schizophrenia is associated with heterogeneous clinical symptoms and neuroanatomical alterations. In this work, we aim to disentangle the patterns of neuroanatomical alterations underlying a heterogeneous population of patients using a semi-supervised clustering method. We apply this strategy to a cohort of patients with schizophrenia of varying disease duration, and we describe the neuroanatomical, demographic and clinical characteristics of the subtypes discovered. We analyze the neuroanatomical heterogeneity of 157 patients diagnosed with schizophrenia, relative to a control population of 169 subjects, using a machine learning method called CHIMERA. CHIMERA clusters the differences between patients and a demographically-matched population of healthy subjects, rather than clustering patients themselves, thereby specifically assessing disease-related neuroanatomical alterations. Voxel-Based Morphometry was conducted to visualize the neuroanatomical patterns associated with each group. The clinical presentation and demographics of the groups were then investigated. Three subgroups were identified. The first two differed substantially: one involved predominantly temporal-thalamic-peri-Sylvian regions, whereas the other involved predominantly frontal regions and the thalamus. Both subtypes included primarily male patients. The third pattern was a mix of these two, presented milder neuroanatomic alterations and comprised a comparable number of men and women. VBM and statistical analyses suggest that these groups could correspond to different neuroanatomical dimensions of schizophrenia. Our analysis suggests that schizophrenia presents distinct neuroanatomical variants. This variability points to the need for a dimensional neuroanatomical approach using data-driven, mathematically principled multivariate pattern analysis methods, and should be taken into account in clinical studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    Science.gov (United States)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on the conformal mapping (CM) method, for accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method predicts the magnetic flux density and EC loss precisely.

  14. Development of the pressure-time method as a relative and absolute method for low-head hydraulic machines

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Pontus [Poeyry SwedPower AB, Stockholm (Sweden); Cervantes, Michel [Luleaa Univ. of Technology, Luleaa (Sweden)

    2013-02-15

    The pressure-time method is an absolute method commonly used for flow measurements in power plants. The method determines the flow rate by measuring the pressure and estimating the losses between two sections in the penstock during a closure of the guide vanes. The method has limitations according to the IEC41 standard, which make it difficult to use at Swedish plants, where the head is generally low. This means that there is limited experience with and knowledge of this method in Sweden, where the Winter-Kennedy method is usually used. For several years, Luleaa University of Technology has worked actively on the development of the pressure-time method for low-head hydraulic machines, with encouraging results. The focus has been on decreasing the distance between the measuring sections and on the evaluation of the viscous losses. Measurements were performed on a pipe test rig (D = 0.3 m) in a laboratory under well-controlled conditions. The method was also tested as a relative method by measuring the pressure between the free surface and a section in the penstock without knowing the exact geometry, i.e., the pipe factor. Such measurements may be simple to perform, as most inlet spiral casings have pressure taps. Furthermore, the viscous losses do not need to be accurately determined as long as they are handled similarly between the measurements. The pressure-time method may thus become an alternative to the Winter-Kennedy method.
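    The principle behind the pressure-time (Gibson) method is that the pre-closure flow rate follows from integrating the pressure difference between the two sections during closure, Q = (A / (rho * L)) * integral of (dp + losses) dt. A minimal sketch with synthetic numbers (not plant data), with losses neglected:

```python
def trapezoid(values, dt):
    """Trapezoidal integration of equally spaced samples."""
    return dt * (sum(values) - 0.5 * (values[0] + values[-1]))

rho = 1000.0       # water density, kg/m^3
area = 0.0707      # pipe cross-section, m^2 (about pi*D^2/4 for D = 0.3 m)
length = 10.0      # distance between measuring sections, m (illustrative)

# Fake pressure-difference trace (Pa) sampled at 100 Hz during the closure:
# a constant 5 kPa for 2 s, losses neglected for simplicity.
dt = 0.01
dp = [5000.0] * 200

q = area / (rho * length) * trapezoid(dp, dt)   # flow rate, m^3/s
```

    In the real method the loss term must be estimated as well; the relative variant described in the abstract sidesteps the unknown geometry by keeping the pipe factor constant between measurements.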

  15. Improving the accuracy of myocardial perfusion scintigraphy results by machine learning method

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has proven over the last decade to be a useful tool in many fields of decision making, including some fields of medicine, where its decision accuracy usually exceeds that of humans. Our aim was to assess the applicability of ML to interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. The data of 327 patients undergoing planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way, and the sensitivity, specificity and accuracy of the investigation were computed by comparison with the results of coronary angiography. The data were then digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. As ML can handle any number of attributes simultaneously, all available disease-related data (history, habitus, risk factors, stress results) were added, and the sensitivity, specificity and accuracy of scintigraphy were computed in the same way. The results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
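    A naive Bayesian classifier of the kind used in the study can be sketched in a few lines for categorical attributes. The toy features and labels below are illustrative, not the patient data, and the add-one smoothing scheme is a simple choice for the sketch.

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Naive Bayesian classifier for categorical features:
    P(c|x) is proportional to P(c) * product_i P(x_i|c),
    with add-one smoothing of the conditional counts."""
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # (feature index, class) -> value counts
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, c)][v] += 1
    n = len(labels)

    def classify(row):
        best, best_p = None, -1.0
        for c, nc in class_counts.items():
            p = nc / n
            for i, v in enumerate(row):
                counts = feat_counts[(i, c)]
                p *= (counts[v] + 1) / (nc + len(counts) + 1)
            if p > best_p:
                best, best_p = c, p
        return best

    return classify

# Toy data: (stress result, risk factor) -> diagnosis.  Purely illustrative.
rows = [("abnormal", "high"), ("abnormal", "low"),
        ("normal", "low"), ("normal", "high")]
labels = ["cad", "cad", "healthy", "healthy"]
classify = train_nb(rows, labels)
```

    The per-attribute factorization is what lets the classifier absorb "whatever number" of extra attributes (history, risk factors, stress results) without retraining cost exploding, as the abstract notes.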

  16. Machine vision method for online surface inspection of easy open can ends

    Science.gov (United States)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

    The easy open can end manufacturing process in the food canning sector currently relies on a manual, non-destructive testing procedure to guarantee can end repair coating quality: a surface inspection performed visually by human inspectors. Due to the high production rate (100 to 500 ends per minute), only a small part of each lot is verified (statistical sampling). An automatic, online inspection system based on machine vision has therefore been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. The surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, offers interpretability so that operators can find failure causes and reduce the mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.

  17. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods.

    Science.gov (United States)

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-01-01

    In-silico bitterant prediction has received considerable attention due to the expensive and laborious experimental screening of bitterants. In this work, we collect a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by the Y-randomization test and applicability domain analysis. One of the best consensus models affords an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For the automatic prediction of bitterants, a graphic program "e-Bitter" is developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and e-Bitter is the first free stand-alone software of its kind for the experimental food scientist.
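    Consensus voting itself is simple: each fingerprint-based model votes and the majority label wins. A minimal sketch with stand-in classifiers (the tie-breaking rule toward "bitterless" is an arbitrary choice for this illustration):

```python
def consensus_predict(classifiers, sample):
    """Majority vote over several trained classifiers; ties resolve to
    'bitterless' (an arbitrary convention for this sketch)."""
    votes = [clf(sample) for clf in classifiers]
    n_bitter = votes.count("bitter")
    return "bitter" if n_bitter > len(votes) / 2 else "bitterless"

# Stand-ins for trained models (e.g. a deep net, a random forest, an SVM).
clf_a = lambda s: "bitter"
clf_b = lambda s: "bitter"
clf_c = lambda s: "bitterless"
label = consensus_predict([clf_a, clf_b, clf_c], sample="CCO")
```

    The appeal of the consensus is that uncorrelated errors of the individual models tend to cancel, which is consistent with the high MCC the paper reports for its best consensus model.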

  18. Using Machine Learning Methods Jointly to Find Better Set of Rules in Data Mining

    Directory of Open Access Journals (Sweden)

    SUG Hyontai

    2017-01-01

    Full Text Available Rough set-based data mining algorithms are among the widely accepted machine learning technologies because of their strong mathematical background and their capability of finding optimal rules from the given data alone, leaving no room for prejudiced views to be imposed on the data. However, because the algorithms find rules very precisely, we may confront the overfitting problem. Association rule algorithms, on the other hand, find rules of association between sets of items in a database. These algorithms find itemsets that occur more often than a given minimum support, so that with an appropriately chosen minimum support they can find the itemsets in reasonable time even for very large databases. To overcome the overfitting problem in rough set-based algorithms, we first find large itemsets and then select attributes that cover the large itemsets. Using only the selected attributes, we may find a better set of rules based on rough set theory. Results from experiments support our suggested method.
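    The "find large itemsets, then select the attributes that cover them" step can be sketched with a brute-force support count; practical Apriori-style algorithms prune the search, which this illustration does not.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return every itemset whose support (fraction of transactions that
    contain it) reaches min_support.  Brute force, fine for small data."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    result = {}
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            s = sum(1 for t in transactions if set(combo) <= t) / n
            if s >= min_support:
                result[combo] = s
    return result

# Toy transaction database over three attributes/items.
transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
freq = frequent_itemsets(transactions, min_support=0.5)
attrs = {i for itemset in freq for i in itemset}   # attributes covering the itemsets
```

    The covering attribute set `attrs` would then feed the rough set-based rule induction, restricting it to attributes with broad support and thereby damping overfitting.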

  19. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods

    Directory of Open Access Journals (Sweden)

    Suqing Zheng

    2018-03-01

    Full Text Available In-silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collect a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by a Y-randomization test and applicability-domain analysis. One of the best consensus models affords an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program, “e-Bitter”, is developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and the first free stand-alone software of this kind for experimental food scientists.

  20. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  1. Prediction of Backbreak in Open-Pit Blasting Operations Using the Machine Learning Method

    Science.gov (United States)

    Khandelwal, Manoj; Monjezi, M.

    2013-03-01

    Backbreak is an undesirable phenomenon in blasting operations. It can cause instability of mine walls, falling of machinery, improper fragmentation, reduced drilling efficiency, etc. The existence of various effective parameters and their unknown relationships are the main reasons for the inaccuracy of empirical models. Presently, the application of new approaches such as artificial intelligence is highly recommended. In this paper, an attempt has been made to predict backbreak in blasting operations of the Soungun iron mine, Iran, incorporating rock properties and blast design parameters using the support vector machine (SVM) method. To investigate the suitability of this approach, the predictions by SVM were compared with multivariate regression analysis (MVRA). The coefficient of determination (CoD) and the mean absolute error (MAE) were taken as performance measures. It was found that the CoD between measured and predicted backbreak was 0.987 by SVM and 0.89 by MVRA, whereas the MAE was 0.29 by SVM and 1.07 by MVRA.
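
The two performance measures used in this comparison are easy to state in code. The measured/predicted backbreak values below are invented for illustration, not the paper's data:

```python
# Sketch of the CoD and MAE performance measures on hypothetical
# measured/predicted backbreak values (meters).
def coefficient_of_determination(measured, predicted):
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

def mean_absolute_error(measured, predicted):
    return sum(abs(m - p) for m, p in zip(measured, predicted)) / len(measured)

measured  = [2.0, 3.5, 4.0, 5.5, 6.0]
predicted = [2.1, 3.4, 4.3, 5.2, 6.1]

print(round(coefficient_of_determination(measured, predicted), 3))
print(round(mean_absolute_error(measured, predicted), 3))
```

A CoD near 1 and an MAE near 0 indicate a good model, which is how the SVM (0.987, 0.29) outperforms MVRA (0.89, 1.07) above.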

  2. Tunnelling support methods and their possible application to machine rock face excavation in coal mining

    Energy Technology Data Exchange (ETDEWEB)

    Maidl, B.; Edeling, H.

    1981-06-11

    Mechanized drivage is possible even in friable rock if protective measures are taken directly behind the drill bit. Present arch-type supports are best reinforced with sprayed concrete, as it takes up rock deformations. In this case, however, the question soon arises whether arch-type steel supports should be used at all. So far, mature solutions have not been found, but they will be possible if the mining industry is really interested. Sprayed concrete with admixtures of reinforcing steel fibers plays a major role here, as it protects miners' heads already at an early stage and is suitable as a support even at a later stage. Equally interesting would be reinforced concrete pumped behind advancing formwork. A combination of both techniques may turn out to be the most suitable method to replace arch-type supports. A problem of particular importance is machine bracing against the fresh concrete lining. If the concrete is placed directly behind the drill bit, it is only 4 to 6 h old when it reaches the bracing device, i.e., its compressive strength is lower than the contact pressure of present mining machinery. It may be difficult to find a solution here, but one is considered possible. With shell concrete, the formwork should be constructed so as to withstand the contact pressure.

  3. Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods.

    Science.gov (United States)

    Kerr, Jacqueline; Carlson, Jordan; Godbole, Suneeta; Cadmus-Bertram, Lisa; Bellettiere, John; Hartman, Sheri

    2018-02-13

    To improve estimates of sitting time from hip-worn accelerometers used in large cohort studies, machine learning methods developed on free-living activPAL data were employed. Thirty breast cancer survivors concurrently wore a hip-worn accelerometer and a thigh-worn activPAL for 7 days. A random forest classifier, trained on the activPAL data, was employed to detect sitting, standing, and sit-stand transitions in 5-second windows of the hip-worn accelerometer data. The classifier estimates were compared to the standard accelerometer cut point, and significant differences across different bout lengths were investigated using mixed-effect models. Overall, the algorithm predicted the postures with moderate accuracy (stepping 77%, standing 63%, sitting 67%, sit-to-stand 52%, and stand-to-sit 51%). Daily-level analyses indicated that errors in transition estimates occurred only during sitting bouts of 2 minutes or less. The standard cut point differed significantly from the activPAL across all bout lengths, overestimating short bouts and underestimating long bouts. This is among the first algorithms for sitting and standing for hip-worn accelerometer data to be trained entirely on free-living activPAL data. The new algorithm detected prolonged sitting, which has been shown to be most detrimental to health. Further validation and training in larger cohorts is warranted.
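
The bout-length analysis described above starts from per-window posture labels. A minimal sketch of turning 5-second window labels into sitting-bout durations (the label sequence is illustrative, not the study's data):

```python
# Sketch: convert per-window posture labels (5-s windows, as in the study)
# into sitting-bout durations, so short bouts (<= 2 min) can be examined
# separately. The label sequence below is invented for illustration.
from itertools import groupby

WINDOW_S = 5  # seconds per classified window

def sitting_bouts(labels):
    """Durations (seconds) of consecutive runs of 'sit' labels."""
    return [sum(1 for _ in run) * WINDOW_S
            for label, run in groupby(labels) if label == "sit"]

labels = ["stand"] * 4 + ["sit"] * 30 + ["stand"] * 2 + ["sit"] * 6
print(sitting_bouts(labels))  # two bouts: 150 s and 30 s
```

Filtering this list at the 120-second threshold separates the short bouts, where the transition errors were concentrated, from the prolonged sitting of primary interest.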

  4. Yield Estimation of Sugar Beet Based on Plant Canopy Using Machine Vision Methods

    Directory of Open Access Journals (Sweden)

    S Latifaltojar

    2014-09-01

    Full Text Available Crop yield estimation is one of the most important parameters for information and resource management in precision agriculture. This information is employed for optimizing field inputs for successive cultivations. In the present study, the feasibility of sugar beet yield estimation by means of machine vision was studied. For the field experiments, strip images were taken during the growth season at one-month intervals, and an image of the horizontal view of the plant canopy was prepared at the end of each month. At the end of the growth season, beet roots were harvested and the correlation between the sugar beet canopy in each month of the growth period and the corresponding weight of the roots was investigated. Results showed a strong correlation between beet yield and the green surface area of autumn-cultivated sugar beets. The highest coefficient of determination was 0.85 at three months before harvest. In order to assess the accuracy of the final model, a second year of study was performed with the same methodology. The results depicted a strong relationship between the actual and estimated beet weights, with R2 = 0.94. The model estimated beet yield with about 9 percent relative error. It is concluded that this method has appropriate potential for estimation of sugar beet yield based on band imaging prior to harvest.
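
The yield model described above is essentially a regression of root weight on canopy green area, evaluated by relative error. A least-squares sketch on made-up numbers (the areas and yields below are not the study's data):

```python
import numpy as np

# Illustrative least-squares fit of root yield against canopy green area,
# mirroring the kind of model described above (all numbers invented).
green_area = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.8])   # green canopy area
root_yield = np.array([21., 28., 39., 47., 60., 69.])    # t/ha

slope, intercept = np.polyfit(green_area, root_yield, 1)
predicted = slope * green_area + intercept

# Mean relative error of the fitted model, as a percentage.
rel_error = float(np.mean(np.abs(predicted - root_yield) / root_yield) * 100)
print(round(float(slope), 2), round(rel_error, 1))
```

The study fits such a model per month of the growth period and keeps the month with the highest coefficient of determination for pre-harvest prediction.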

  5. e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-learning Methods

    Science.gov (United States)

    Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu

    2018-03-01

    In-silico bitterant prediction has received considerable attention because experimental screening of bitterants is expensive and laborious. In this work, we collect a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by a Y-randomization test and applicability-domain analysis. One of the best consensus models affords an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program, “e-Bitter”, is developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and the first free stand-alone software of this kind for experimental food scientists.

  6. Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models

    Science.gov (United States)

    Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus

    2017-04-01

    Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, significant uncertainties in input data and in the simulation of dust activation and transport limit the performance of numerical models in dust prediction. The present study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep Blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry plains of Iraq, known as the main origin of the recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: the Linear regression Model (LM), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multivariate Adaptive Regression Splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation, and the 9-month SPEI drought index are selected for dust (AOD) modeling by the MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in the MLA predictions. The data set was divided
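
A much-simplified stand-in for the feature-importance step is to rank candidate predictors by absolute correlation with the target. The data below is synthetic and the method is not the study's RFE, only an illustration of the ranking idea:

```python
import numpy as np

# Simplified stand-in for the feature-selection step: rank candidate
# predictors by absolute Pearson correlation with the target (AOD).
# All data is synthetic; the study used Recursive Feature Elimination.
rng = np.random.default_rng(0)
n = 200
ndvi = rng.normal(size=n)
soil_moisture = rng.normal(size=n)
albedo = rng.normal(size=n)
# Synthetic AOD driven mostly by NDVI, weakly by soil moisture.
aod = -0.8 * ndvi - 0.2 * soil_moisture + 0.1 * rng.normal(size=n)

features = {"ndvi": ndvi, "soil_moisture": soil_moisture, "albedo": albedo}
ranking = sorted(features,
                 key=lambda name: abs(np.corrcoef(features[name], aod)[0, 1]),
                 reverse=True)
print(ranking)
```

Real RFE instead refits the model repeatedly, dropping the weakest feature each round, which also captures non-linear and joint effects that plain correlation misses.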

  7. On Intelligent Design and Planning Method of Process Route Based on Gun Breech Machining Process

    Science.gov (United States)

    Hongzhi, Zhao; Jian, Zhang

    2018-03-01

    This paper presents an approach to the intelligent design and planning of process routes, based on the gun breech machining process, to address several problems such as the complexity of gun breech machining, the tedium of route design, and the long cycle of the traditional, hard-to-manage process route. Based on the gun breech machining process, an intelligent process-route design and planning system is developed using DEST and VC++. The system includes two functional modules: intelligent process-route design and process-route planning. The intelligent design module analyzes the gun breech machining process and encodes breech process knowledge to build the knowledge base and inference engine, from which a gun breech process route is output automatically. On the basis of the intelligent design module, the final process route is made, edited, and managed in the planning module.
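
A knowledge base plus inference engine of the kind this system implies can be sketched as forward-chaining rules. The facts and rules below are invented placeholders, not the actual gun-breech process knowledge:

```python
# Toy forward-chaining inference engine: the facts and rules are invented
# placeholders, not the system's actual gun-breech process knowledge base.
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["deep_bore", "high_precision"], "rough_bore_then_finish_bore"),
    (["rough_bore_then_finish_bore", "hardened_steel"], "grind_after_heat_treat"),
]
facts = forward_chain(["deep_bore", "high_precision", "hardened_steel"], rules)
print("grind_after_heat_treat" in facts)
```

The derived facts would correspond to the process-route steps emitted by the design module and then edited in the planning module.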

  8. Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining

    Directory of Open Access Journals (Sweden)

    Qiaokang Liang

    2016-11-01

    Full Text Available Multi-component cutting force sensing systems in manufacturing processes applied to cutting tools are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of cutting capabilities of computer numerically controlled (CNC tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.

  9. Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining.

    Science.gov (United States)

    Liang, Qiaokang; Zhang, Dan; Wu, Wanneng; Zou, Kunlin

    2016-11-16

    Multi-component cutting force sensing systems in manufacturing processes applied to cutting tools are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.

  10. Performances of the PCA method in electrical machines diagnosis using Matlab

    OpenAIRE

    Ramahaleomiarantsoa , J.F.; Sambatra , Eric Jean Roy; Heraud , Nicolas; Razafimahenina , Jean Marie

    2012-01-01

    Nowadays, fault diagnosis is almost an inevitable step in maintaining the optimal and safe operation of every physical system. Electrical machines, the main elements of every electromechanical system, are among the research topics of many academic and industrial laboratories because of the important roles they play in industrial processes. Many of these machine technologies are mature and well mastered. However, the machines remain the seat of several electrical and mechanical faults [1-4]. Thus...

  11. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction in many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNNs) ranked higher than SVMs, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. 
These results also suggest the need for assessing deep learning further

  12. Optimization of Surface Finish in Turning Operation by Considering the Machine Tool Vibration using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Muhammad Munawar

    2012-01-01

    Full Text Available Optimization of surface roughness has been one of the primary objectives in most machining operations. Poor control of the desired surface roughness generates nonconforming parts and results in increased cost and loss of productivity due to rework or scrap. The surface roughness value is a result of several process variables, among which machine tool condition is one of the significant variables. In this study, experimentation was carried out to investigate the effect of machine tool condition on surface roughness. The variable used to represent the machine tool's condition was vibration amplitude. The input parameters used, besides vibration amplitude, were feed rate and insert nose radius. Cutting speed and depth of cut were kept constant. Based on a Taguchi orthogonal array, a series of experiments was designed and performed on AISI 1040 carbon steel bar at default and induced machine tool vibration amplitudes. ANOVA (Analysis of Variance) revealed that vibration amplitude and feed rate had a moderate effect on the surface roughness, and insert nose radius had the most significant effect on the surface roughness. It was also found that a machine tool with low vibration amplitude produced better surface roughness. An insert with a larger nose radius produced better surface roughness at low feed rate.
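
Taguchi analyses of surface roughness typically score each factor setting with the "smaller-the-better" signal-to-noise ratio, S/N = -10·log10(mean(y²)). The roughness replicates below are invented to illustrate the computation:

```python
import math

# Taguchi "smaller-the-better" signal-to-noise ratio for surface roughness:
# S/N = -10 * log10(mean(y_i^2)). The Ra replicates below are illustrative.
def sn_smaller_is_better(values):
    return -10 * math.log10(sum(v * v for v in values) / len(values))

low_vibration  = [1.2, 1.3, 1.1]   # Ra replicates at low vibration amplitude
high_vibration = [2.0, 2.4, 2.2]   # Ra replicates at induced vibration

# A larger S/N ratio indicates the more robust (better) setting.
print(sn_smaller_is_better(low_vibration) > sn_smaller_is_better(high_vibration))
```

Averaging this S/N ratio per factor level over the orthogonal array rows is what identifies the dominant factors, alongside the ANOVA significance test.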

  13. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    Science.gov (United States)

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

    Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods that provides interested researchers with an easier entry path into the HAR and ED fields. This is achieved by using transfer learning to remove the need for deep DSP skills: a convolutional neural network (CNN) pretrained for machine vision purposes is reused for the exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected. The ability of the

  14. A MACHINE-LEARNING METHOD TO INFER FUNDAMENTAL STELLAR PARAMETERS FROM PHOTOMETRIC LIGHT CURVES

    International Nuclear Information System (INIS)

    Miller, A. A.; Bloom, J. S.; Richards, J. W.; Starr, D. L.; Lee, Y. S.; Butler, N. R.; Tokarz, S.; Smith, N.; Eisner, J. A.

    2015-01-01

    A fundamental challenge for wide-field imaging surveys is obtaining follow-up spectroscopic observations: there are >10^9 photometrically cataloged sources, yet modern spectroscopic surveys are limited to ∼a few ×10^6 targets. As we approach the Large Synoptic Survey Telescope era, new algorithmic solutions are required to cope with the data deluge. Here we report the development of a machine-learning framework capable of inferring fundamental stellar parameters (T_eff, log g, and [Fe/H]) using photometric-brightness variations and color alone. A training set is constructed from a systematic spectroscopic survey of variables with Hectospec/Multi-Mirror Telescope. In sum, the training set includes ∼9000 spectra, for which stellar parameters are measured using the SEGUE Stellar Parameters Pipeline (SSPP). We employed the random forest algorithm to perform a non-parametric regression that predicts T_eff, log g, and [Fe/H] from photometric time-domain observations. Our final optimized model produces a cross-validated rms error (RMSE) of 165 K, 0.39 dex, and 0.33 dex for T_eff, log g, and [Fe/H], respectively. Examining the subset of sources for which the SSPP measurements are most reliable, the RMSE reduces to 125 K, 0.37 dex, and 0.27 dex, respectively, comparable to what is achievable via low-resolution spectroscopy. For variable stars this represents a ≈12%-20% improvement in RMSE relative to models trained with single-epoch photometric colors. As an application of our method, we estimate stellar parameters for ∼54,000 known variables. We argue that this method may convert photometric time-domain surveys into pseudo-spectrographic engines, enabling the construction of extremely detailed maps of the Milky Way, its structure, and history.
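
Non-parametric regression from photometric features can be illustrated with a tiny k-nearest-neighbour regressor, a much simpler stand-in for the random forest used in the paper. The features and T_eff labels below are synthetic:

```python
import numpy as np

# Tiny k-nearest-neighbour regressor as a stand-in for the random forest
# described above: predict T_eff from two photometric features.
def knn_predict(X_train, y_train, x, k=3):
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(np.mean(y_train[nearest]))

# Hypothetical (color, variability-amplitude) features with T_eff labels (K).
X = np.array([[0.2, 0.1], [0.3, 0.2], [0.9, 0.5], [1.0, 0.6], [1.1, 0.4]])
y = np.array([7200., 6900., 5100., 4900., 5000.])

print(knn_predict(X, y, np.array([0.95, 0.5])))  # averages the three cool stars
```

A random forest replaces the neighbour average with an ensemble of decision trees, which handles many features and non-linear boundaries far better, but the prediction-from-similar-stars intuition is the same.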

  15. Comparison of Machine Learning methods for incipient motion in gravel bed rivers

    Science.gov (United States)

    Valyrakis, Manousos

    2013-04-01

    Soil erosion and sediment transport in natural gravel bed streams are important processes which affect both the morphology and the ecology of the Earth's surface. For gravel bed rivers at near-incipient flow conditions, particle entrainment dynamics are highly intermittent. This contribution reviews the use of modern Machine Learning (ML) methods implemented for short-term prediction of entrainment instances of individual grains exposed in fully developed near-boundary turbulent flows. Results obtained by network architectures of variable complexity based on two different ML methods, namely the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are compared in terms of different error and performance indices, computational efficiency and complexity, as well as predictive accuracy and forecast ability. Different model architectures are trained and tested with experimental time series obtained from mobile particle flume experiments. The experimental setup consists of a Laser Doppler Velocimeter (LDV) and a laser optics system, which synchronously acquire data on the instantaneous flow and the particle response, respectively. The former records the flow velocity components directly upstream of the test particle, while the latter tracks the particle's displacements. The lengthy experimental data sets (millions of data points) are split into training and validation subsets used to perform the corresponding learning and testing of the models. It is demonstrated that the ANFIS hybrid model, which is based on neural learning and fuzzy inference principles, better predicts the critical flow conditions above which sediment transport is initiated. In addition, it is illustrated that empirical knowledge can be extracted, validating the theoretical assumption that particle ejections occur due to energetic turbulent flow events. 
Such a tool may find application in management and regulation of stream flows downstream of dams for stream

  16. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    Science.gov (United States)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and the 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-Earth environment that have been found increasingly important in affecting human life in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the widespread use of CCD cameras in the 1990s, 35-mm film was the major medium for storing images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period of 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which is almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically: Optical Character Recognition (OCR), Classification Trees, and TensorFlow. The latter two are machine learning approaches that are nowadays very popular in the pattern recognition area. We will present some sample images and the results of clock recognition from all three methods.
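
The simplest baseline for reading a fixed-font time stamp is template matching: classify each digit glyph by its nearest stored template. The 3x3 glyphs below are invented toys; the actual systems used OCR, classification trees, and TensorFlow on real image crops:

```python
import numpy as np

# Minimal template-matching digit reader, a toy version of the
# clock-recognition task. The 3x3 binary glyphs are invented.
TEMPLATES = {
    "1": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "7": np.array([[1, 1, 1],
                   [0, 0, 1],
                   [0, 1, 0]]),
}

def read_digit(glyph):
    """Return the template label with the fewest mismatching pixels."""
    return min(TEMPLATES, key=lambda d: int(np.sum(TEMPLATES[d] != glyph)))

noisy_one = np.array([[0, 1, 0],
                      [1, 1, 0],   # one flipped pixel of noise
                      [0, 1, 0]])
print(read_digit(noisy_one))
```

Learned classifiers replace the fixed templates with decision boundaries fitted to many labeled crops, which is what makes them robust to the film degradation and exposure variation in a 14-million-image archive.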

  17. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    Full Text Available An important issue for agricultural planning purposes is accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression, and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that the M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
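
The four validation metrics named above are straightforward to compute side by side. The actual/predicted yields below are hypothetical, used only to exercise the formulas:

```python
import numpy as np

# The four validation metrics named above, on hypothetical yield data.
def metrics(actual, predicted):
    err = predicted - actual
    rmse = float(np.sqrt(np.mean(err ** 2)))
    # RRSE: error relative to always predicting the mean, as a percentage.
    rrse = float(np.sqrt(np.sum(err ** 2)
                         / np.sum((actual - actual.mean()) ** 2))) * 100
    mae = float(np.mean(np.abs(err)))
    r = float(np.corrcoef(actual, predicted)[0, 1])
    return rmse, rrse, mae, r

actual    = np.array([10., 12., 15., 18., 20.])
predicted = np.array([11., 12., 14., 19., 20.])
rmse, rrse, mae, r = metrics(actual, predicted)
print(round(rmse, 3), round(rrse, 1), round(mae, 2), round(r, 3))
```

Ranking models on several complementary metrics, as the paper does, guards against a model that happens to score well on one criterion alone.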

  18. Using Standard-Sole Cost Method for Performance Gestion Accounting and Calculation Cost in the Machine Building Industry

    Directory of Open Access Journals (Sweden)

    Cleopatra Sendroiu

    2006-07-01

    Full Text Available The main purpose of improving and varying cost calculation methods in the machine building industry is to make them more operational and efficient in supplying the information necessary to the management in taking its decisions. The cost calculation methods currently used in machine building plants - the global method and the per-order method - by which a historical cost is determined a posteriori, used for deducting and post factum justification of manufacturing expenses, do not offer the management the possibility to fully satisfy its need for information. We are talking about a change of conception in applying certain systems, methods and work techniques, according to the needs of efficient administration of production and of the plant seen as a whole. The standard-cost method best answers the needs of effective management of the value side of the manufacturing process and of raising economic efficiency. We consider that, in the machine building industry, these objectives can be achieved by using the standard-sole cost alternative of the standard-cost method.

  19. Using Standard-Sole Cost Method for Performance Gestion Accounting and Calculation Cost in the Machine Building Industry

    Directory of Open Access Journals (Sweden)

    Aureliana Geta Roman

    2006-09-01

    Full Text Available The main purpose of improving and varying cost calculation methods in the machine building industry is to make them more operational and efficient in supplying the information necessary to the management in taking its decisions. The present cost calculation methods used in the machine building plants – the global method and the method per orders – by which a historical cost is determined a posteriori and used in deducting and post factum justification of manufacturing expenses, do not offer the management the possibility to fully satisfy its need for information. We are talking about a change of conception in applying certain systems, methods and work techniques, according to the needs of efficient administration of production and the plant seen as a whole. The standard-cost method best answers the needs of effective management of the value side of the manufacturing process and of raising economic efficiency. We consider that, in the machine building industry, these objectives can be achieved by using the standard-sole cost alternative of the standard-cost method.

  20. Machining Error Compensation Methods Using On-Machine Measurement

    Science.gov (United States)

    Guiassa, Rachid

    On-machine measurement is used to inspect the part immediately after the cut, without part removal and additional setups. It detects the machining defects visible to the machine tool. The deflection of the machine-tool-part system and the dimensional inaccuracy of the cutting tool are the most important sources of these defects. The machined part can be inspected at the semi-finishing cut level to identify systematic defects that may occur later at the finishing cut. Therefore, corrective actions can be derived to anticipate the expected error in order to produce a part with acceptable accuracy. For industrial profitability, the measurement and compensation tasks must be done under the closed-door machining requirement, without human intervention. This thesis aims to develop mathematical models that use the inspection data of previous cuts to formulate the compensation of the finishing cut. The goal of the compensation is to anticipate the expected error, which is decomposed into two components. One is independent of the depth of cut and is related to the cutting tool dimension, such as wear. The other is dependent on the cutting depth, such as the deflection. A general model is presented which relies solely on on-machine probing data from semi-finishing cuts to compensate the final cut. A variable cutting compliance coefficient relates the total system deflection to the depth of cut in a multi-cut process. It is used to estimate the compensation of the tool path. The model is able to take into account the effect of the cutting depth variation and the material removal in the estimation of the error at the finishing cut. In order to generate a continuous compensated tool path from discrete measurements, a B-Spline deformation technique is adapted to the available data and applied to compute the compensated tool path according to a restricted number of discrete compensation vectors.
The results show that the on-machine probed errors can be significantly reduced using the
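As a rough illustration of the final step described above (not the thesis's actual B-spline deformation scheme), an interpolating spline fitted through a handful of hypothetical discrete compensation vectors yields a continuous compensation profile that can be subtracted from the nominal tool path:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Discrete compensation vectors probed along the tool path parameter u
# (hypothetical values, in mm)
u_probe = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
comp = np.array([0.000, 0.012, 0.020, 0.015, 0.004])

# Cubic B-spline through the sparse compensation vectors
spline = make_interp_spline(u_probe, comp, k=3)

# Evaluate a continuous compensation profile at the interpolation points
u_dense = np.linspace(0.0, 1.0, 101)
comp_dense = spline(u_dense)

# Compensated path: offset the nominal contour by the predicted error
nominal = np.sin(np.pi * u_dense) * 10.0   # hypothetical nominal contour (mm)
compensated = nominal - comp_dense
```

`make_interp_spline` here is a generic interpolating spline; the thesis adapts a B-spline deformation to the restricted set of probed vectors, which this sketch only approximates.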

  1. Prediction of interactions between viral and host proteins using supervised machine learning methods.

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Barman

    Full Text Available BACKGROUND: Viral-host protein-protein interaction plays a vital role in pathogenesis, since it defines viral infection of the host and regulation of the host proteins. Identification of key viral-host protein-protein interactions (PPIs) has great implications for therapeutics. METHODS: In this study, a systematic attempt has been made to predict viral-host PPIs by integrating different features, including domain-domain association, network topology and sequence information, using viral-host PPIs from VirusMINT. Three well-known supervised machine learning methods commonly used in the prediction of PPIs, namely SVM, Naïve Bayes and Random Forest, were employed, and their performance was evaluated using five-fold cross-validation. RESULTS: Out of 44 descriptors, the best features were found to be domain-domain association and the methionine, serine and valine amino acid composition of viral proteins. In this study, the SVM-based method achieved a better sensitivity of 67% over Naïve Bayes (37.49%) and Random Forest (55.66%). However, the specificity of Naïve Bayes was the highest (99.52%) as compared with SVM (74%) and Random Forest (89.08%). Overall, SVM and Random Forest achieved accuracies of 71% and 72.41%, respectively. The proposed SVM-based method was evaluated on a blind dataset and attained a sensitivity of 64%, specificity of 83%, and accuracy of 74%. In addition, unknown potential targets of hepatitis B virus-human and hepatitis E virus-human PPIs have been predicted through the proposed SVM model and validated by gene ontology enrichment analysis. Our proposed model shows that hepatitis B virus "C protein" binds to membrane docking protein, while "X protein" and "P protein" interact with cell-killing and metabolic process proteins, respectively. CONCLUSION: The proposed method can predict large-scale interspecies viral-human PPIs. The nature and function of unknown viral proteins (HBV and HEV, interacting partners of host
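The evaluation protocol described here (three classifiers compared by sensitivity and specificity under five-fold cross-validation) can be sketched with scikit-learn on synthetic data standing in for the VirusMINT-derived features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for the 44 viral-host interaction descriptors
X, y = make_classification(n_samples=300, n_features=44, n_informative=10,
                           random_state=0)

models = {"SVM": SVC(), "NaiveBayes": GaussianNB(),
          "RandomForest": RandomForestClassifier(random_state=0)}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for name, model in models.items():
    y_pred = cross_val_predict(model, X, y, cv=cv)
    tp = np.sum((y == 1) & (y_pred == 1)); fn = np.sum((y == 1) & (y_pred == 0))
    tn = np.sum((y == 0) & (y_pred == 0)); fp = np.sum((y == 0) & (y_pred == 1))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```

`cross_val_predict` pools the out-of-fold predictions so sensitivity and specificity are computed once over all five folds, mirroring the paper's comparison table.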

  2. Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods.

    Science.gov (United States)

    Shan, Juan; Alam, S Kaisar; Garra, Brian; Zhang, Yingtao; Ahmed, Tahira

    2016-04-01

    This work identifies effective computable features from the Breast Imaging Reporting and Data System (BI-RADS) to develop a computer-aided diagnosis (CAD) system for breast ultrasound. Computerized features corresponding to ultrasound BI-RADS categories were designed and tested using a database of 283 pathology-proven benign and malignant lesions. Features were selected based on classification performance using a "bottom-up" approach for different machine learning methods, including decision tree, artificial neural network, random forest and support vector machine. Using 10-fold cross-validation on the database of 283 cases, the highest area under the receiver operating characteristic (ROC) curve (AUC) was 0.84, from a support vector machine with 77.7% overall accuracy; the highest overall accuracy, 78.5%, was from a random forest with an AUC of 0.83. Lesion margin and orientation were optimum features common to all of the different machine learning methods. These features can be used in CAD systems to help distinguish benign from worrisome lesions. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. All rights reserved.

  3. Multipolar electrostatics based on the Kriging machine learning method: an application to serine.

    Science.gov (United States)

    Yuan, Yongna; Mills, Matthew J L; Popelier, Paul L A

    2014-04-01

    A multipolar, polarizable electrostatic method for future use in a novel force field is described. Quantum Chemical Topology (QCT) is used to partition the electron density of a chemical system into atoms, then the machine learning method Kriging is used to build models that relate the multipole moments of the atoms to the positions of their surrounding nuclei. The pilot system serine is used to study both the influence of the level of theory and the set of data generator methods used. The latter consists of: (i) sampling of protein structures deposited in the Protein Data Bank (PDB), or (ii) normal mode distortion along either (a) Cartesian coordinates, or (b) redundant internal coordinates. Wavefunctions for the sampled geometries were obtained at the HF/6-31G(d,p), B3LYP/apc-1, and MP2/cc-pVDZ levels of theory, prior to calculation of the atomic multipole moments by volume integration. The average absolute error (over an independent test set of conformations) in the total atom-atom electrostatic interaction energy of serine, using Kriging models built with the three data generator methods is 11.3 kJ mol⁻¹ (PDB), 8.2 kJ mol⁻¹ (Cartesian distortion), and 10.1 kJ mol⁻¹ (redundant internal distortion) at the HF/6-31G(d,p) level. At the B3LYP/apc-1 level, the respective errors are 7.7 kJ mol⁻¹, 6.7 kJ mol⁻¹, and 4.9 kJ mol⁻¹, while at the MP2/cc-pVDZ level they are 6.5 kJ mol⁻¹, 5.3 kJ mol⁻¹, and 4.0 kJ mol⁻¹. The ranges of geometries generated by the redundant internal coordinate distortion and by extraction from the PDB are much wider than the range generated by Cartesian distortion. The atomic multipole moment and electrostatic interaction energy predictions for the B3LYP/apc-1 and MP2/cc-pVDZ levels are similar, and both are better than the corresponding predictions at the HF/6-31G(d,p) level.
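Kriging as used here is essentially Gaussian process regression: a model mapping nuclear geometry to an atomic multipole moment. A minimal sketch with a synthetic target function (not QCT multipole data; the three inputs and the formula are illustrative only) might look like:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical training set: 3 internal coordinates -> one atomic multipole moment
X_train = rng.uniform(-1.0, 1.0, size=(60, 3))
q_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2 - 0.3 * X_train[:, 2]

# Kriging = Gaussian process regression with a stationary kernel; alpha adds a
# small diagonal jitter for numerical stability
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6,
                              normalize_y=True)
gp.fit(X_train, q_train)

# Predict the moment (and its uncertainty) for unseen geometries
X_test = rng.uniform(-1.0, 1.0, size=(10, 3))
q_pred, q_std = gp.predict(X_test, return_std=True)
```

In the paper's setting, one such model would be trained per atom and per multipole moment, with the sampled conformations (PDB, Cartesian or redundant-internal distortions) supplying `X_train`.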

  4. Hand washing in operating room: a procedural comparison

    Directory of Open Access Journals (Sweden)

    Alessia Stilo

    2016-09-01

    Full Text Available BACKGROUND Hand washing has been considered a measure of personal hygiene for centuries, and it is known that improper hand hygiene by healthcare workers is responsible for about 40% of nosocomial infections. Surgical hand preparation is therefore a critical element of healthcare safety, reducing microbial contamination of the surgical wound in case of an undetected break in the gloves. The aim of our study is to evaluate the efficacy of three antiseptics: Povi-iodine scrub; EPG (Ethanol, Hydrogen Peroxide, Glycerol), recommended by WHO; and common Marseille soap in a liquid formulation. METHODS A randomized, double-blind, single-center study was conducted in the University Hospital of Messina from January to June 2013. We asked operators to place the fingertips of their right hand (if not left-handed) for one minute on the PCA medium, before washing with the three types of antiseptics, and after washing and drying. Drying was done using sterile gauzes or disposable wipes. We then measured the number of colony forming units per mL (CFU/mL) and calculated the percentage of microbial load reduction. RESULTS 211 samples were considered for statistical analysis: in 42 samples, in fact, the initial microbial load was lower than after washing. Washing with EPG reduced CFU/mL from a mean of 38.9 to 4.1 (86.5% reduction), washing with Povi-iodine scrub from 59.55 to 12.9 (75.9% reduction), and washing with Marseille soap from 47.26 to 12.7 (64.3% reduction). CONCLUSIONS Our study shows that washing with EPG has superior efficacy in CFU reduction. Antiseptic hand washing, however, cannot be considered the only measure to reduce infections: the anomaly of some results (initial microbial load lower than after washing) demonstrates that drying is an essential phase in pre-surgical preparation. Therefore, hand hygiene must be part of a more complex strategy of surveillance and control of nosocomial infections.

  5. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care
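The manual iteration over hyper-parameter values that the proposal aims to automate is conventionally handled by search procedures such as the following sketch (synthetic data; this is a generic randomized search, not the Auto-ML software itself):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Stand-in for a clinical prediction task (e.g. flagging high-risk patients)
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# A tiny slice of the hyper-parameter space that automated model selection
# explores instead of relying on manual data-scientist iterations
param_space = {"n_estimators": [50, 100, 200],
               "max_depth": [3, 5, None],
               "min_samples_leaf": [1, 5, 10]}

search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_space, n_iter=8, cv=3, random_state=0)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```

The proposal goes further by also searching over algorithms and over operators/periods for temporal aggregation of clinical attributes, which a plain hyper-parameter search does not cover.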

  6. Methods, systems and apparatus for controlling operation of two alternating current (AC) machines

    Science.gov (United States)

    Gallegos-Lopez, Gabriel [Torrance, CA; Nagashima, James M [Cerritos, CA; Perisic, Milun [Torrance, CA; Hiti, Silva [Redondo Beach, CA

    2012-02-14

    A system is provided for controlling two AC machines. The system comprises a DC input voltage source that provides a DC input voltage, a voltage boost command control module (VBCCM), a five-phase PWM inverter module coupled to the two AC machines, and a boost converter coupled to the inverter module and the DC input voltage source. The boost converter is designed to supply a new DC input voltage to the inverter module having a value that is greater than or equal to a value of the DC input voltage. The VBCCM generates a boost command signal (BCS) based on modulation indexes from the two AC machines. The BCS controls the boost converter such that the boost converter generates the new DC input voltage in response to the BCS. When the two AC machines require additional voltage that exceeds the DC input voltage required to meet a combined target mechanical power required by the two AC machines, the BCS controls the boost converter to drive the new DC input voltage generated by the boost converter to a value greater than the DC input voltage.

  7. Identifying essential genes in bacterial metabolic networks with machine learning methods

    Science.gov (United States)

    2010-01-01

    Background Identifying essential genes in bacteria supports the identification of potential drug targets and an understanding of the minimal requirements for a synthetic cell. However, experimentally assaying the essentiality of their coding genes is resource intensive and not feasible for all bacterial organisms, in particular if they are infective. Results We developed a machine learning technique to identify essential genes, using the experimental data of genome-wide knock-out screens from one bacterial organism to infer essential genes of another related bacterial organism. We used a broad variety of topological features, sequence characteristics and co-expression properties potentially associated with essentiality, such as flux deviations, centrality, codon frequencies of the sequences, co-regulation and phyletic retention. An organism-wise cross-validation on bacterial species yielded reliable results with good accuracies (area under the receiver-operator curve of 75% - 81%). Finally, the method was applied to drug target predictions for Salmonella typhimurium. We compared our predictions to the viability of experimental knock-outs of S. typhimurium and identified 35 enzymes which are highly relevant as potential drug targets. Specifically, we detected promising drug targets in the non-mevalonate pathway. Conclusions Using elaborate features characterizing network topology, sequence information and microarray data enables the prediction of essential genes from a bacterial reference organism to a related query organism without any knowledge about the essentiality of the query organism's genes. In general, such a method is beneficial for inferring drug targets when experimental data about genome-wide knockout screens is not available for the investigated organism. PMID:20438628
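The organism-wise cross-validation described above, where a model is trained on all but one organism and tested on the held-out one, can be sketched with scikit-learn's leave-one-group-out splitter (synthetic gene features and hypothetical organism labels, not the paper's data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic gene features (topology, codon frequencies, ...) for 3 organisms
X, y = make_classification(n_samples=300, n_features=15, random_state=1)
organism = np.repeat([0, 1, 2], 100)   # which bacterium each gene belongs to

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=organism):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    acc = clf.score(X[test_idx], y[test_idx])
    held_out = organism[test_idx][0]
    print(f"trained on other organisms, tested on organism {held_out}: acc={acc:.2f}")
```

Grouping by organism rather than splitting genes at random is what makes the estimate honest for the cross-organism transfer the paper targets.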

  8. Extensions and applications of ensemble-of-trees methods in machine learning

    Science.gov (United States)

    Bleich, Justin

    Ensemble-of-trees algorithms have emerged at the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability-model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques, with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART, as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of
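For the second part of the thesis, asymmetric forecasting costs are commonly encoded as class weights in a random forest; a minimal sketch (synthetic data and an assumed 5:1 cost ratio, not the thesis's readmission data) might be:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for an asymmetric-cost problem such as readmission forecasting:
# here missing a positive (readmission) is assumed to cost 5x a false alarm
X, y = make_classification(n_samples=600, n_features=10, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
costly = RandomForestClassifier(class_weight={0: 1, 1: 5},
                                random_state=0).fit(X_tr, y_tr)

for name, clf in [("equal costs", plain), ("5:1 costs", costly)]:
    pred = clf.predict(X_te)
    missed = int(((y_te == 1) & (pred == 0)).sum())
    print(f"{name}: missed positives = {missed}")
```

Weighting the rare, expensive class typically trades some false alarms for fewer missed positives, which is the asymmetry the thesis builds into its RF forecasts.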

  9. Applicability of the Reactor Safety Study (WASH-1400) to LMFBR risk assessments

    International Nuclear Information System (INIS)

    El-Sheikh, K.A.; Feller, K.G.; Fleischer, L.; Greebler, P.; McDonald, A.; Sultan, P.; Temme, M.I.; Fullwood, R.R.

    1976-01-01

    The feasibility of applying the WASH-1400 methods and data to LMFBR risk assessment is evaluated using the following approach for a selected LMFBR: (1) Structuring the LMFBR risk assessment problem in a modular form similar to WASH-1400; (2) Comparing the predictive tools applicable to each module; (3) Comparing the dependencies among the various modules. It is concluded that the WASH-1400 applicability is limited due to LWR-LMFBR differences in operating environments and accident phenomena. WASH-1400 and LMFBR specific methods applicable to LMFBR risk assessments are indicated

  10. Simple gambling or sophisticated gaming? : applying game analysis methods to modern video slot machine games

    OpenAIRE

    Leppäsalko, Tero

    2017-01-01

    Slot machine games have become the most popular form of gambling worldwide. In Finland, their pervasiveness in public spaces and their popularity make them one of the most common forms of gaming. However, in game studies, gambling games are often regarded as borderline games due to the player’s lack of control. In this thesis I ask whether modern video slot machine games can be considered as games and, if so, what similarities there are between them and contemporary video games. To find out if m...

  11. Primal Domain Decomposition Method with Direct and Iterative Solver for Circuit-Field-Torque Coupled Parallel Finite Element Method to Electric Machine Modelling

    Directory of Open Access Journals (Sweden)

    Daniel Marcsa

    2015-01-01

    Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced in a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, the external circuit and the rotor movement. The proposed parallel direct solver and the iterative solver with two preconditioners are analyzed concerning their computational efficiency and the number of iterations of the solver with different preconditioners. Simulation results of a rotating machine are also presented.
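The PCG solver at the heart of this formulation is a short algorithm; a plain-numpy sketch with a Jacobi (diagonal) preconditioner on a small synthetic symmetric positive definite system illustrates it (the finite element stiffness matrix here is faked by a random SPD matrix):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned CG for an SPD matrix A with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r           # apply the preconditioner: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD system standing in for a finite element stiffness matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)

x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))
```

In the paper's parallel setting, the matrix-vector product and the preconditioner application are the operations distributed across subdomains; the iteration itself stays the same.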

  12. Effect of dielectric fluid with surfactant and graphite powder on Electrical Discharge Machining of titanium alloy using Taguchi method

    Directory of Open Access Journals (Sweden)

    Murahari Kolli

    2015-12-01

    Full Text Available In this paper, the Taguchi method was employed to optimize the surfactant and graphite powder concentration in the dielectric fluid for the machining of Ti-6Al-4V using Electrical Discharge Machining (EDM). Process parameters such as discharge current, surfactant concentration and powder concentration were varied to explore their effects on the Material Removal Rate (MRR), Surface Roughness (SR), Tool Wear Rate (TWR) and Recast Layer Thickness (RLT). Detailed analysis of the structural features of the machined surface was carried out using a Scanning Electron Microscope (SEM) to observe the influence of the surfactant and graphite powder on the machining process. It was observed from the experimental results that the dielectric fluid with added graphite powder and surfactant significantly improved the MRR and reduced the SR, TWR and RLT at various conditions. Analysis of Variance (ANOVA) and F-tests of the experimental data related to the important process parameters of EDM revealed that discharge current and surfactant concentration contribute the largest percentage to the MRR and TWR, whereas the SR and RLT were affected most by the discharge current and graphite powder concentration.
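Taguchi analysis ranks parameter levels by signal-to-noise ratio; a minimal sketch of the two standard S/N formulas, with hypothetical MRR readings rather than the paper's measurements:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for responses to maximize, e.g. MRR."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for responses to minimize, e.g. SR, TWR, RLT."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical repeated MRR readings at two discharge-current levels
mrr_low_current  = [4.1, 4.3, 3.9]
mrr_high_current = [9.8, 10.4, 9.5]

for label, runs in [("low current", mrr_low_current),
                    ("high current", mrr_high_current)]:
    print(f"{label}: S/N = {sn_larger_is_better(runs):.2f} dB")
```

The level with the highest S/N ratio for each factor is selected as optimal; the ANOVA and F-test step then quantifies each factor's percentage contribution.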

  13. Wash water waste pretreatment system

    Science.gov (United States)

    1977-01-01

    Investigations were completed on wash waters based on each candidate personal cleansing agent. Evaluations of coagulants, antifoam agents, and the effect of promising antifoams on the chemical precipitation were included. Based on these evaluations two candidate soaps as well as their companion antifoam agents were selected for further work. Operating parameters included the effect of soap concentration, ferric chloride concentration, duration of mixing, and pore size of depth filters on the degree of soap removal. The effect of pressure on water flow through filter cartridges and on the rate of decline of water flow was also investigated. The culmination of the program was the recommendation of a pretreatment concept based on chemical precipitation followed by pressure filtration.

  14. Detecting wash trade in the financial market

    OpenAIRE

    Cao, Yi; Li, Yuhua; Coleman, Sonya; Belatreche, Ammar; McGinnity, T. M.

    2014-01-01

    Wash trade refers to the activities of traders who use deliberately designed collusive transactions to inflate trading volumes and create the impression of an active market. Wash trade can be damaging to the proper functioning and integrity of capital markets. Existing work focuses on collusive clique detection based on certain assumptions about trading behaviours. Effective approaches for analysing and detecting wash trade in a real-life market have yet to be developed. T...

  15. 21 CFR 1250.87 - Wash water.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wash water. 1250.87 Section 1250.87 Food and Drugs... Sanitation Facilities and Conditions on Vessels § 1250.87 Wash water. Where systems installed on vessels for wash water, as defined in § 1250.3(n), do not comply with the requirements of a potable water system...

  16. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machines' methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machines' methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, SVR and PC-linANN, respectively. The results showed the superiority of supervised learning machines' methods over principal-component-based methods. Besides, the results suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
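Principal component regression, one of the benchmarked methods, reduces the spectra to a few scores before regression; a sketch on synthetic single-analyte spectra (not the AGM data; band shape and concentrations are invented) together with an RMSEP-style figure of merit:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic overlapped "spectra": 40 mixtures x 100 wavelengths, one analyte
conc = rng.uniform(0.1, 1.0, size=40)                  # hypothetical analyte levels
bands = np.exp(-0.5 * ((np.arange(100) - 50) / 8.0) ** 2)
spectra = np.outer(conc, bands) + 0.01 * rng.standard_normal((40, 100))

train, test = np.arange(30), np.arange(30, 40)

# Principal component regression: PCA scores fed to ordinary least squares
pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(spectra[train], conc[train])
pred = pcr.predict(spectra[test])

rmsep = np.sqrt(np.mean((pred - conc[test]) ** 2))
print(f"RMSEP = {rmsep:.4f}")
```

PLS differs from this only in computing components that maximize covariance with the concentrations rather than spectral variance alone.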

  17. A Sensor-less Method for Online Thermal Monitoring of Switched Reluctance Machine

    DEFF Research Database (Denmark)

    Wang, Chao; Liu, Hui; Liu, Xiao

    2015-01-01

    The stator winding is one of the most vulnerable parts of a Switched Reluctance Machine (SRM), especially under thermal stresses during frequently changing operating circumstances and poor heat dissipation conditions. Thus, real-time online thermal monitoring of the stator winding is of great sig...

  18. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    DEFF Research Database (Denmark)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-01-01

    to impose the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite...

  19. Friction-resilient position control for machine tools—Adaptive and sliding-mode methods compared

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2018-01-01

    Robust trajectory tracking and increasing demand for high-accuracy tool positioning have motivated research in advanced control design for machine tools. State-of-the-art industry solutions employ cascades of Proportional (P) and Proportional-Integral (PI) controllers for closed-loop servo contro...

  20. The Librarian Leading the Machine: A Reassessment of Library Instruction Methods

    Science.gov (United States)

    Greer, Katie; Hess, Amanda Nichols; Kraemer, Elizabeth W.

    2016-01-01

    This article builds on the 2007 College and Research Libraries article, "The Librarian, the Machine, or a Little of Both." Since that time, Oakland University Libraries implemented changes to its instruction program that reflect larger trends in teaching and assessment throughout the profession; following these revisions, librarians…

  1. Development of Methods of Preparing Materials for Teaching Machines: Professional Paper 29-68.

    Science.gov (United States)

    Skinner, B. F.; Zook, Lola M., Ed.

    In the preparation of 12-inch disc teaching machine materials for elementary college courses, a preliminary analysis of subject matter and required skills precedes sequential framing. The programer must assess the beginning level of student competence and frame questions to supply new material until the proper response stands alone. Statements for…

  2. A Method for Identifying the Mechanical Parameters in Resistance Spot Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    The mechanical dynamic responses of a resistance welding machine have a significant influence on weld quality and electrode service life, and must be considered when real welding production is carried out or when the welding process is simulated. The mathematical models for characterizing the mechanical

  3. Magnetic modeling of a Linear Synchronous Machine with the spectral element method

    NARCIS (Netherlands)

    Curti, M.; Paulides, J.J.H.; Lomonova, E.

    2017-01-01

    The field calculus for electrical machines is realized solving subdomain problems. Most often, the latter are solved using either finite element analysis or the semi-analytical solution of a Laplace or Poisson equation obtained by separation of variables. The first option can capture complex

  4. Magnetic modeling of a Linear Synchronous Machine with the spectral element method

    NARCIS (Netherlands)

    Curti, M.; Paulides, J.J.H.; Lomonova, E.

    2017-01-01

    The field calculus for electrical machines (EMs) is realized solving subdomain problems. Most often, the latter are solved using either finite element analysis (FEA) or the semi-analytical solution of a Laplace or Poisson equation obtained by separation of variables. The first option can capture

  5. Machine Learning Methods for Knowledge Discovery in Medical Data on Atherosclerosis

    Czech Academy of Sciences Publication Activity Database

    Serrano, J.I.; Tomečková, Marie; Zvárová, Jana

    2006-01-01

    Roč. 1, - (2006), s. 6-33 ISSN 1801-5603 Institutional research plan: CEZ:AV0Z10300504 Keywords : knowledge discovery * supervised machine learning * biomedical data mining * risk factors of atherosclerosis Subject RIV: BB - Applied Statistics, Operational Research

  6. A study on dynamic evaluation methods for human-machine interfaces in advanced control rooms

    International Nuclear Information System (INIS)

    Park, Jin Kyun

    1998-02-01

    Extensive efforts have been made to reveal the factors that largely affect the safety of nuclear power plants (NPPs). Among them, human factors are known as a dominant cause of severe accidents, such as the Three Mile Island and Chernobyl accidents. Thus, much effort has been spent to resolve human-factors-related problems, and one of these efforts is advanced control room (ACR) design to enhance human performance and the safety of NPPs. There are two important trends in the design of ACRs. The first is an increasing level of automation, and the second is the development of computer-based compact workstations for control room operations, including intelligent operator aid systems. However, several problems have been reported when other factors are not properly incorporated into the design of ACRs. Among them, one of the most important factors that significantly affects operator performance is the design of human-machine interfaces (HMIs). Thus, HMI evaluation should be emphasized to ensure the appropriateness of HMI designs and the safety of NPPs. In general, two kinds of evaluations have been frequently used to assess the appropriateness of a proposed HMI design: the static evaluation and the dynamic evaluation. Here, the static evaluation is based on guidelines extracted from various studies on HMI designs, and the dynamic evaluation generally attempts to evaluate and predict human performance through a model that can describe the cognitive behaviors of humans or the interactions between HMIs and humans. However, the static evaluation seems inappropriate because it cannot properly capture the context of the task environment, which strongly affects human performance. In addition, in the case of dynamic evaluations, the development of a model that can sufficiently describe the interactions or cognitive behaviors of human operators is very arduous and laborious. To overcome these problems, dynamic evaluation methods that can

  7. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, comprising features and decision rules, is central to land cover classification. In GEOBIA, features and decision rules are usually selected through an iterative trial-and-error approach, which is time-consuming and generalizes poorly. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets efficiently, avoiding the iterative trial-and-error approach, while human knowledge compensates for the insufficient use of prior knowledge in existing machine learning methods and improves the versatility of the rule sets. A two-step workflow is introduced: first, an initial rule set is built with Random Forest and a CART decision tree; second, the initial rule set is analysed and validated against human knowledge, using statistical confidence intervals to determine the thresholds. The test site is located in Potsdam City; we utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for the different land cover classes.
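
    As a rough illustration of the rule-induction step, the sketch below finds the single CART-style threshold split that minimises Gini impurity and reports it as a human-readable rule. The feature names (NDVI, height) and the samples are invented for illustration, not the study's data.

```python
# Hypothetical sketch: derive a one-feature threshold rule (a CART-style split)
# from labelled samples, mimicking the idea of letting the learner propose
# rules that an analyst then validates.

def best_rule(samples, labels, feature_names):
    """Return (feature, threshold) of the binary split with lowest Gini impurity."""
    def gini(subset):
        n = len(subset)
        if n == 0:
            return 0.0
        p = sum(subset) / n            # fraction of positives in the subset
        return 2.0 * p * (1.0 - p)

    best = (None, None, float("inf"))
    for f in range(len(samples[0])):
        values = sorted({s[f] for s in samples})
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2.0        # candidate threshold between observed values
            left = [l for s, l in zip(samples, labels) if s[f] <= t]
            right = [l for s, l in zip(samples, labels) if s[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(samples)
            if score < best[2]:
                best = (f, t, score)
    f, t, _ = best
    return feature_names[f], t

# Toy data: NDVI cleanly separates "vegetation" (1) from "building" (0).
X = [(0.1, 30.0), (0.2, 28.0), (0.7, 12.0), (0.8, 10.0)]
y = [0, 0, 1, 1]
feature, threshold = best_rule(X, y, ["NDVI", "height"])
print(feature, round(threshold, 2))  # NDVI 0.45
```

    In the paper's workflow, a split like this would then be checked and thresholded against domain knowledge rather than accepted as-is.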

  8. Designing Focused Chemical Libraries Enriched in Protein-Protein Interaction Inhibitors using Machine-Learning Methods

    Science.gov (United States)

    Reynès, Christelle; Host, Hélène; Camproux, Anne-Claude; Laconde, Guillaume; Leroux, Florence; Mazars, Anne; Deprez, Benoit; Fahraeus, Robin; Villoutreix, Bruno O.; Sperandio, Olivier

    2010-01-01

    Protein-protein interactions (PPIs) may represent one of the next major classes of therapeutic targets. So far, only a minute fraction of the estimated 650,000 PPIs that comprise the human interactome are known with a tiny number of complexes being drugged. Such intricate biological systems cannot be cost-efficiently tackled using conventional high-throughput screening methods. Rather, time has come for designing new strategies that will maximize the chance for hit identification through a rationalization of the PPI inhibitor chemical space and the design of PPI-focused compound libraries (global or target-specific). Here, we train machine-learning-based models, mainly decision trees, using a dataset of known PPI inhibitors and of regular drugs in order to determine a global physico-chemical profile for putative PPI inhibitors. This statistical analysis unravels two important molecular descriptors for PPI inhibitors characterizing specific molecular shapes and the presence of a privileged number of aromatic bonds. The best model has been transposed into a computer program, PPI-HitProfiler, that can output from any drug-like compound collection a focused chemical library enriched in putative PPI inhibitors. Our PPI inhibitor profiler is challenged on the experimental screening results of 11 different PPIs among which the p53/MDM2 interaction screened within our own CDithem platform, that in addition to the validation of our concept led to the identification of 4 novel p53/MDM2 inhibitors. Collectively, our tool shows a robust behavior on the 11 experimental datasets by correctly profiling 70% of the experimentally identified hits while removing 52% of the inactive compounds from the initial compound collections. We strongly believe that this new tool can be used as a global PPI inhibitor profiler prior to screening assays to reduce the size of the compound collections to be experimentally screened while keeping most of the true PPI inhibitors. PPI-HitProfiler is

  9. Designing focused chemical libraries enriched in protein-protein interaction inhibitors using machine-learning methods.

    Directory of Open Access Journals (Sweden)

    Christelle Reynès

    2010-03-01

    Full Text Available Protein-protein interactions (PPIs) may represent one of the next major classes of therapeutic targets. So far, only a minute fraction of the estimated 650,000 PPIs that comprise the human interactome are known with a tiny number of complexes being drugged. Such intricate biological systems cannot be cost-efficiently tackled using conventional high-throughput screening methods. Rather, time has come for designing new strategies that will maximize the chance for hit identification through a rationalization of the PPI inhibitor chemical space and the design of PPI-focused compound libraries (global or target-specific). Here, we train machine-learning-based models, mainly decision trees, using a dataset of known PPI inhibitors and of regular drugs in order to determine a global physico-chemical profile for putative PPI inhibitors. This statistical analysis unravels two important molecular descriptors for PPI inhibitors characterizing specific molecular shapes and the presence of a privileged number of aromatic bonds. The best model has been transposed into a computer program, PPI-HitProfiler, that can output from any drug-like compound collection a focused chemical library enriched in putative PPI inhibitors. Our PPI inhibitor profiler is challenged on the experimental screening results of 11 different PPIs among which the p53/MDM2 interaction screened within our own CDithem platform, that in addition to the validation of our concept led to the identification of 4 novel p53/MDM2 inhibitors. Collectively, our tool shows a robust behavior on the 11 experimental datasets by correctly profiling 70% of the experimentally identified hits while removing 52% of the inactive compounds from the initial compound collections. We strongly believe that this new tool can be used as a global PPI inhibitor profiler prior to screening assays to reduce the size of the compound collections to be experimentally screened while keeping most of the true PPI inhibitors. PPI

  10. Designing focused chemical libraries enriched in protein-protein interaction inhibitors using machine-learning methods.

    Science.gov (United States)

    Reynès, Christelle; Host, Hélène; Camproux, Anne-Claude; Laconde, Guillaume; Leroux, Florence; Mazars, Anne; Deprez, Benoit; Fahraeus, Robin; Villoutreix, Bruno O; Sperandio, Olivier

    2010-03-05

    Protein-protein interactions (PPIs) may represent one of the next major classes of therapeutic targets. So far, only a minute fraction of the estimated 650,000 PPIs that comprise the human interactome are known with a tiny number of complexes being drugged. Such intricate biological systems cannot be cost-efficiently tackled using conventional high-throughput screening methods. Rather, time has come for designing new strategies that will maximize the chance for hit identification through a rationalization of the PPI inhibitor chemical space and the design of PPI-focused compound libraries (global or target-specific). Here, we train machine-learning-based models, mainly decision trees, using a dataset of known PPI inhibitors and of regular drugs in order to determine a global physico-chemical profile for putative PPI inhibitors. This statistical analysis unravels two important molecular descriptors for PPI inhibitors characterizing specific molecular shapes and the presence of a privileged number of aromatic bonds. The best model has been transposed into a computer program, PPI-HitProfiler, that can output from any drug-like compound collection a focused chemical library enriched in putative PPI inhibitors. Our PPI inhibitor profiler is challenged on the experimental screening results of 11 different PPIs among which the p53/MDM2 interaction screened within our own CDithem platform, that in addition to the validation of our concept led to the identification of 4 novel p53/MDM2 inhibitors. Collectively, our tool shows a robust behavior on the 11 experimental datasets by correctly profiling 70% of the experimentally identified hits while removing 52% of the inactive compounds from the initial compound collections. We strongly believe that this new tool can be used as a global PPI inhibitor profiler prior to screening assays to reduce the size of the compound collections to be experimentally screened while keeping most of the true PPI inhibitors. PPI-HitProfiler is
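
    The profiling idea can be caricatured in a few lines: a decision-tree-style filter keeps compounds whose descriptors fall in a PPI-inhibitor-like range. The descriptor names, cutoff values and compound records below are invented for illustration; the real PPI-HitProfiler applies trees trained on the curated datasets described in the abstract.

```python
# Illustrative sketch only, in the spirit of PPI-HitProfiler: a hand-written
# two-node "decision tree" over a shape descriptor and the aromatic-bond count,
# the two kinds of descriptors the study highlights. All values are assumptions.

def keep_for_ppi_screening(compound):
    """Flag compounds whose shape and aromatic-bond count fall in the
    (assumed) PPI-inhibitor-like range."""
    if compound["shape_index"] < 0.4:        # too flat/linear (assumed cutoff)
        return False
    return compound["aromatic_bonds"] >= 8   # privileged aromatic content (assumed)

library = [
    {"name": "cpd-001", "shape_index": 0.55, "aromatic_bonds": 12},
    {"name": "cpd-002", "shape_index": 0.30, "aromatic_bonds": 15},
    {"name": "cpd-003", "shape_index": 0.62, "aromatic_bonds": 4},
]
focused = [c["name"] for c in library if keep_for_ppi_screening(c)]
print(focused)  # ['cpd-001']
```

    Run over a full drug-like collection, such a filter yields the "focused chemical library" that is then passed to the experimental screen.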

  11. 100 Area soil washing bench-scale test procedures

    International Nuclear Information System (INIS)

    Freeman, H.D.; Gerber, M.A.; Mattigod, S.V.; Serne, R.J.

    1993-03-01

    This document describes methodologies and procedures for conducting soil washing treatability tests in accordance with the 100 Area Soil Washing Treatability Test Plan (DOE-RL 1992, Draft A). The objective of this treatability study is to evaluate the use of physical separation systems and chemical extraction methods as a means of separating chemically and radioactively contaminated soil fractions from uncontaminated soil fractions. These data will be used primarily to determine the feasibility of the individual unit operations and to define the requirements for a system, or systems, for pilot-scale testing.

  12. Effectiveness of a nonrinse, alcohol-free antiseptic hand wash.

    Science.gov (United States)

    Moadab, A; Rupley, K F; Wadhams, P

    2001-06-01

    This study evaluated the efficacy of a novel surfactant, allantoin, and benzalkonium chloride hand sanitizer using the US Food and Drug Administration's method for testing antiseptic hand washes that podiatric physicians and other health-care personnel use. The alcohol-free product, HandClens, was compared with an alcohol-based product, Purell. Independent researchers from the California College of Podiatric Medicine conducted the study using 40 volunteer students from the class of 2001. The results show that HandClens outperformed Purell and met the regulatory requirements for a hand sanitizer. Purell failed as an antimicrobial hand wash and was less effective than a control soap used in the study.

  13. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    Science.gov (United States)

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (222Rn). The physical assumption underlying the modelling is that the Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics, and the performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying learning machines we show, as our second purpose, that missing data or periods of Rn series data can be reasonably reconstructed and resampled on a regular grid, if data on appropriate physical controls are available. The techniques also identify to which degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to which degree physical, in this case environmental, variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year. The first two are physical
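
    The imputation scheme described above (learn Rn from environmental controls on complete sections, then predict the gaps) can be sketched with a plain linear model standing in for the gradient boosting machine; the series below is synthetic, not the study's data.

```python
import numpy as np

# Sketch (not the paper's code): fill gaps in a radon series from environmental
# covariates. Synthetic ground truth lets us check the reconstruction error.
rng = np.random.default_rng(0)
temp = rng.uniform(5, 25, 200)           # air temperature
hum = rng.uniform(30, 90, 200)           # relative humidity
radon = 40.0 - 1.2 * temp + 0.5 * hum + rng.normal(0, 1.0, 200)

missing = np.zeros(200, dtype=bool)
missing[150:] = True                     # pretend the last 50 Rn values are lost

# Fit on complete rows only: radon ~ b0 + b1*temp + b2*hum
X = np.column_stack([np.ones(200), temp, hum])
coef, *_ = np.linalg.lstsq(X[~missing], radon[~missing], rcond=None)

imputed = X[missing] @ coef              # reconstruct the missing stretch
rmse = np.sqrt(np.mean((imputed - radon[missing]) ** 2))
print(round(float(rmse), 2))             # expected near the injected noise level (~1.0)
```

    Swapping the least-squares fit for a gradient boosting regressor would reproduce the study's setup more faithfully while leaving the train-on-complete / predict-on-missing structure unchanged.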

  14. 100 Areas soil washing tradeoff study

    International Nuclear Information System (INIS)

    Belden, R.D.

    1995-11-01

    The complex nature of cost analysis and systems work demands a level of effort to ensure that decisions made support the best interests of all parties. This tradeoff study will act as a formal decision analysis method for the evaluation of many variables. The documentation of the decision rationale and system design is essential for successful planning and implementation of any system. The Hanford Site offers unique problems for economic analysis of remediation alternatives. The variations in the size of sites, geographic locations, and possible cleanup scenarios all add to the complexity of the tradeoff analysis. A thorough examination of all alternatives must be held to a level of detail appropriate to current regulatory and budgetary considerations. This study will compare the economics of two specific alternatives for remediation of soils at the Hanford Site. Remove and dispose is compared to remove, treat, and dispose. The treatment analyzed in this study is volume reduction through soil washing.

  15. Soil washing and post-wash biological treatment of petroleum hydrocarbon contaminated soils

    OpenAIRE

    Bhandari, Alok

    1992-01-01

    A laboratory scale study was conducted to investigate the treatability of petroleum contaminated soils by soil washing and subsequent biological treatment of the different soil fractions. In addition to soils obtained from contaminated sites, studies were also performed on soils contaminated in the laboratory. Soil washing was performed using a bench-scale soil washing system. Washing was carried out with simultaneous fractionation of the bulk soil into sand, silt and clay fractions. Cl...

  16. Centrifugation-free washing: A novel approach for removing immunoglobulin A from stored red blood cells.

    Science.gov (United States)

    Vörös, Eszter; Piety, Nathaniel Z; Strachan, Briony C; Lu, Madeleine; Shevkoplyas, Sergey S

    2018-08-01

    Washed red blood cells (RBCs) are indicated for immunoglobulin A (IgA) deficient recipients. Centrifugation-based cell processors commonly used by hospital blood banks cannot consistently reduce IgA below the recommended levels, hence double washing is frequently required. Here, we describe a prototype of a simple, portable, disposable system capable of washing stored RBCs without centrifugation, while reducing IgA below 0.05 mg/dL in a single run. Samples from RBC units (n = 8, leukoreduced, 4-6 weeks storage duration) were diluted with normal saline to a hematocrit of 10%, and then washed using either the prototype washing system, or via conventional centrifugation. The efficiency of the two washing methods was quantified and compared by measuring several key in vitro quality metrics. The prototype of the washing system was able to process stored RBCs at a rate of 300 mL/hour, producing a suspension of washed RBCs with 43 ± 3% hematocrit and 86 ± 7% cell recovery. Overall, the two washing methods performed similarly for most measured parameters, lowering the concentration of free hemoglobin by >4-fold and total free protein by >10-fold. Importantly, the new washing system reduced the IgA level to 0.02 ± 0.01 mg/mL, a concentration 5-fold lower than that produced by conventional centrifugation. This proof-of-concept study showed that centrifugation may be unnecessary for washing stored RBCs. A simple, disposable, centrifugation-free washing system could be particularly useful in smaller medical facilities and resource limited settings that may lack access to centrifugation-based cell processors. © 2017 Wiley Periodicals, Inc.

  17. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Full Text Available Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino-acids are systematically mutated to alanine and changes in free energy of binding (ΔΔG) measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and it compares favourably to other available methods. In particular we find the best performances using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall respectively of 56% and 65%. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been
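
    A minimal sketch of the hybrid idea follows: per-residue energy terms (van der Waals, solvation, hydrogen bond, Coulomb) become the feature vector of a learned classifier, with residues at ΔΔG ≥ 2 kcal/mol as the positive class. A tiny logistic regression trained by gradient descent stands in for the paper's SVM and Gaussian-process models, and the alanine-scan data are synthetic.

```python
import numpy as np

# Sketch, not the paper's method: energy terms as features for a learned
# hot-spot classifier, on synthetic data with an assumed linear ground truth.
rng = np.random.default_rng(1)
n = 300
E = rng.normal(0, 1, (n, 4))                     # 4 energy terms per residue
true_w = np.array([1.5, 0.8, 1.0, 0.6])          # assumed term contributions
ddG = E @ true_w + rng.normal(0, 0.3, n)         # noisy free-energy change
y = (ddG >= 2.0).astype(float)                   # hot spot if ΔΔG >= 2 kcal/mol

w, b = np.zeros(4), 0.0
for _ in range(2000):                            # plain full-batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(E @ w + b)))       # logistic prediction
    grad = p - y
    w -= 0.1 * (E.T @ grad) / n
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(E @ w + b)))) >= 0.5
accuracy = float((pred == (y == 1.0)).mean())
print(round(accuracy, 2))
```

    The point of the hybrid scheme is exactly this division of labour: physics supplies interpretable features, and the learner supplies the weighting that a fixed energy function would otherwise hard-code.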

  18. Methods of stabilizing a permanent maglev rotator in heart pumps and other rotary machines

    Directory of Open Access Journals (Sweden)

    kun-xi qian

    2014-10-01

    Full Text Available A permanent maglev rotator in a rotary machine can be stabilized, according to the author's experience, by use of a non-PM (permanent magnet) force acting together with the PM force and a non-PM bearing functioning together with the PM bearing, or by the so-called gyro-effect, which can stabilize all rotators including permanent maglev rotators. This paper presents both axially and radially driven permanent maglev centrifugal heart pumps, as well as a permanent maglev turbine machine and an industrially used permanent maglev centrifugal pump. In all these devices the permanent maglev rotators achieve stable equilibrium by different approaches, described in detail. Finally, an exhibition of the principle of the gyro-effect and a route chart to stabilization of the permanent maglev rotator are presented.

  19. Comparison between Genetic Algorithms and Particle Swarm Optimization Methods on Standard Test Functions and Machine Design

    DEFF Research Database (Denmark)

    Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika

    2013-01-01

    Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be kept in the same range. To achieve this goal, optimization must be introduced into the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are briefly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions, based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
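
    For readers unfamiliar with the two algorithms, here is a minimal genetic algorithm and particle swarm optimizer compared on the sphere benchmark f(x) = sum of x_i squared, one of the standard test functions of the kind the paper uses. Population sizes, rates and iteration counts are illustrative choices, not the paper's settings.

```python
import random

def sphere(x):
    """Sphere benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def run_ga(dim=5, pop=40, gens=200, seed=0):
    """Tiny GA: elitist selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=sphere)
        elite = P[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        P = elite + children
    return sphere(min(P, key=sphere))

def run_pso(dim=5, pop=40, iters=200, seed=0):
    """Tiny PSO with inertia weight and cognitive/social pulls."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=sphere)[:]
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - X[i][d])
                           + 1.5 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if sphere(X[i]) < sphere(pbest[i]):
                pbest[i] = X[i][:]
                if sphere(X[i]) < sphere(gbest):
                    gbest = X[i][:]
    return sphere(gbest)

ga_best, pso_best = run_ga(), run_pso()
print(f"GA best: {ga_best:.2e}  PSO best: {pso_best:.2e}")
```

    Comparing the two on the paper's three criteria then amounts to recording the final objective value, the generation/iteration count at convergence, and wall-clock time for each run.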

  20. System and method for smoothing a salient rotor in electrical machines

    Science.gov (United States)

    Raminosoa, Tsarafidy; Alexander, James Pellegrino; El-Refaie, Ayman Mohamed Fawzi; Torrey, David A.

    2016-12-13

    An electrical machine exhibiting reduced friction and windage losses is disclosed. The electrical machine includes a stator and a rotor assembly configured to rotate relative to the stator, wherein the rotor assembly comprises a rotor core including a plurality of salient rotor poles that are spaced apart from one another around an inner hub such that an interpolar gap is formed between each adjacent pair of salient rotor poles, with an opening being defined by the rotor core in each interpolar gap. Electrically non-conductive and non-magnetic inserts are positioned in the gaps formed between the salient rotor poles, with each of the inserts including a mating feature formed on an axially inner edge thereof that is configured to mate with a respective opening defined by the rotor core, so as to secure the insert to the rotor core against the centrifugal force experienced during rotation of the rotor assembly.

  1. Comparative Study of Powdered Ginger Drink Processed by Different Method:Traditional and using Evaporation Machine

    Science.gov (United States)

    Apriyana, Wuri; Taufika Rosyida, Vita; Nur Hayati, Septi; Darsih, Cici; Dewi Poeloengasih, Crescentiana

    2017-12-01

    Ginger drink is a traditional beverage and one of the products of most interest to consumers in Indonesia. The drink is believed to have excellent properties for the health of the body. In this study, we compared the moisture content, ash content, metal content and identified compounds of the product processed with the traditional technique and with an evaporator machine. The results show that both products fulfilled several parameters of the Indonesian National Standard for traditional powdered drinks. GC-MS analysis showed the compounds identified in both products. The major hydrocarbons that influence the flavor, such as zingiberene, camphene, beta-phellandrene, beta-sesquiphellandrene, curcumene and beta-bisabolene, were found at higher levels in ginger drink powder processed with the machine than in that processed traditionally.

  2. Mothers’ Hand washing Practice and Diarrhea Cases in Children under Five in Baleendah, Bandung

    Directory of Open Access Journals (Sweden)

    M Syafril Firdaus

    2015-06-01

    Full Text Available Background: Diarrhea is a disease with high mortality and morbidity rates in developing countries, including Indonesia, especially among children. Diarrhea can be prevented if people apply clean and healthy behaviors, especially hand washing. Hand washing is the cheapest, simplest and most effective method for the prevention of diarrhea. The objective of this study was to identify the knowledge, attitude and practice of mothers' hand washing and diarrhea cases in children under five in Baleendah District, Bandung. Methods: A descriptive study was conducted during September−November 2012 on 210 mothers who had children (ages 12−59 months) in Baleendah, using a rapid survey technique. The time allocated for each village was adjusted to the population proportion of each region. The data were analyzed by computer and presented using frequency distributions. Results: This study showed that the respondents had good levels of knowledge and attitude towards hand washing (83.8% and 61%, respectively), but only 21% of the respondents' hand-washing practice was at a good level. Most of the respondents did not wash their hands according to the 7 steps of correct hand washing. Moreover, the percentage of children with diarrhea in Baleendah was 43.8% (92 cases) during the study. Conclusions: Information about the benefits of washing hands with the 7 steps of correct hand washing should be disseminated so that families can practice it and prevent diarrhea in children under five.

  3. Gas turbine cleaning upgrade (compressor wash)

    Energy Technology Data Exchange (ETDEWEB)

    Asplund, P. [Gas Turbine Efficiency, Jarfalla (Sweden)

    1998-12-31

    The influence of gas turbine degradation on operating costs is high. Gas turbine cleaning is one of many actions taken for power recovery and is considered preventive maintenance. It is generally performed in the industrial field and occasionally in the aero sector. To keep pace with gas turbine development, with high blade loads and ever-increasing temperatures, together with emission fees and environmental regulations, more efficient and careful cleaning methods are needed. Following a survey of potentials for cost reduction in gas turbine operation, a new man-hour and water saving cleaning method has been evaluated for a standard process. Compared with traditional cleaning methods, the new method saves water, cost, weight and space owing to a new washing technique. Traditional methods are based on using different nozzles for ON-line and OFF-line cleaning, which raises the demand for complicated systems. In the new method the same nozzle installation and the same liquid flow and pressure are used for both ON-line and OFF-line cleaning. This gives a cost reduction of approx. 20,000 - 30,000 USD per gas turbine depending on installation and size. Evaluation of the new method shows significantly improved ON-line cleaning performance, and thus OFF-line cleaning is required only during scheduled stops. (orig.) 10 refs.

  4. Gas turbine cleaning upgrade (compressor wash)

    Energy Technology Data Exchange (ETDEWEB)

    Asplund, P [Gas Turbine Efficiency, Jarfalla (Sweden)

    1999-12-31

    The influence of gas turbine degradation on operating costs is high. Gas turbine cleaning is one of many actions taken for power recovery and is considered preventive maintenance. It is generally performed in the industrial field and occasionally in the aero sector. To keep pace with gas turbine development, with high blade loads and ever-increasing temperatures, together with emission fees and environmental regulations, more efficient and careful cleaning methods are needed. Following a survey of potentials for cost reduction in gas turbine operation, a new man-hour and water saving cleaning method has been evaluated for a standard process. Compared with traditional cleaning methods, the new method saves water, cost, weight and space owing to a new washing technique. Traditional methods are based on using different nozzles for ON-line and OFF-line cleaning, which raises the demand for complicated systems. In the new method the same nozzle installation and the same liquid flow and pressure are used for both ON-line and OFF-line cleaning. This gives a cost reduction of approx. 20,000 - 30,000 USD per gas turbine depending on installation and size. Evaluation of the new method shows significantly improved ON-line cleaning performance, and thus OFF-line cleaning is required only during scheduled stops. (orig.) 10 refs.

  5. EFRT M-12 Issue Resolution: Solids Washing

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, David L.; Schonewill, Philip P.; Toth, James J.; Huckaby, James L.; Eslinger, Paul W.; Hanson, Brady D.; Kurath, Dean E.; Minette, Michael J.

    2009-08-14

    Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Hanford Tank Waste Treatment and Immobilization Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed, and operated as part of a plan to respond to issue M12, “Undemonstrated Leaching Processes.” The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. Two operating scenarios were evaluated for the ultrafiltration process (UFP) and leaching operations. The first scenario has caustic leaching performed in the UFP-VSL-T01A/B ultrafiltration feed vessels, identified as Integrated Test A. The second scenario has caustic leaching conducted in the UFP-VSL-T02A ultrafiltration feed preparation vessel, identified as Integrated Test B. Washing operations in PEP Integrated Tests A and B were conducted successfully as per the approved run sheets. However, various minor instrumental problems occurred, and some of the process conditions specified in the run sheet were not met during the wash operations, such as filter-loop flow-rate targets not being met. Five analytes were selected based on full solubility and monitored in the post-caustic-leach wash as successful indicators of washing efficiency. These were aluminum, sulfate, nitrate, nitrite, and free hydroxide. Other analytes, including sodium, oxalate, phosphate, and total dissolved solids, showed indications of changing solubility; therefore, they were unsuitable for monitoring washing efficiency. In the post-oxidative-leach wash, two analytes with full solubility were selected as suitable indicators of washing

  6. Hand washing practices amongst medical students in Port Harcourt ...

    African Journals Online (AJOL)

    Alasia Datonye

    Background: Hand washing with soap and water is one of ... Method: This was a descriptive cross sectional survey ... simple questionnaire exploring perceptions, attitudes and ... Many studies have shown that doctors decontaminating their hands between seeing patients ..... countries: a systematic review and meta analysis.

  7. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling.

    Science.gov (United States)

    Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian

    2017-10-16

    Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.
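
    The concordance index used to compare the models has a direct definition: among comparable patient pairs, it is the fraction whose predicted risk ordering matches the observed survival ordering. A plain implementation on toy data (ties ignored for brevity):

```python
import itertools

def concordance_index(time, event, risk):
    """Harrell's C-index for right-censored data (no tie handling)."""
    concordant, comparable = 0, 0
    for i, j in itertools.combinations(range(len(time)), 2):
        if time[i] == time[j]:
            continue
        first = i if time[i] < time[j] else j
        if not event[first]:
            continue                      # earlier time censored: pair not comparable
        comparable += 1
        later = j if first == i else i
        if risk[first] > risk[later]:     # higher risk should fail earlier
            concordant += 1
    return concordant / comparable

time = [5, 10, 12, 20]        # follow-up times (toy data)
event = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
risk = [0.9, 0.6, 0.7, 0.1]   # model-predicted risk scores
print(concordance_index(time, event, risk))  # 0.8
```

    A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the paper's reported values around 0.70-0.71 in context.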

  8. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  9. Classification of follicular lymphoma images: a holistic approach with symbol-based machine learning methods.

    Science.gov (United States)

    Zorman, Milan; Sánchez de la Rosa, José Luis; Dinevski, Dejan

    2011-12-01

Symbol-based machine learning approaches are rarely used for image classification and recognition. In this paper we present such an approach, which we first used on follicular lymphoma images. Lymphoma is a broad term encompassing a variety of cancers of the lymphatic system. Lymphoma is differentiated by the type of cell that multiplies and how the cancer presents itself. It is very important to get an exact diagnosis of lymphoma and to determine the treatments that will be most effective for the patient's condition. Our work focused on the identification of lymphomas by finding follicles in microscopy images provided by the Laboratory of Pathology in the University Hospital of Tenerife, Spain. We divided our work into two stages: in the first stage we performed image pre-processing and feature extraction, and in the second stage we used different symbolic machine learning approaches for pixel classification. Symbolic machine learning approaches are often neglected when looking for image analysis tools: although known for a very appropriate knowledge representation, they are also claimed to lack computational power. The results we obtained are very promising and show that symbolic approaches can be successful in image analysis applications.

  10. High speed internal permanent magnet machine and method of manufacturing the same

    Science.gov (United States)

    Alexander, James Pellegrino [Ballston Lake, NY; EL-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY; Lokhandwalla, Murtuza [Clifton Park, NY; Shah, Manoj Ramprasad [Latham, NY; VanDam, Jeremy Daniel [West Coxsackie, NY

    2011-09-13

    An internal permanent magnet (IPM) machine is provided. The IPM machine includes a stator assembly and a stator core. The stator core also includes multiple stator teeth. The stator assembly is further configured with stator windings to generate a magnetic field when excited with alternating currents and extends along a longitudinal axis with an inner surface defining a cavity. The IPM machine also includes a rotor assembly and a rotor core. The rotor core is disposed inside the cavity and configured to rotate about the longitudinal axis. The rotor assembly further includes a shaft. The shaft further includes multiple protrusions alternately arranged relative to multiple bottom structures provided on the shaft. The rotor assembly also includes multiple stacks of laminations disposed on the protrusions and dovetailed circumferentially around the shaft. The rotor assembly further includes multiple permanent magnets for generating a magnetic field, which interacts with the stator magnetic field to produce torque. The permanent magnets are disposed between the stacks. The rotor assembly also includes multiple bottom wedges disposed on the bottom structures of the shaft and configured to hold the multiple stacks and the multiple permanent magnets.

  11. Optimal Portfolio Choice with Wash Sale Constraints

    DEFF Research Database (Denmark)

    Astrup Jensen, Bjarne; Marekwica, Marcel

    2011-01-01

    We analytically solve the portfolio choice problem in the presence of wash sale constraints in a two-period model with one risky asset. Our results show that wash sale constraints can heavily affect portfolio choice of investors with unrealized losses. The trading behavior of such investors...

  12. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    Science.gov (United States)

    Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima

    2017-01-01

    To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects comprised of 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.

  13. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    Directory of Open Access Journals (Sweden)

    Philippe Burlina

Full Text Available To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects comprised of 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.

  14. Contaminant resorption during soil washing

    International Nuclear Information System (INIS)

    Gombert, D.

    1993-01-01

Evaluating the applicability of soil washing to a specific site requires some basic research into how contaminants are bound. Much can be learned from sequential extraction methodology based on micronutrient bioavailability studies, wherein the soil matrix is chemically dissected to selectively remove particular fixation mechanisms independently. This procedure uses a series of progressively more aggressive solvents to dissolve the principal phases that make up a soil; however, the published studies do not appear to consider the potential for a contaminant released from one type of site to resorb onto another site during an extraction. The underlying physical model assumes no ion exchange or adsorption at sites either previously occupied by other ions or exposed by the dissolution. Therefore, to make engineering use of the sequential extraction data, the release of contamination must be evaluated relative to the effects of resorption. Time-release studies were conducted to determine the optimum extraction duration that maximizes destruction of the target matrix fraction while minimizing contaminant resorption. Tests with and without a potassium brine present to inhibit cesium resorption indicated that extraction efficiency could be enhanced by as much as a factor of ten using the brine.

  15. NON-CONVENTIONAL MACHINING PROCESSES SELECTION USING MULTI-OBJECTIVE OPTIMIZATION ON THE BASIS OF RATIO ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    MILOŠ MADIĆ

    2015-11-01

Full Text Available The role of non-conventional machining processes (NCMPs) in today's manufacturing environment has been well acknowledged. For effective utilization of the capabilities and advantages of different NCMPs, selection of the most appropriate NCMP for a given machining application requires consideration of different conflicting criteria. The right choice of the NCMP is critical to the success and competitiveness of the company. As the NCMP selection problem involves consideration of different conflicting criteria of different relative importance, multi-criteria decision making (MCDM) methods are very useful for systematic selection of the most appropriate NCMP. This paper presents the application of a recent MCDM method, i.e., the multi-objective optimization on the basis of ratio analysis (MOORA) method, to solve the NCMP selection problem, which has been defined considering different performance criteria of the four most widely used NCMPs. In order to determine the relative significance of the considered quality criteria, a pair-wise comparison matrix of the analytic hierarchy process was used. The results obtained using the MOORA method showed perfect correlation with those obtained by the technique for order preference by similarity to ideal solution (TOPSIS) method, which proves the applicability and potential of this MCDM method for solving complex NCMP selection problems.
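The ratio-system core of MOORA is compact enough to sketch. In this hypothetical example (the criteria, weights, and data are illustrative, not taken from the paper), each criterion column is divided by its Euclidean norm, weighted, and summed with a positive sign for benefit criteria and a negative sign for cost criteria:

```python
import numpy as np

def moora_rank(X, weights, beneficial):
    """Rank alternatives with the MOORA ratio system.

    X          : (m alternatives x n criteria) decision matrix
    weights    : relative criteria weights (e.g. from AHP pairwise comparisons)
    beneficial : boolean mask, True for benefit criteria, False for cost criteria
    """
    X = np.asarray(X, dtype=float)
    # vector normalisation: each criterion divided by its Euclidean norm
    N = X / np.sqrt((X ** 2).sum(axis=0))
    W = N * np.asarray(weights)
    sign = np.where(beneficial, 1.0, -1.0)
    scores = (W * sign).sum(axis=1)   # benefit sum minus cost sum
    ranking = np.argsort(-scores)     # best alternative first
    return scores, ranking

# hypothetical NCMP data: [removal rate (benefit), cost (cost), finish (benefit)]
X = [[500, 8, 0.2],
     [120, 3, 0.8],
     [300, 5, 0.5]]
scores, ranking = moora_rank(X, weights=[0.5, 0.3, 0.2],
                             beneficial=[True, False, True])
print(int(ranking[0]))  # index of the preferred process under these assumed weights
```

The higher the assessment score, the better the alternative; the paper additionally derives the weights from an AHP pairwise comparison matrix rather than assuming them as done here.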

  16. A method for simultaneous determination of wear-resistance of structurally identical machine parts with mutual friction

    International Nuclear Information System (INIS)

    Valigura, V.; Volyn'ski, A.

    1979-01-01

Some mechanisms contain parts whose limiting state is determined by the wear of three elements of the mechanism made of the same material (for example, antifriction bearings). The main concept of wear determination for such a mechanism is to conduct measurements by simultaneous application of three different methods, for example: X-ray fluorescence; the tracer technique, with activation by irradiation with different elementary particles (in practice, neutrons and protons); and the method of artificial bases. The paper presents a method developed at the Institute of Working Machines of the Poznan Polytechnic for simultaneous investigation of the wear of the inner and outer bearing rings and the rolling elements of roller bearings. All these bearing elements are made of LX-15 steel. The developed method permits wear to be determined with a sensitivity sufficient for practical purposes.

  17. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    Science.gov (United States)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species with spectroscopic methods. The Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, the PLSDA and LSSVM methods were used for human blood discrimination. Compared with PLSDA, LSSVM enhanced the performance of the identification models. The overall results show that LSSVM is more feasible for identifying human and animal blood species, and sufficiently demonstrate that LSSVM is a reliable and robust method for human blood identification.
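The Kennard-Stone procedure itself is a simple max-min distance algorithm. A sketch (illustrative, not the authors' implementation): start from the two most mutually distant samples, then repeatedly add the sample farthest from the already-selected set:

```python
import numpy as np

def kennard_stone(X, k):
    """Kennard-Stone selection of k calibration samples from rows of X."""
    X = np.asarray(X, dtype=float)
    # pairwise Euclidean distances between all samples
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # start with the two most mutually distant samples
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [r for r in range(len(X)) if r not in selected]
        # each candidate's distance to its nearest already-selected sample ...
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        # ... pick the candidate that maximises that distance
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

# toy 1-D "spectra": points at 0, 1, 4, 9, 10
X = np.array([[0.0], [1.0], [4.0], [9.0], [10.0]])
print(kennard_stone(X, 3))  # -> [0, 4, 2]
```

Selecting calibration samples this way spreads them evenly over the measurement space, which is why KS often yields more robust models than random selection.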

  18. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

Full Text Available Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithm improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.

  19. Use of a Machine-learning Method for Predicting Highly Cited Articles Within General Radiology Journals.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon

    2016-12-01

    This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. 

  20. An improved method of early diagnosis of smoking-induced respiratory changes using machine learning algorithms.

    Science.gov (United States)

    Amaral, Jorge L M; Lopes, Agnaldo J; Jansen, José M; Faria, Alvaro C D; Melo, Pedro L

    2013-12-01

    The purpose of this study was to develop an automatic classifier to increase the accuracy of the forced oscillation technique (FOT) for diagnosing early respiratory abnormalities in smoking patients. The data consisted of FOT parameters obtained from 56 volunteers, 28 healthy and 28 smokers with low tobacco consumption. Many supervised learning techniques were investigated, including logistic linear classifiers, k nearest neighbor (KNN), neural networks and support vector machines (SVM). To evaluate performance, the ROC curve of the most accurate parameter was established as baseline. To determine the best input features and classifier parameters, we used genetic algorithms and a 10-fold cross-validation using the average area under the ROC curve (AUC). In the first experiment, the original FOT parameters were used as input. We observed a significant improvement in accuracy (KNN=0.89 and SVM=0.87) compared with the baseline (0.77). The second experiment performed a feature selection on the original FOT parameters. This selection did not cause any significant improvement in accuracy, but it was useful in identifying more adequate FOT parameters. In the third experiment, we performed a feature selection on the cross products of the FOT parameters. This selection resulted in a further increase in AUC (KNN=SVM=0.91), which allows for high diagnostic accuracy. In conclusion, machine learning classifiers can help identify early smoking-induced respiratory alterations. The use of FOT cross products and the search for the best features and classifier parameters can markedly improve the performance of machine learning classifiers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
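The AUC used above as the selection criterion has a direct probabilistic reading. A minimal sketch (the labels and scores are illustrative, not the study's FOT measurements):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case is scored
    higher than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy classifier output: healthy = 0, smoker = 1
labels = [0, 0, 0, 1, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.3]
print(auc(labels, scores))
```

An AUC of 0.5 means chance-level discrimination and 1.0 perfect separation, the same scale on which the KNN and SVM classifiers above reached 0.89-0.91.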

  1. Using machine-learning methods to analyze economic loss function of quality management processes

    Science.gov (United States)

    Dzedik, V. A.; Lontsikh, P. A.

    2018-05-01

During analysis of quality management systems, their economic component is often analyzed insufficiently. To overcome this issue, it is necessary to move the concept of the economic loss function beyond tolerance-based thinking and address it directly. Input data about economic losses in processes have a complex form; thus, solving this problem with standard tools is complicated. The use of machine learning techniques allows one to obtain precise models of the economic loss function even from the most complex input data. The results of such an analysis contain data about the true efficiency of a process and can be used to make investment decisions.

  2. A review of machine learning methods to predict the solubility of overexpressed recombinant proteins in Escherichia coli.

    Science.gov (United States)

    Habibi, Narjeskhatoon; Mohd Hashim, Siti Z; Norouzi, Alireza; Samian, Mohammed Razip

    2014-05-08

Over the last 20 years in biotechnology, the production of recombinant proteins has been a crucial bioprocess in both the biopharmaceutical and research arenas in terms of human health, scientific impact and economic volume. Although logical strategies of genetic engineering have been established, protein overexpression is still an art. In particular, heterologous expression is often hindered by low production levels and frequent failures for opaque reasons. The problem is accentuated because there is no generic solution available to enhance heterologous overexpression. For a given protein, the extent of its solubility can indicate the quality of its function. Over 30% of synthesized proteins are not soluble. Under given experimental circumstances, including temperature, expression host, etc., protein solubility is a feature ultimately defined by the protein's sequence. Until now, numerous machine learning-based methods have been proposed to predict the solubility of a protein merely from its amino acid sequence. In spite of 20 years of research on the matter, no comprehensive review of the published methods is available. This paper presents an extensive review of the existing models to predict protein solubility in the Escherichia coli recombinant protein overexpression system. The models are investigated and compared regarding the datasets used, features, feature selection methods, machine learning techniques and accuracy of prediction. A discussion of the models is provided at the end. This study aims to investigate the machine learning-based methods for predicting recombinant protein solubility extensively, so as to offer both a general and a detailed understanding for researchers in the field. Some of the models present acceptable prediction performance and convenient user interfaces. These models can be considered valuable tools for predicting recombinant protein overexpression results before performing real laboratory experiments, thus saving labour, time and cost.

  3. Texture Analysis using The Neutron Diffraction Method on The Non Standardized Austenitic Steel Process by Machining,Annealing, and Rolling

    Directory of Open Access Journals (Sweden)

    Tri Hardi Priyanto

    2016-04-01

Full Text Available Austenitic steel is one type of stainless steel which is widely used in industry. Many studies on austenitic stainless steel have been performed to determine its physical properties using various types of equipment and methods. In this study, the neutron diffraction method is used to characterize materials made from minerals extracted from mines in Indonesia. The materials consist of granular ferro-scrap, nickel, ferro-chrome, ferro-manganese, and ferro-silicon, with a little titanium added. Characterization of the materials was carried out for three processes, namely machining, annealing, and rolling. Experimental results show that the machining process generally produces a texture in the 〈100〉 direction. From the machining to the annealing process, the texture index decreases from 3.0164 to 2.434. Texture strength in the machining process (BA2N sample) is 8.13 mrd, and it then decreases to 6.99 in the annealing process (A2DO sample). In the annealing process three texture components appear: the cube-on-edge-type texture {110}〈001〉, the cube-type texture {001}〈100〉, and the brass-type texture {110}〈112〉. The texture is very strong in the orientation {100}〈001〉, while the {011}〈100〉 component is weaker than the {001}, and the texture with orientation {110}〈112〉 is weak. In the annealing process stress release occurred, shown by a more random pole distribution compared with the machining process. In the rolling process a brass-type texture {110}〈112〉 with a spread towards the goss-type texture {110}〈001〉 appeared, and the brass component is markedly reinforced compared to the undeformed state (before rolling). Moreover, the presence of an additional {110} component was observed at the center of the (110) pole figure. The pole density of the three components increases with the increasing degree of thickness reduction. By increasing degrees

  4. Application of generalized Hough transform for detecting sugar beet plant from weed using machine vision method

    Directory of Open Access Journals (Sweden)

    A Bakhshipour Ziaratgahi

    2017-05-01

Full Text Available Introduction Sugar beet (Beta vulgaris L.), the second most important sugar source in the world after sugarcane, is one of the major industrial crops. The presence of weeds in sugar beet fields, especially at early growth stages, results in a substantial decrease in the crop yield. It is very important to efficiently eliminate weeds at early growing stages. The first step of precision weed control is accurate detection of weed locations in the field. This operation can be performed by machine vision techniques. The Hough transform is one of the shape feature extraction methods for object tracking in image processing, which is basically used to identify lines or other geometrical shapes in an image. The generalized Hough transform (GHT) is a modified version of the Hough transform used not only for geometrical forms, but also for detecting any arbitrary shape. This method is based on a pattern matching principle that uses a set of vectors from feature points (usually object edge points) to a reference point to construct a pattern. By comparing this pattern with a set pattern, the desired shape is detected. The aim of this study was to identify the sugar beet plant among some common weeds in a field using the GHT. Materials and Methods Images required for this study were taken at the four-leaf stage of sugar beet as the beginning of the critical period of weed control. A shelter was used to avoid direct sunlight and prevent leaf shadows on each other. The obtained images were then introduced to the Image Processing Toolbox of MATLAB programming software for further processing. Green and Red color components were extracted from primary RGB images. In the first step, binary images were obtained by applying the optimal threshold on the G-R images. A comprehensive study of several sugar beet images revealed that there is a unique feature in sugar beet leaves which makes them differentiable from the weeds. The feature observed in all sugar beet plants at the four
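The GHT voting scheme described above can be sketched compactly. The shapes below are synthetic binary masks, not the paper's field images; the R-table maps a quantised gradient direction to the displacement vectors from template edge points to a reference point (here the centroid), and each image edge point then votes for candidate reference locations:

```python
import numpy as np

def edge_points_and_dirs(img):
    """Edge pixels of a binary image with a crudely quantised gradient
    direction (0-3), computed from finite differences."""
    gy, gx = np.gradient(img.astype(float))
    mask = (gx**2 + gy**2) > 0
    ang = np.arctan2(gy, gx)
    dirs = ((ang + np.pi) / (np.pi / 2)).astype(int) % 4
    ys, xs = np.nonzero(mask)
    return list(zip(ys, xs, dirs[ys, xs]))

def build_r_table(template):
    """R-table: gradient direction -> displacements to the reference point."""
    cy, cx = [c.mean() for c in np.nonzero(template)]  # reference = centroid
    table = {}
    for y, x, d in edge_points_and_dirs(template):
        table.setdefault(d, []).append((cy - y, cx - x))
    return table

def ght_vote(image, table):
    """Accumulate votes and return the peak (best reference location)."""
    acc = np.zeros(image.shape)
    for y, x, d in edge_points_and_dirs(image):
        for dy, dx in table.get(d, []):
            ry, rx = int(round(float(y + dy))), int(round(float(x + dx)))
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)

# template: a small square; image: the same square shifted
template = np.zeros((20, 20)); template[5:10, 5:10] = 1
image = np.zeros((40, 40)); image[20:25, 12:17] = 1
print(ght_vote(image, build_r_table(template)))
```

The accumulator peak recovers the reference point of the shifted template, which is the pattern-matching step the paper uses to locate sugar beet leaves among weeds; a full implementation would also search over scale and rotation.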

  5. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    Science.gov (United States)

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.

  6. 3D Magnetic field modeling of a new superconducting synchronous machine using reluctance network method

    Science.gov (United States)

    Kelouaz, Moussa; Ouazir, Youcef; Hadjout, Larbi; Mezani, Smail; Lubin, Thiery; Berger, Kévin; Lévêque, Jean

    2018-05-01

    In this paper a new superconducting inductor topology intended for synchronous machine is presented. The studied machine has a standard 3-phase armature and a new kind of 2-poles inductor (claw-pole structure) excited by two coaxial superconducting coils. The air-gap spatial variation of the radial flux density is obtained by inserting a superconducting bulk, which deviates the magnetic field due to the coils. The complex geometry of this inductor usually needs 3D finite elements (FEM) for its analysis. However, to avoid a long computational time inherent to 3D FEM, we propose in this work an alternative modeling, which uses a 3D meshed reluctance network. The results obtained with the developed model are compared to 3D FEM computations as well as to measurements carried out on a laboratory prototype. Finally, a 3D FEM study of the shielding properties of the superconducting screen demonstrates the suitability of using a diamagnetic-like model of the superconducting screen.

  7. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    Science.gov (United States)

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including the back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are applied in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single-band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
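Of the four algorithms, the GRNN is simple enough to sketch in a few lines: it is essentially Nadaraya-Watson kernel regression, predicting a Gaussian-weighted average of the training targets. The two-band data below are synthetic stand-ins, not MODIS reflectances:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """General regression neural network (Nadaraya-Watson form):
    each prediction is a Gaussian-kernel weighted average of the
    training targets, with bandwidth sigma as the only free parameter."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# hypothetical two-band reflectance -> "LAI" mapping
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (200, 2))                 # e.g. red and NIR bands
y_train = 3.0 * X_train[:, 1] - 2.0 * X_train[:, 0]   # synthetic target
X_test = np.array([[0.2, 0.8]])
print(grnn_predict(X_train, y_train, X_test, sigma=0.1))
```

Because training amounts to storing the samples, a GRNN has no iterative fitting at all, which is consistent with the low sensitivity to training sample size reported above.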

  8. A Hybrid Machine Learning Method for Fusing fMRI and Genetic Data: Combining both Improves Classification of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Honghui Yang

    2010-10-01

    Full Text Available We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls, using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated by a fully cross-validated leave-one-out procedure using 40 subjects (20 patients and 20 controls). The classification accuracy was: 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than by using either alone, indicating that genetic and brain-function data represent different but partially complementary aspects of schizophrenia etiopathology. This study suggests an effective way to reassess the biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.
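    The final stage above combines three classifiers by majority voting. A minimal sketch of that combination rule; the 0/1 labels are hypothetical classifier outputs, not the study's data:

```python
def majority_vote(predictions):
    """Combine the 0/1 outputs of several classifiers by majority
    voting, as in the Combined SNP-fMRI stage: the class with more
    than half the votes wins, otherwise default to 0 (control)."""
    votes = sum(predictions)
    return 1 if votes > len(predictions) / 2 else 0

# Three hypothetical classifier outputs for one subject:
assert majority_vote([1, 0, 1]) == 1  # two of three vote "patient"
assert majority_vote([0, 0, 1]) == 0  # only one votes "patient"
```

    Majority voting only improves on the individual classifiers when their errors are at least partly uncorrelated, which is the point of fusing genetic and fMRI data here.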

  9. Diagnosis of Dementia by Machine learning methods in Epidemiological studies: a pilot exploratory study from south India.

    Science.gov (United States)

    Bhagyashree, Sheshadri Iyengar Raghavan; Nagaraj, Kiran; Prince, Martin; Fall, Caroline H D; Krishna, Murali

    2018-01-01

    There are limited data on the use of artificial intelligence methods for the diagnosis of dementia in epidemiological studies in low- and middle-income country (LMIC) settings. A culture- and education-fair battery of cognitive tests was developed and validated for population-based studies in low- and middle-income countries, including India, by the 10/66 Dementia Research Group. We explored machine learning methods based on the 10/66 battery of cognitive tests for the diagnosis of dementia in a birth cohort study in South India. The data sets for 466 men and women were obtained from the ongoing Mysore Studies of Natal effect of Health and Ageing (MYNAH) in south India. The data sets included demographics, performance on the 10/66 cognitive function tests, the 10/66 diagnosis of mental disorders, and population-based normative data for the 10/66 battery of cognitive function tests. Diagnosis of dementia from the rule-based approach was compared against the 10/66 diagnosis of dementia. We applied machine learning techniques to identify the minimal number of the 10/66 cognitive function tests required for diagnosing dementia and derived an algorithm to improve the accuracy of dementia diagnosis. Of 466 subjects, 27 had a 10/66 diagnosis of dementia, 19 of whom were correctly identified as having dementia by JRip classification with 100% accuracy. This pilot exploratory study indicates that machine learning methods can help identify community-dwelling older adults with a 10/66 criterion diagnosis of dementia with good accuracy in an LMIC setting such as India. This should reduce the duration of the diagnostic assessment and make the process easier and quicker for clinicians and patients, and it will be useful for 'case' ascertainment in population-based epidemiological studies.
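    JRip (a RIPPER implementation) produces an ordered list of threshold rules. A sketch of what such a rule list looks like in code; the test names and cut-offs below are invented for illustration and are not the rules learned from the MYNAH data:

```python
def jrip_like_rule(scores):
    """Illustrative ordered rule list in the spirit of a JRip model:
    fire the first matching rule, fall through to the default class.
    Thresholds and test names are hypothetical, not the study's."""
    if scores["verbal_fluency"] < 15 and scores["word_recall"] < 4:
        return "dementia"       # rule 1: two low core scores
    if scores["cogscore"] < 25:
        return "dementia"       # rule 2: low overall cognitive score
    return "no dementia"        # default class

assert jrip_like_rule(
    {"verbal_fluency": 10, "word_recall": 2, "cogscore": 30}) == "dementia"
assert jrip_like_rule(
    {"verbal_fluency": 20, "word_recall": 6, "cogscore": 30}) == "no dementia"
```

    The appeal of such models in field epidemiology is exactly what the abstract notes: a handful of readable rules over a minimal set of tests shortens the assessment.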

  10. Oral cancer prognosis based on clinicopathologic and genomic markers using a hybrid of feature selection and machine learning methods

    Science.gov (United States)

    2013-01-01

    Background Machine learning techniques are becoming useful as an alternative approach to conventional medical diagnosis or prognosis, as they handle noisy and incomplete data well, and significant results can be attained despite a small sample size. Traditionally, clinicians make prognostic decisions based on clinicopathologic markers. However, it is not easy for even the most skilful clinician to arrive at an accurate prognosis using these markers alone. Thus, there is a need to use genomic markers to improve the accuracy of prognosis. The main aim of this research is to apply a hybrid of feature selection and machine learning methods to oral cancer prognosis based on the correlation of clinicopathologic and genomic markers. Results In the first stage of this research, five feature selection methods were proposed and evaluated on the oral cancer prognosis dataset. In the second stage, models with the features selected by each feature selection method were tested on the proposed classifiers. Four types of classifiers were chosen, namely ANFIS, artificial neural networks, support vector machines and logistic regression. k-fold cross-validation was implemented on all classifiers because of the small sample size. The hybrid model of ReliefF-GA-ANFIS with the 3 input features drink, invasion and p63 achieved the best accuracy (accuracy = 93.81%; AUC = 0.90) for oral cancer prognosis. Conclusions The results revealed that prognosis is superior with the presence of both clinicopathologic and genomic markers. The selected features can be investigated further to validate their potential as a significant prognostic signature in oral cancer studies. PMID:23725313
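    The k-fold cross-validation used above to cope with the small sample can be sketched as follows; a nearest-centroid classifier stands in for ANFIS/SVM/ANN, and the two-class Gaussian data is synthetic:

```python
import numpy as np

def k_fold_accuracy(X, y, k=5, seed=0):
    """Plain k-fold cross-validation: shuffle, split into k folds,
    train on k-1 folds and score on the held-out fold, then average.
    A nearest-centroid classifier stands in for the real models."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        c0 = X[train][y[train] == 0].mean(axis=0)  # class-0 centroid
        c1 = X[train][y[train] == 1].mean(axis=0)  # class-1 centroid
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        pred = (d1 < d0).astype(int)               # nearer centroid wins
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# Synthetic small-sample dataset: two Gaussian classes, 30 cases each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(2, 1, (30, 3))])
y = np.array([0] * 30 + [1] * 30)
acc = k_fold_accuracy(X, y)
```

    With only 60 samples, a single train/test split would make the reported accuracy highly dependent on the split; averaging over k held-out folds is the standard remedy the paper uses.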

  11. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    Science.gov (United States)

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis kernels, random forests and rotation forests, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The maximum number of features (66%) is reduced by GA and BPSO for the EMBD and BD TFDs, respectively. The combination of EMBD and a polynomial-kernel SVM is found to be the most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of EMBD and a polynomial-kernel SVM could be used to

  12. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological process forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. 12 simulation experiments are performed, each of them using 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance on the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.
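    The experimental setup above (simulate a series, fit on the first 300 points, forecast the last 10 recursively) can be sketched with the simplest member of the ARMA family, an AR(1) process, and a least-squares AR(1) forecaster; the coefficient and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series of 310 points: x[t] = phi*x[t-1] + noise
phi = 0.7
x = np.zeros(310)
for t in range(1, 310):
    x[t] = phi * x[t - 1] + rng.normal()

fit, test = x[:300], x[300:]  # 300-point fitting set, 10-step horizon

# Estimate the AR(1) coefficient by least squares on the fitting set
phi_hat = np.dot(fit[:-1], fit[1:]) / np.dot(fit[:-1], fit[:-1])

# Recursive multi-step forecast: each step feeds on the previous forecast
fc = []
last = fit[-1]
for _ in range(10):
    last = phi_hat * last
    fc.append(last)
fc = np.array(fc)

rmse = np.sqrt(np.mean((fc - test) ** 2))  # one of many possible metrics
```

    The recursive forecast decays toward the process mean, which illustrates why accuracy degrades with lead time and why the study scores each method with many metrics rather than one.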

  13. Late washing filter cleaning cycle demonstration

    International Nuclear Information System (INIS)

    Meyer, M.L.; McCabe, D.J.

    1992-01-01

    The DWPF Late Washing Facility will filter cesium and potassium tetraphenyl borate (TPB) solids using a Mott sintered metal filter, identical to the filter now used in the In-tank Precipitation Facility. The purpose of the late wash step is primarily to remove the nitrite salts from the slurry prior to delivery to DWPF. Periodic chemical cleaning of the filter will be required, presumably after each batch although the actual required frequency could not be determined on the lab-scale. Minimization of chemical cleaning solution volumes is key to maximizing the attainment of the Late Wash facility. This report summarizes work completed in experiments designed to identify minimum cleaning solution requirements

  14. Cooling rate and microstructure of surface layers of 5KhNM steel, machined by electroerosion method

    International Nuclear Information System (INIS)

    Foteev, N.K.; Ploshkin, V.V.; Lyakishev, V.A.; Shirokov, S.V.

    1982-01-01

    The cooling rate and microstructure of surface layers of steel 5KhNM machined by the electroerosion method have been studied. It is shown that the difference in heating rate of the surface layers under electric discharge over the depth of the 5KhNM steel samples results in intensive refinement of the microstructure. In the surface layer, residual austenite is present alongside martensite, and its lattice period increases with pulse duration; a carbide phase of complex composition appears, and concentrational heterogeneity in alloying elements (except carbon) is absent.

  16. A heuristic method for simulating open-data of arbitrary complexity that can be used to compare and evaluate machine learning methods.

    Science.gov (United States)

    Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S

    2018-01-01

    A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some instances taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient, because it is not possible to know what the ground truth is in real data. It must be accompanied by simulated data in which the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data specifically allowing discrimination between different machine learning methods.
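    The core idea of knowing the ground truth by construction can be sketched far more simply than HIBACHI's hierarchical interaction models: plant a known effect in one feature and leave the rest as pure noise. The sizes and effect strength below are arbitrary:

```python
import numpy as np

def simulate_dataset(n=500, signal=1.5, noise=1.0, seed=0):
    """Simulate a two-class dataset with ground truth known by
    construction: feature 0 carries the class signal, features
    1-9 are pure noise. Methods can then be scored on whether
    they recover feature 0 (a deliberately simple stand-in for
    HIBACHI's hierarchical interaction models)."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)            # balanced binary labels
    X = rng.normal(0, noise, (n, 10))    # ten noisy features
    X[:, 0] += signal * y                # inject the known effect
    return X, y

X, y = simulate_dataset()
# Sanity check: only the informative feature separates the classes
gap = X[y == 1, 0].mean() - X[y == 0, 0].mean()
```

    Because the signal-to-noise ratio is a parameter, the same generator can produce easy and hard benchmark datasets, which is what lets simulated data discriminate between learning methods where real data cannot.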

  17. In silico prediction of Tetrahymena pyriformis toxicity for diverse industrial chemicals with substructure pattern recognition and machine learning methods.

    Science.gov (United States)

    Cheng, Feixiong; Shen, Jie; Yu, Yue; Li, Weihua; Liu, Guixia; Lee, Philip W; Tang, Yun

    2011-03-01

    There is an increasing need for rapid safety assessment of chemicals by both industry and regulatory agencies throughout the world. In silico techniques are practical alternatives in environmental hazard assessment, especially for addressing the persistence, bioaccumulation and toxicity potentials of organic chemicals. Tetrahymena pyriformis toxicity is often used as a toxicity endpoint. In this study, 1571 diverse unique chemicals were collected from the literature, forming the largest diverse data set for T. pyriformis toxicity. Classification models of T. pyriformis toxicity were developed with substructure pattern recognition and different machine learning methods, including support vector machine (SVM), C4.5 decision tree, k-nearest neighbors and random forest. The results of a 5-fold cross-validation showed that the SVM method performed better than the other algorithms. The overall predictive accuracy of the SVM classification model with a radial basis function kernel was 92.2% for the 5-fold cross-validation and 92.6% for the external validation set, respectively. Furthermore, several representative substructure patterns characterizing T. pyriformis toxicity were also identified via information gain analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
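    The single-iteration weight solve that makes the ELM fast can be sketched in a few lines: the input-to-hidden weights are random and fixed, and the hidden-to-output weights come from one pseudo-inverse rather than backpropagation. The toy regression target below is illustrative, not the paper's lane-detection setup:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random, fixed input-to-hidden
    weights; hidden-to-output weights solved in a single step by
    least squares (pseudo-inverse), with no iterative training."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden activations
    beta = np.linalg.pinv(H) @ y                 # one-shot output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = x1 - x2 from random samples
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] - X[:, 1]
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

    Because only `beta` is fitted, training cost is one linear solve; the paper's contribution is stacking this idea into a CNN framework rather than a single hidden layer as here.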

  19. Comparative analysis of expert and machine-learning methods for classification of body cavity effusions in companion animals.

    Science.gov (United States)

    Hotz, Christine S; Templeton, Steven J; Christopher, Mary M

    2005-03-01

    A rule-based expert system written in the CLIPS programming language was created to classify body cavity effusions as transudates, modified transudates, exudates, chylous effusions, and hemorrhagic effusions. The diagnostic accuracy of the rule-based system was compared with that of 2 machine-learning methods: Rosetta, a rough-set algorithm, and RIPPER, a rule-induction method. Results of 508 body cavity fluid analyses (canine, feline, equine) obtained from the University of California-Davis Veterinary Medical Teaching Hospital computerized patient database were used to test CLIPS and to train and test RIPPER and Rosetta. The CLIPS system, using 17 rules, achieved an accuracy of 93.5% compared with pathologist consensus diagnoses. Rosetta accurately classified 91% of effusions using 5,479 rules. RIPPER achieved the greatest accuracy (95.5%) using only 10 rules. When the original rules of the CLIPS application were replaced with those of RIPPER, the accuracy rates were identical. These results suggest that both rule-based expert systems and machine-learning methods hold promise for the preliminary classification of body fluids in the clinical laboratory.
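    A rule-based effusion classifier of the kind described can be sketched as a chain of threshold tests. The cut-offs below are approximate textbook values for illustration only, not the study's 17 CLIPS rules, and real classification uses more analytes than these two:

```python
def classify_effusion(protein_g_dl, cells_per_ul):
    """Sketch of a CLIPS-style rule chain using two common analytes,
    total protein and nucleated cell count. Thresholds are rough
    textbook illustrations, not the published rule set."""
    if protein_g_dl < 2.5 and cells_per_ul < 1500:
        return "transudate"            # low protein, low cellularity
    if protein_g_dl > 3.0 and cells_per_ul > 7000:
        return "exudate"               # high protein, high cellularity
    return "modified transudate"       # intermediate findings

assert classify_effusion(1.8, 800) == "transudate"
assert classify_effusion(4.2, 12000) == "exudate"
assert classify_effusion(3.0, 3000) == "modified transudate"
```

    RIPPER's advantage in the study is that it induces a comparably small rule list like this directly from the 508 cases instead of encoding expert knowledge by hand.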

  20. Experimental investigation of surface quality in ultrasonic machining of WC-Co composites through Taguchi method

    Directory of Open Access Journals (Sweden)

    B. S. Pabla

    2016-08-01

    Full Text Available In manufacturing industries, demand for WC-Co composites is flourishing because of the distinctive characteristics they offer, such as toughness combined with hardness, good dimensional stability, and high mechanical strength. However, the difficulty of machining this material restricts its application and competitiveness. The current article evaluates the effect of process conditions (power rating, cobalt content, tool material, part thickness, tool geometry, and abrasive particle size) on surface roughness in ultrasonic drilling of WC-Co composites. Results showed that abrasive grit size is the most influential factor. From the microstructure analysis, the mode of material deformation was observed, and work material properties, grit size, and power rating were revealed as the most crucial parameters for the deformation mode.
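    In a Taguchi analysis of surface roughness, factor levels are ranked by the smaller-the-better signal-to-noise ratio. A sketch with hypothetical roughness readings (the values below are invented, not the experiment's data):

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio, the usual
    criterion for surface roughness: S/N = -10*log10(mean(y^2)).
    Higher S/N means lower, more consistent roughness."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra readings (um) for two levels of the grit-size factor
sn_fine = sn_smaller_better([0.8, 0.9, 0.85])     # finer abrasive grit
sn_coarse = sn_smaller_better([1.6, 1.8, 1.7])    # coarser grit
# The level with the higher S/N ratio is preferred for that factor
```

    Repeating this per factor and level over an orthogonal array of runs is how a Taguchi design identifies the most influential factor, here reported to be abrasive grit size.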