WorldWideScience

Sample records for model near-term machines

  1. Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate

    Science.gov (United States)

    Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.

    2017-12-01

    Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year was performed, enabling an analysis of metric sensitivities to the various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics showed only minor differences between the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°), while model, meteorology, and emissions inventory each played a larger role in the variance. Surface metrics related to O3 were consistently biased high, though to varying degrees, demonstrating the need to evaluate a particular modeling framework before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean with robust results is a valuable tool for predicting PM2.5-related impacts. Often, the configuration that best captured the change of a metric over time differed from the configuration that best captured the magnitude of the same metric, demonstrating the challenge of skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.
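
    As a rough illustration of the kind of evaluation statistics such model-observation comparisons rely on, the sketch below computes mean bias, normalized mean bias and RMSE for a modeled metric against observations; all values and variable names are invented placeholders, not data from the study.

    ```python
    import numpy as np

    # Hypothetical paired values of a surface-O3 metric (ppb): observed vs. modeled.
    obs = np.array([48.0, 52.5, 61.2, 44.3, 57.8])
    mod = np.array([55.1, 60.2, 66.8, 50.9, 63.4])

    mean_bias = np.mean(mod - obs)                 # MB: positive = biased high
    nmb = 100.0 * np.sum(mod - obs) / np.sum(obs)  # normalized mean bias (%)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))      # root-mean-square error

    print(f"MB = {mean_bias:.2f} ppb, NMB = {nmb:.1f}%, RMSE = {rmse:.2f} ppb")
    ```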

  2. Modeling the Near-Term Risk of Climate Uncertainty: Interdependencies among the U.S. States

    Science.gov (United States)

    Lowry, T. S.; Backus, G.; Warren, D.

    2010-12-01

    Decisions made to address climate change must start with an understanding of the risk that an uncertain future poses to human systems, which in turn means understanding both the consequences and the probability of a climate-induced impact occurring. In other words, addressing climate change is an exercise in risk-informed policy making, which implies that there is no single correct answer, or even a way to be certain about a single answer; the uncertainty in future climate conditions will always be present and must be taken as a working condition for decision making. In order to better understand the implications of uncertainty for risk and to provide a near-term rationale for policy interventions, this study estimates the impacts of responses to climate change on U.S. state- and national-level economic activity by employing a risk-assessment methodology for evaluating uncertain future climatic conditions. Using the results from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for climate uncertainty, changes in hydrology over the next 40 years were mapped and then modeled to determine the physical consequences for economic activity and to perform a detailed 70-industry analysis of the economic impacts among the interacting lower-48 states. The analysis determines industry-level effects, employment impacts at the state level, interstate population migration, consequences to personal income, and ramifications for the U.S. trade balance. The conclusions show that the average risk of damage to the U.S. economy from climate change is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs. Further analysis shows that an increase in uncertainty raises this risk. This paper will present the methodology behind the approach, a summary of the underlying models, and the path forward for improving the approach.

  3. Revision of the inventory and recycling scenario of active material in near-term PPCS models

    International Nuclear Information System (INIS)

    Pampin, R.; Massaut, V.; Taylor, N.P.

    2007-01-01

    A sound approach to the recycling of fusion-irradiated material is being developed. Study of industry experience, and consideration of realistic processing routes and techniques, provide a more sensible estimation of recycling feasibility than earlier studies based on purely radiological criteria. Under this approach, the analysis of active material in two models of the power plant conceptual study (PPCS) has been revised in more detail, accounting for the latest design features, nuclear data and international guidelines. A careful inventory of the materials has been performed, and an estimation made of the radiological characteristics of all PPCS tokamak components, for the first time studying individual constituents and materials. Time scales for the radioactivity to decay to predetermined levels have been evaluated; these levels represent the spectrum of technological difficulties posed by the nature of the irradiated material. Three main mechanisms for the optimization of the materials management strategy have been identified during the assessments: segregation of components into individual materials, in situ refurbishment, and stringent impurity control.

  4. Establishing a Near Term Lunar Farside Gravity Model via Inexpensive Add-on Navigation Payload

    Science.gov (United States)

    Folta, David; Mesarch, Michael; Miller, Ronald; Bell, David; Jedrey, Tom; Butman, Stanley; Asmar, Sami

    2007-01-01

    The Space Communications and Navigation, Constellation Integration Project (SCIP) is tasked with defining, developing, deploying and operating an evolving multi-decade communications and navigation (C/N) infrastructure, including services and subsystems, that will support both robotic and human exploration activities at the Moon. This paper discusses an early farside gravitational mapping service and related telecom subsystem that uses an existing spacecraft (WIND) and the Lunar Reconnaissance Orbiter (LRO) to collect data that would address several needs of the SCIP. An important aspect of such an endeavor is to vastly improve the current lunar gravity model while demonstrating the navigation and stationkeeping of a relay spacecraft. We describe a gravity data acquisition activity and the trajectory design of the relay orbit in an Earth-Moon L2 co-linear libration orbit. Several phases of the transfer from the Earth-Sun region to the Earth-Moon region are discussed, along with transfers within the Earth-Moon system. We describe a proposed, but not integrated, add-on to LRO, which was scheduled to be launched in October 2008. LRO provided a real host spacecraft against which we designed the science payload and mission activities. From a strategic standpoint, LRO was a very exciting first flight opportunity for gravity science data collection. Gravity science data collection requires the use of one or more low-altitude lunar polar orbiters. Variations in the lunar gravity field will cause measurable variations in the orbit of a low-altitude lunar orbiter. The primary means of capturing these induced motions is to monitor the Doppler shift of a radio signal to or from the low-altitude spacecraft, given that the signal is referenced to a stable frequency reference. For the lunar farside, a secondary orbiting radio signal platform is required. We provide an in-depth look at link margins, trajectory design, and hardware implications. Our approach posed minimum risk to a host mission while…

  5. Student Modeling and Machine Learning

    OpenAIRE

    Sison , Raymund; Shimura , Masamichi

    1998-01-01

    After identifying essential student modeling issues and machine learning approaches, this paper examines how machine learning techniques have been used to automate the construction of student models as well as the background knowledge necessary for student modeling. In the process, the paper sheds light on the difficulty, suitability and potential of using machine learning for student modeling processes, and, to a lesser extent, the potential of using student modeling techniques in machine learning.

  6. Formal modeling of virtual machines

    Science.gov (United States)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  7. Long-term functional outcomes and correlation with regional brain connectivity by MRI diffusion tractography metrics in a near-term rabbit model of intrauterine growth restriction.

    Directory of Open Access Journals (Sweden)

    Miriam Illa

    BACKGROUND: Intrauterine growth restriction (IUGR) affects 5-10% of all newborns and is associated with increased risk of memory, attention and anxiety problems in late childhood and adolescence. The neurostructural correlates of long-term abnormal neurodevelopment associated with IUGR are unknown. Thus, the aim of this study was to provide a comprehensive description of the long-term functional and neurostructural correlates of abnormal neurodevelopment associated with IUGR in a near-term rabbit model (delivered at 30 days of gestation) and to evaluate the development of quantitative imaging biomarkers of abnormal neurodevelopment based on diffusion magnetic resonance imaging (MRI) parameters and connectivity. METHODOLOGY: At +70 postnatal days, 10 cases and 11 controls were functionally evaluated with the Open Field Behavioral Test, which evaluates anxiety and attention, and the Object Recognition Task, which evaluates short-term memory and attention. Subsequently, brains were collected, fixed and a high-resolution MRI was performed. Differences in diffusion parameters were analyzed by means of voxel-based and connectivity analyses measuring the number of fibers reconstructed within anxiety, attention and short-term memory networks over the total fibers. PRINCIPAL FINDINGS: The results of the neurobehavioral and cognitive assessment showed a significantly higher degree of anxiety, attention and memory problems in cases compared to controls in most of the variables explored. Voxel-based analysis (VBA) revealed significant differences between groups in multiple brain regions, mainly in grey matter structures, whereas connectivity analysis demonstrated lower ratios of fibers within the networks in cases, reaching statistical significance only in the left hemisphere for both networks. Finally, VBA and connectivity results were also correlated with functional outcome. CONCLUSIONS: The rabbit model used reproduced long-term functional impairments and their neurostructural correlates of abnormal neurodevelopment associated with IUGR.

  8. Long-term functional outcomes and correlation with regional brain connectivity by MRI diffusion tractography metrics in a near-term rabbit model of intrauterine growth restriction.

    Science.gov (United States)

    Illa, Miriam; Eixarch, Elisenda; Batalle, Dafnis; Arbat-Plana, Ariadna; Muñoz-Moreno, Emma; Figueras, Francesc; Gratacos, Eduard

    2013-01-01

    Intrauterine growth restriction (IUGR) affects 5-10% of all newborns and is associated with increased risk of memory, attention and anxiety problems in late childhood and adolescence. The neurostructural correlates of long-term abnormal neurodevelopment associated with IUGR are unknown. Thus, the aim of this study was to provide a comprehensive description of the long-term functional and neurostructural correlates of abnormal neurodevelopment associated with IUGR in a near-term rabbit model (delivered at 30 days of gestation) and to evaluate the development of quantitative imaging biomarkers of abnormal neurodevelopment based on diffusion magnetic resonance imaging (MRI) parameters and connectivity. At +70 postnatal days, 10 cases and 11 controls were functionally evaluated with the Open Field Behavioral Test, which evaluates anxiety and attention, and the Object Recognition Task, which evaluates short-term memory and attention. Subsequently, brains were collected, fixed and a high-resolution MRI was performed. Differences in diffusion parameters were analyzed by means of voxel-based and connectivity analyses measuring the number of fibers reconstructed within anxiety, attention and short-term memory networks over the total fibers. The results of the neurobehavioral and cognitive assessment showed a significantly higher degree of anxiety, attention and memory problems in cases compared to controls in most of the variables explored. Voxel-based analysis (VBA) revealed significant differences between groups in multiple brain regions, mainly in grey matter structures, whereas connectivity analysis demonstrated lower ratios of fibers within the networks in cases, reaching statistical significance only in the left hemisphere for both networks. Finally, VBA and connectivity results were also correlated with functional outcome. The rabbit model used reproduced long-term functional impairments and their neurostructural correlates of abnormal neurodevelopment associated with IUGR.

  9. Long-Term Functional Outcomes and Correlation with Regional Brain Connectivity by MRI Diffusion Tractography Metrics in a Near-Term Rabbit Model of Intrauterine Growth Restriction

    Science.gov (United States)

    Illa, Miriam; Eixarch, Elisenda; Batalle, Dafnis; Arbat-Plana, Ariadna; Muñoz-Moreno, Emma; Figueras, Francesc; Gratacos, Eduard

    2013-01-01

    Background Intrauterine growth restriction (IUGR) affects 5–10% of all newborns and is associated with increased risk of memory, attention and anxiety problems in late childhood and adolescence. The neurostructural correlates of long-term abnormal neurodevelopment associated with IUGR are unknown. Thus, the aim of this study was to provide a comprehensive description of the long-term functional and neurostructural correlates of abnormal neurodevelopment associated with IUGR in a near-term rabbit model (delivered at 30 days of gestation) and to evaluate the development of quantitative imaging biomarkers of abnormal neurodevelopment based on diffusion magnetic resonance imaging (MRI) parameters and connectivity. Methodology At +70 postnatal days, 10 cases and 11 controls were functionally evaluated with the Open Field Behavioral Test, which evaluates anxiety and attention, and the Object Recognition Task, which evaluates short-term memory and attention. Subsequently, brains were collected, fixed and a high-resolution MRI was performed. Differences in diffusion parameters were analyzed by means of voxel-based and connectivity analyses measuring the number of fibers reconstructed within anxiety, attention and short-term memory networks over the total fibers. Principal Findings The results of the neurobehavioral and cognitive assessment showed a significantly higher degree of anxiety, attention and memory problems in cases compared to controls in most of the variables explored. Voxel-based analysis (VBA) revealed significant differences between groups in multiple brain regions, mainly in grey matter structures, whereas connectivity analysis demonstrated lower ratios of fibers within the networks in cases, reaching statistical significance only in the left hemisphere for both networks. Finally, VBA and connectivity results were also correlated with functional outcome. Conclusions The rabbit model used reproduced long-term functional impairments and their neurostructural correlates of abnormal neurodevelopment associated with IUGR.

  10. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
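
    Infer.NET itself is a C# framework, so the following is only a language-neutral sketch of the model-based idea, not Infer.NET's API: the application-specific part is a small generative model (here a coin-bias model with made-up observations), while the inference engine, a generic Metropolis sampler, is reusable across models.

    ```python
    import math
    import random

    random.seed(0)
    data = [1, 0, 1, 1, 0, 1, 1, 1]  # hypothetical Bernoulli observations

    def log_posterior(p):
        # Uniform prior on (0, 1) plus Bernoulli likelihood.
        if not 0.0 < p < 1.0:
            return float("-inf")
        k = sum(data)
        return k * math.log(p) + (len(data) - k) * math.log(1.0 - p)

    # Generic Metropolis inference: the same loop serves any log_posterior.
    p, samples = 0.5, []
    for _ in range(20000):
        prop = p + random.gauss(0.0, 0.1)
        accept = math.exp(min(0.0, log_posterior(prop) - log_posterior(p)))
        if random.random() < accept:
            p = prop
        samples.append(p)

    burned = samples[5000:]
    print("posterior mean of coin bias ~", sum(burned) / len(burned))
    ```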

  11. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    Science.gov (United States)

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
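
    A minimal sketch of the forecast-analysis cycle described above, using a one-dimensional Kalman-style update on an invented random-walk state; the "ecosystem" variable and every parameter value here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    truth, proc_sd, obs_sd = 10.0, 0.5, 1.0  # toy latent state and noise levels
    mean, var = 8.0, 4.0                     # initial belief (prior)

    for t in range(10):
        truth += rng.normal(0.0, proc_sd)    # nature evolves
        y = truth + rng.normal(0.0, obs_sd)  # a new observation arrives
        var += proc_sd ** 2                  # forecast step: inflate uncertainty
        gain = var / (var + obs_sd ** 2)     # analysis step: Bayesian update
        mean += gain * (y - mean)
        var *= 1.0 - gain
        print(f"t={t}: forecast mean={mean:.2f}, sd={var ** 0.5:.2f}")
    ```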

  12. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a
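
    The truncated sentence refers, in the Boltzmann machine literature of Aarts and Korst, to the consensus function, a weighted sum of products of the states of connected units. The sketch below anneals a small randomly weighted network toward a high-consensus state; the network size, weights and cooling schedule are arbitrary illustrative choices.

    ```python
    import math
    import random

    random.seed(1)
    n = 8
    # Symmetric random weights between binary (0/1) units.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = random.uniform(-1.0, 1.0)

    s = [random.randint(0, 1) for _ in range(n)]

    def gain(i):
        # Change in consensus if unit i flips its state.
        delta = sum(w[i][j] * s[j] for j in range(n) if j != i)
        return delta if s[i] == 0 else -delta

    T = 2.0
    while T > 0.01:
        for _ in range(50):
            i = random.randrange(n)
            d = gain(i)
            if d > 0 or random.random() < math.exp(d / T):  # stochastic flip
                s[i] ^= 1
        T *= 0.9  # cooling schedule

    consensus = sum(w[i][j] * s[i] * s[j]
                    for i in range(n) for j in range(i + 1, n))
    print("final state:", s, "consensus:", round(consensus, 3))
    ```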

  13. Model-Agnostic Interpretability of Machine Learning

    OpenAIRE

    Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos

    2016-01-01

    Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, in feature engineering, in trusting and acting upon predictions, and in designing more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred f...

  14. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    Science.gov (United States)

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  15. Thermal models of pulse electrochemical machining

    International Nuclear Information System (INIS)

    Kozak, J.

    2004-01-01

    Pulse electrochemical machining (PECM) provides an economical and effective method for machining high-strength, heat-resistant materials into complex shapes such as turbine blades, dies, molds and micro-cavities. Pulse electrochemical machining involves the application of a voltage pulse at high current density in the anodic dissolution process. A small interelectrode gap, low electrolyte flow rate, and gap-state recovery during the pulse off-times lead to improved machining accuracy and surface finish compared with ECM using continuous current. This paper presents a mathematical model for PECM and employs this model in a computer simulation of the PECM process to determine the thermal limitation and energy consumption in PECM. The experimental results and a discussion of the characteristics of PECM are presented. (authors)

  16. Short-acting sulfonamides near term and neonatal jaundice

    DEFF Research Database (Denmark)

    Klarskov, Pia; Andersen, Jon Trærup; Jimenez-Solem, Espen

    2013-01-01

    To investigate the association between maternal use of sulfamethizole near term and the risk of neonatal jaundice.

  17. Prototype-based models in machine learning

    NARCIS (Netherlands)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets.

  18. Understanding and modelling Man-Machine Interaction

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1991-01-01

    This paper gives an overview of the current state of the art in man-machine systems interaction studies, focusing on the problems arising in highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of Man-Machine Interaction (MMI), which stem from the contribution of three areas of expertise in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running on appropriate computer architectures. (author)

  19. Understanding and modelling man-machine interaction

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1996-01-01

    This paper gives an overview of the current state of the art in man-machine system interaction studies, focusing on the problems arising in highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of man-machine interaction (MMI), which stem from the contribution of three areas of expertise in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running on appropriate computer architectures. (orig.)

  20. Electromechanical model of machine for vibroabrasive treatment of machine parts

    OpenAIRE

    Gorbatiyk, Ruslan; Palamarchuk, Igor; Chubyk, Roman

    2015-01-01

    Many trimming, cleaning and finishing operations, above all the removal of burrs and the rounding and processing of edges, were until recently carried out by hand; they are difficult to automate, and this has become a serious obstacle to further growth in labor productivity. Machines with a free kinematic connection between the tool and the treated parts provide contact with the entire surface of the machine parts, which allows us to effectively treat bo...

  1. VIRTUAL MODELING OF A NUMERICAL CONTROL MACHINE TOOL USED FOR COMPLEX MACHINING OPERATIONS

    Directory of Open Access Journals (Sweden)

    POPESCU Adrian

    2015-11-01

    This paper presents the 3D virtual model of the numerically controlled machine tool Modustar 100 at the level of machine elements. This is a CNC machine of modular construction, all of whose components allow assembly in various configurations. The paper focuses on the design, in CATIA v5, of the subassemblies specific to the numerically controlled axes, containing the drive kinematic chains of the translation modules that provide motion along the X, Y and Z axes. The development of machine tools for high-speed, high-precision cutting demands the use of advanced simulation techniques, which is reflected in the total development cost of the machine.

  2. Comparative study of Moore and Mealy machine models adaptation

    African Journals Online (AJOL)

    An automata model was developed for the ABS manufacturing process using Moore and Mealy finite state machines. Simulation ... The simulation results showed that the Mealy machine is faster than the Moore ... random numbers from MATLAB.
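
    For readers unfamiliar with the two formalisms, the sketch below contrasts them on a hypothetical two-state start/stop controller (all state and signal names invented): a Moore machine attaches outputs to states, while a Mealy machine attaches outputs to transitions, so it can react within the same step, consistent with the snippet's observation that the Mealy model responds faster.

    ```python
    # Moore machine: output is a function of the state alone.
    moore_out = {"idle": "LAMP_OFF", "run": "LAMP_ON"}
    moore_next = {("idle", "start"): "run", ("run", "stop"): "idle"}

    # Mealy machine: output is a function of state and input together.
    mealy = {("idle", "start"): ("run", "LAMP_ON"),
             ("run", "stop"): ("idle", "LAMP_OFF")}

    def run_moore(state, inputs):
        outs = [moore_out[state]]
        for x in inputs:
            state = moore_next[(state, x)]
            outs.append(moore_out[state])
        return outs

    def run_mealy(state, inputs):
        outs = []
        for x in inputs:
            state, o = mealy[(state, x)]
            outs.append(o)
        return outs

    print(run_moore("idle", ["start", "stop"]))  # output follows the state change
    print(run_mealy("idle", ["start", "stop"]))  # output emitted on the transition
    ```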

  3. Screening for Prediabetes Using Machine Learning Models

    Directory of Open Access Journals (Sweden)

    Soo Beom Choi

    2014-01-01

    The global prevalence of diabetes is rapidly increasing. Studies support the necessity of screening and interventions for prediabetes, which could result in serious complications and diabetes. This study aimed at developing an intelligence-based screening model for prediabetes. Data from the Korean National Health and Nutrition Examination Survey (KNHANES) were used, excluding subjects with diabetes. The KNHANES 2010 data (n=4685) were used for training and internal validation, while data from KNHANES 2011 (n=4566) were used for external validation. We developed two models to screen for prediabetes using an artificial neural network (ANN) and a support vector machine (SVM) and performed a systematic evaluation of the models using internal and external validation. We compared the performance of our models with that of a screening score model based on logistic regression analysis for prediabetes that had been developed previously. The SVM model showed an area under the curve of 0.731 on the external dataset, which is higher than those of the ANN model (0.729) and the screening score model (0.712), respectively. The prescreening methods developed in this study performed better than the previously developed screening score model and may be a more effective method for prediabetes screening.
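
    A minimal sketch of this comparison methodology, assuming scikit-learn and a synthetic dataset as a stand-in for the KNHANES survey variables: an SVM and a logistic-regression screening score are compared by the area under the ROC curve on held-out data.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for survey features (age, BMI, glucose, ...).
    X, y = make_classification(n_samples=4000, n_features=8, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {"SVM": SVC(probability=True, random_state=0),
              "logistic screening score": LogisticRegression(max_iter=1000)}
    for name, clf in models.items():
        clf.fit(Xtr, ytr)
        auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
    ```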

  4. Machine Directional Register System Modeling for Shaft-Less Drive Gravure Printing Machines

    Directory of Open Access Journals (Sweden)

    Shanhui Liu

    2013-01-01

    In the latest type of gravure printing machines, referred to as the shaft-less drive system, each gravure printing roller is driven by an individual servo motor, and all motors are electrically synchronized. The register error is regulated by a speed difference between the adjacent printing rollers. In order to improve the control accuracy of the register system, an accurate mathematical model of the register system should be investigated for these latest machines. Therefore, the mathematical model of the machine directional register (MDR) system is studied for multicolor gravure printing machines in this paper. According to the definition of the MDR error, the model is derived, and it is then validated by numerical simulation and by experiments carried out on the experimental setup of a four-color gravure printing machine. The results show that the established MDR system model is accurate and reliable.
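
    A heavily simplified sketch of the core idea, that the machine-directional register error integrates the web-speed difference between adjacent printing rollers; the parameter values are invented, and web transport delay, which a full MDR model must treat, is ignored here.

    ```python
    # Register error e integrates the speed difference of adjacent rollers:
    # de/dt ~ v_i - v_(i-1).  A proportional speed correction drives e to zero.
    dt = 0.001        # s, integration step (assumed)
    e = 0.002         # m, initial register error (assumed)
    kp = 0.8          # 1/s, proportional gain (assumed)

    for _ in range(5000):                # simulate 5 s
        dv = -kp * e                     # speed trim of the downstream roller
        e += dt * dv                     # error dynamics, delay neglected

    print(f"register error after 5 s: {e * 1e6:.1f} um")
    ```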

  5. Upgrades, Current Capabilities and Near-Term Plans of the NASA ARC Mars Climate

    Science.gov (United States)

    Hollingsworth, J. L.; Kahre, Melinda April; Haberle, Robert M.; Schaeffer, James R.

    2012-01-01

    We describe and review recent upgrades to the ARC Mars climate modeling framework, in particular with regard to physical parameterizations (i.e., testing, implementation, modularization and documentation); the current climate modeling capabilities; selected research topics regarding current/past climates; and our near-term plans for the NASA ARC Mars general circulation modeling (GCM) project.

  6. Prototype-based models in machine learning.

    Science.gov (United States)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. © 2016 Wiley Periodicals, Inc.
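
    A minimal sketch of the supervised, prototype-based scheme the abstract mentions, in the style of LVQ1: the prototype nearest to a training sample is attracted if its label matches and repelled otherwise. The data, initialization and learning rate are arbitrary; only numpy is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two Gaussian classes, one prototype per class.
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    protos = np.array([[0.5, 0.5], [2.5, 2.5]])  # initial prototype positions
    labels = np.array([0, 1])
    lr = 0.05

    for epoch in range(20):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winner
            sign = 1.0 if labels[j] == yi else -1.0             # attract/repel
            protos[j] += sign * lr * (xi - protos[j])           # LVQ1 update

    dists = np.linalg.norm(X[:, None, :] - protos, axis=2)
    pred = labels[np.argmin(dists, axis=1)]
    print("training accuracy:", (pred == y).mean())
    ```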

  7. AREVA HTR concept for near-term deployment

    Energy Technology Data Exchange (ETDEWEB)

    Lommers, L.J., E-mail: lewis.lommers@areva.com [AREVA Inc., 2101 Horn Rapids Road, Richland, WA 99354 (United States); Shahrokhi, F. [AREVA Inc., Lynchburg, VA (United States); Mayer, J.A. [AREVA Inc., Marlborough, MA (United States); Southworth, F.H. [AREVA Inc., Lynchburg, VA (United States)

    2012-10-15

    This paper introduces AREVA's High Temperature Reactor (HTR) steam cycle concept for near-term industrial deployment. Today, nuclear power primarily impacts only electricity generation. The process heat and transportation fuel sectors are completely dependent on fossil fuels. In order to impact this energy sector as rapidly as possible, AREVA has focused its HTR development effort on the steam cycle HTR concept. This reduces near-term development risk and minimizes the delay before a useful contribution to this sector of the energy economy can be realized. It also provides a stepping stone to longer term very high temperature concepts which might serve additional markets. A general description of the current AREVA steam cycle HTR concept is provided. This concept provides a flexible system capable of serving a variety of process heat and cogeneration markets in the near-term.

  8. AREVA HTR concept for near-term deployment

    International Nuclear Information System (INIS)

    Lommers, L.J.; Shahrokhi, F.; Mayer, J.A.; Southworth, F.H.

    2012-01-01

    This paper introduces AREVA's High Temperature Reactor (HTR) steam cycle concept for near-term industrial deployment. Today, nuclear power primarily impacts only electricity generation. The process heat and transportation fuel sectors are completely dependent on fossil fuels. In order to impact this energy sector as rapidly as possible, AREVA has focused its HTR development effort on the steam cycle HTR concept. This reduces near-term development risk and minimizes the delay before a useful contribution to this sector of the energy economy can be realized. It also provides a stepping stone to longer term very high temperature concepts which might serve additional markets. A general description of the current AREVA steam cycle HTR concept is provided. This concept provides a flexible system capable of serving a variety of process heat and cogeneration markets in the near-term.

  9. Impurity control in near-term tokamak reactors

    International Nuclear Information System (INIS)

    Stacey, W.M. Jr.; Smith, D.L.; Brooks, J.N.

    1976-10-01

    Several methods for reducing impurity contamination in near-term tokamak reactors by modifying the first-wall surface with a low-Z or low-sputter material are examined. A review of the sputtering data and an assessment of the technological feasibility of various wall modification schemes are presented. The power performance of a near-term tokamak reactor is simulated for various first-wall surface materials, with and without a divertor, in order to evaluate the likely effect of plasma contamination associated with these surface materials.

  10. Simulation Tools for Electrical Machines Modelling: Teaching and ...

    African Journals Online (AJOL)

    Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementation. This paper illustrates the way MATLAB is used to model non-linearities in a synchronous machine. The machine is modeled in the rotor reference frame with currents as state ...

  11. Investigation of approximate models of experimental temperature characteristics of machines

    Science.gov (United States)

    Parfenov, I. V.; Polyakov, A. N.

    2018-05-01

    This work is devoted to the investigation of various approaches to approximating experimental data and creating simulation mathematical models of thermal processes in machines, with the aim of finding ways to reduce the time required for field tests and to reduce the thermal error of machining. The main research methods used in this work are: full-scale thermal testing of machines; various approaches to approximating the experimental temperature characteristics of machine tools with polynomial models; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their time derivatives up to the third order. As a result of the performed research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
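
    A small sketch of the polynomial-approximation approach, assuming numpy and invented measurements: a low-order polynomial is fitted to a heating curve and differentiated, mirroring the analysis of the temperature characteristics and their time derivatives.

    ```python
    import numpy as np

    # Hypothetical measured spindle temperature rise during a heating test.
    t = np.array([0.0, 10, 20, 40, 60, 90, 120, 180])                  # min
    temp = np.array([20.0, 24.1, 27.3, 31.8, 34.6, 37.2, 38.7, 39.9])  # degC

    model = np.poly1d(np.polyfit(t, temp, 3))  # cubic approximation
    rate = model.deriv(1)                      # heating rate, degC/min

    rms = np.sqrt(np.mean((model(t) - temp) ** 2))
    print(f"fit residual RMS: {rms:.3f} degC")
    print(f"heating rate at t=30 min: {rate(30.0):.3f} degC/min")
    ```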

  12. Virtual NC machine model with integrated knowledge data

    International Nuclear Information System (INIS)

    Sidorenko, Sofija; Dukovski, Vladimir

    2002-01-01

    The concept of virtual NC machining was established to provide a virtual product that can be compared with the corresponding designed product, in order to evaluate NC program correctness without real experiments. This concept is applied in the intelligent CAD/CAM system named VIRTUAL MANUFACTURE. This paper presents the first intelligent module, which enables creation of virtual models of existing NC machines and virtual creation of new ones by applying modular composition. Creation of a virtual NC machine is carried out via automatic saving of knowledge data (features of the created NC machine). (Author)

  13. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze... as real projection welding tests, is easy to realize in industry, since tests may be performed in situ. In part II, an approach to characterizing the electrical properties of AC resistance welding machines is presented, involving testing and mathematical modelling of the weld current, the firing angle...

  14. Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    The dynamic mechanical response of a resistance welding machine is very important to weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of the machine and a special test set-up, called a breaking test set-up, are developed. Based on the model and the test results, the mechanical parameters of the machine are determined, including the equivalent mass, damping coefficient, and stiffness... for both upper and lower electrode systems. This has laid a foundation for modeling the welding process and selecting the welding parameters with the machine factors taken into account. The method is straightforward and easy to apply in industry, since the whole procedure is based on tests with no requirements...
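
    The parameters named above suggest a lumped second-order model of each electrode system. The sketch below integrates m*x'' + c*x' + k*x = F(t) for a step in electrode force, assuming scipy; the parameter values are placeholders, not measured machine data.

    ```python
    from scipy.integrate import solve_ivp

    m, c, k = 5.0, 400.0, 2.0e5   # kg, N*s/m, N/m (placeholder values)

    def force(t):
        return 2000.0 if t >= 0.01 else 0.0   # step in electrode force (N)

    def rhs(t, z):
        x, v = z                               # displacement and velocity
        return [v, (force(t) - c * v - k * x) / m]

    sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
    print("steady displacement ~", sol.y[0, -1], "m; expected F/k =", 2000.0 / k)
    ```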

  15. MODELING AND INVESTIGATION OF ASYNCHRONOUS TWO-MACHINE SYSTEM MODES

    Directory of Open Access Journals (Sweden)

    V. S. Safaryan

    2014-01-01

    The paper considers stationary and transient processes of an asynchronous two-machine system. A mathematical model for the investigation of stationary and transient modes is given, together with static characteristics and research results on the dynamic process of starting up the asynchronous two-machine system.

  16. Boltzmann machines as a model for parallel annealing

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.

    1991-01-01

    The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given, based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial optimization problems.

  17. Near-term hybrid vehicle program, phase 1

    Science.gov (United States)

    1979-01-01

    The preliminary design of a hybrid vehicle which fully meets or exceeds the requirements set forth in the Near Term Hybrid Vehicle Program is documented. Topics addressed include the general layout and styling, the power train specifications with discussion of each major component, vehicle weight and weight breakdown, vehicle performance, measures of energy consumption, and initial cost and ownership cost. Alternative design options considered and their relationship to the design adopted, computer simulation used, and maintenance and reliability considerations are also discussed.

  18. Advanced wind turbine near-term product development. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-01-01

    In 1990 the US Department of Energy initiated the Advanced Wind Turbine (AWT) Program to assist the growth of a viable wind energy industry in the US. This program, which has been managed through the National Renewable Energy Laboratory (NREL) in Golden, Colorado, has been divided into three phases: (1) conceptual design studies, (2) near-term product development, and (3) next-generation product development. The goals of the second phase were to bring into production wind turbines which would meet the cost goal of $0.05/kWh at a site with a mean (Rayleigh) windspeed of 5.8 m/s (13 mph) and a vertical wind shear exponent of 0.14. These machines were to allow a US-based industry to compete domestically with other sources of energy and to provide internationally competitive products. Information is given in the report on design values of peak loads and of fatigue spectra, and the results of the design process are summarized in a table. Measured response is compared with the results from mathematical modeling using the ADAMS code and is discussed. Detailed information is presented on the estimated costs of maintenance and on spare parts requirements. A failure modes and effects analysis was carried out and resulted in approximately 50 design changes, including the identification of ten previously unidentified failure modes. The performance results of both prototypes are examined and adjusted for air density and for correlation between the anemometer site and the turbine location. The anticipated energy production at the reference site specified by NREL is used to calculate the final cost of energy using the formulas indicated in the Statement of Work. The value obtained is $0.0514/kWh in January 1994 dollars. 71 figs., 30 tabs.

  19. Discrete Model Reference Adaptive Control System for Automatic Profiling Machine

    Directory of Open Access Journals (Sweden)

    Peng Song

    2012-01-01

    The automatic profiling machine is a motion system with a high degree of parameter variation and a high frequency of transient processes, and it requires accurate and timely control. In this paper, the discrete model reference adaptive control system of an automatic profiling machine is discussed. Firstly, the model of the automatic profiling machine is presented according to the parameters of the DC motor. Then the design of the discrete model reference adaptive controller is proposed, and the control rules are proven. The simulation results show that the adaptive control system has favorable dynamic performance.
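
    A compact sketch of discrete model reference adaptive control for a first-order plant, using a simplified MIT-rule-style gradient adaptation rather than the paper's specific design; the plant, reference model, gains and adaptation rate are all illustrative.

    ```python
    # Plant (gains assumed unknown to the controller): y(k+1) = a*y(k) + b*u(k)
    a, b = 0.9, 0.5
    # Reference model the closed loop should imitate:
    am, bm = 0.8, 0.2
    t1, t2, gamma = 0.0, 0.0, 0.02   # adjustable gains, adaptation rate

    y = ym = 0.0
    for k in range(3000):
        r = 1.0 if (k // 500) % 2 == 0 else -1.0   # square-wave reference
        u = t1 * r - t2 * y                        # adjustable control law
        y = a * y + b * u                          # plant update
        ym = am * ym + bm * r                      # reference model update
        e = y - ym                                 # model-following error
        t1 -= gamma * e * r                        # gradient-style adaptation
        t2 += gamma * e * y

    print(f"adapted gains: t1={t1:.3f} (ideal {bm / b:.3f}), "
          f"t2={t2:.3f} (ideal {(a - am) / b:.3f})")
    ```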

  20. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  1. Experimental force modeling for deformation machining stretching ...

    Indian Academy of Sciences (India)

    ARSHPREET SINGH


  2. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    Science.gov (United States)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters are changed. The method can also reduce the time needed for design and engineering preproduction, in particular for the development of control programs for CNC equipment and coordinate measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
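
    A toy illustration of variance (dimensional) parameterization, assuming sympy: sketch dimensions are linked by an equation system, and changing one driving parameter regenerates the dependent ones. The fixture-plate dimensions and constraints are hypothetical.

    ```python
    import sympy as sp

    # Parameters of a hypothetical fixture plate: length L, hole pitch p,
    # edge margin e, hole count n, linked by user-imposed constraints.
    L, p, e, n = sp.symbols("L p e n", positive=True)

    constraints = [
        sp.Eq(L, 2 * e + (n - 1) * p),   # holes spread across the plate
        sp.Eq(e, p / 2),                 # margin tied to the pitch
    ]

    # Driving dimensions L=240, n=5 regenerate the dependent ones.
    sol = sp.solve([c.subs({L: 240, n: 5}) for c in constraints], [p, e],
                   dict=True)
    print(sol)   # [{p: 48, e: 24}]
    ```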

  3. On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    Science.gov (United States)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework for modeling Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By showing that it can improve the prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive turbulence models in real applications.
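
    A schematic sketch of the training step in such approaches, assuming scikit-learn and synthetic stand-ins for the mean-flow features and the Reynolds-stress discrepancy; in the real workflow both come from paired RANS and high-fidelity simulations, and the corrected stress is then propagated through the RANS solver to obtain the mean velocity field.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic mean-flow features (e.g., strain rate, wall distance, ...).
    X = rng.normal(size=(2000, 4))
    # Synthetic "high-fidelity minus RANS" stress discrepancy to learn.
    y = np.tanh(X[:, 0]) * X[:, 1] + 0.1 * X[:, 2]
    y += rng.normal(scale=0.05, size=2000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:1500], y[:1500])
    print("held-out R^2:", model.score(X[1500:], y[1500:]))
    ```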

  4. Trustless Machine Learning Contracts; Evaluating and Exchanging Machine Learning Models on the Ethereum Blockchain

    OpenAIRE

    Kurtulmus, A. Besir; Daniel, Kenny

    2018-01-01

    Using blockchain technology, it is possible to create contracts that offer a reward in exchange for a trained machine learning model for a particular data set. This would allow users to train machine learning models for a reward in a trustless manner. The smart contract will use the blockchain to automatically validate the solution, so there would be no debate about whether the solution was correct or not. Users who submit the solutions won't have counterparty risk that they won't get paid fo...

  5. Modeling demagnetization effects in permanent magnet synchronous machines

    NARCIS (Netherlands)

    Kral, C.; Sprangers, R.L.J.; Waarma, J.; Haumer, A.; Winter, O.; Lomonova, E.

    2010-01-01

    This paper presents a permanent magnet model which takes temperature dependencies and demagnetization effects into account. The proposed model is integrated into a magnetic fundamental wave machine model using the modeling language Modelica. For different rotor types permanent magnet models are

  6. Probabilistic models and machine learning in structural bioinformatics

    DEFF Research Database (Denmark)

    Hamelryck, Thomas

    2009-01-01

    ... Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis...

  7. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish, is one of the most critical quality measures in mechanical products. ... various cutting speeds have been developed using regression analysis software.

  8. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus that the calculus of dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
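
    A minimal dual-number class showing the algebra the paper builds on: with eps**2 = 0, the eps-component of a product carries exact first-order terms, which is what makes matrices over dual numbers natural carriers of small error displacements. The class and the example function are illustrative only.

    ```python
    class Dual:
        """Dual number a + b*eps with eps**2 == 0."""
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a + o.a, self.b + o.b)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
        __rmul__ = __mul__

    # f(x) = 3x**2 + 2x at x = 4: the eps part is the derivative 6x + 2.
    x = Dual(4.0, 1.0)
    y = 3 * x * x + 2 * x
    print(y.a, y.b)   # 56.0 26.0
    ```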

  9. Developing hydrogen infrastructure through near-term intermediate technology

    International Nuclear Information System (INIS)

    Arthur, D.M.; Checkel, M.D.; Koch, C.R.

    2003-01-01

    The development of a vehicular hydrogen fuelling infrastructure is a necessary first step towards the widespread use of hydrogen-powered vehicles. This paper proposes the case for using a near-term, intermediate technology to stimulate and support the development of that infrastructure. 'Dynamic Hydrogen Multifuel' (DHM) is an engine control and fuel system technology that uses flexible blending of hydrogen and another fuel to optimize emissions and overall fuel economy in a spark-ignition engine. DHM vehicles can enhance emissions and fuel economy using techniques such as cold-starting or idling on pure hydrogen. Blending hydrogen can extend lean-operation and exhaust gas recirculation limits, while normal engine power and vehicle range can be maintained by the conventional fuel. Essentially, DHM vehicles are a near-term intermediate technology which provides significant emissions benefits in a vehicle that is sufficiently economical, practical and familiar to achieve significant production numbers and significant fuel station load. The factors leading to the successful implementation of current hydrogen filling stations must also be understood if the infrastructure is to be developed further. The paper discusses important lessons on the development of alternative fuel infrastructure that have been learned from natural gas: why were natural gas vehicle conversions largely successful in Argentina while failing in Canada and New Zealand? What ideas can be distilled from the previous successes and failures of the attempted introduction of a new vehicle fuel? It is proposed that hydrogen infrastructure can be developed by introducing a catalytic, near-term technology to provide fuel station demand and operating experience. However, it is imperative to understand the lessons of historic failures and present successes. (author)

  10. Electric power from near-term fusion reactors

    International Nuclear Information System (INIS)

    Longhurst, G.R.; Deis, G.A.; Miller, L.G.

    1981-01-01

    This paper examines the requirements and possibilities of electric power production on near-term fusion reactors using low-temperature-cycle technology similar to that used in some geothermal power systems. Requirements include the need for a working fluid with suitable thermodynamic properties which is free of oxygen and hydrogen, to facilitate tritium management. Thermal storage will also be required due to the short system thermal time constants of near-term reactors. It is possible to use the FED shield in a binary power cycle, and results of thermodynamic analyses of this system are presented.

  11. Practical methods for near-term piloted Mars missions

    Science.gov (United States)

    Zubrin, Robert M.; Weaver, David B.

    1993-01-01

    An evaluation is made of ways of using near-term technologies for direct and semidirect manned Mars missions. A notable feature of the present schemes is the in situ propellant production of CH4/O2 and H2O on the Martian surface in order to reduce surface consumable and return propellant requirements. Medium-energy conjunction class trajectories are shown to be optimal for such missions. Attention is given to the backup plans and abort philosophy of these missions. Either the Russian Energia B or U.S. Saturn VII launch vehicles may be used.

  12. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, a Bayesian neural network, a Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
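
    A condensed sketch of such a benchmarking set-up, assuming scikit-learn and synthetic stand-ins for the transaction variables: nonparametric regressors (SVR, Gaussian process) are compared against a simple parametric benchmark by held-out error. The data shape and the cost-generating formula are invented.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Synthetic normalized trade size, volatility, turnover.
    X = rng.uniform(0.1, 1.0, size=(600, 3))
    cost = 0.3 * X[:, 0] ** 0.6 * X[:, 1] + rng.normal(scale=0.02, size=600)

    Xtr, Xte, ytr, yte = X[:450], X[450:], cost[:450], cost[450:]
    models = {"linear (parametric benchmark)": LinearRegression(),
              "SVR": SVR(C=10.0),
              "Gaussian process": GaussianProcessRegressor(alpha=1e-3)}
    for name, m in models.items():
        m.fit(Xtr, ytr)
        mae = mean_absolute_error(yte, m.predict(Xte))
        print(f"{name}: MAE = {mae:.4f}")
    ```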

  13. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market via the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  14. Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive way of validating finite element (FE) simulations and gaining a better understanding of HTS machines. 3D FE simulations of the machine are compared to measured current vs. voltage (IV) curves for the tape on its own. It is validated that this method can be used to predict the critical current of the HTS tape installed in the machine. The measured torque as a function of rotor position is also reproduced by the 3D FE model.

  15. World oil market fundamentals - Part One: The near term outlook

    International Nuclear Information System (INIS)

    Dwarkin, J.; Morton, K.; Datta, R.

    1998-03-01

    Potential implications of a number of uncertainties currently affecting the world oil market are assessed. The influence of the interplay of geopolitical events on demand and supply, inventories, prices and price trends is reviewed. Reference prices which industry and governments can use for investment and policy evaluations are provided. In this volume, the emphasis is on near term developments, with a review of the uncertainties surrounding these projections. Three different scenarios are postulated for the near term, each one taking into account different levels of Iraqi exports during the period, which would affect available inventories and hence price. Depending on which of the three scenarios actually comes to pass, unless refiners are prepared to build up inventories well beyond seasonal norms, or producers shut in production, the prevailing view is that oil prices will be under severe pressure during most of 1998 and 1999. Over the longer term, however, the analysis suggests that an average real value of US$18.00 - $18.50 per barrel remains a reasonable expectation as a sustainable price. 34 refs., tabs., figs

  16. An abstract machine model of dynamic module replacement

    OpenAIRE

    Walton, Chris; Kırlı, Dilsun; Gilmore, Stephen

    2000-01-01

    In this paper we define an abstract machine model for the mλ typed intermediate language. This abstract machine is used to give a formal description of the operation of run-time module replacement for the programming language Dynamic ML. The essential technical device which we employ for module replacement is a modification of two-space copying garbage collection. We show how the operation of module replacement could be applied to other garbage-collected languages such as Java.

  17. Modelling machine ensembles with discrete event dynamical system theory

    Science.gov (United States)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or by implementing a feedback DEDS controller (closed-loop control).
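    As a rough illustration of the local-model formalism described above (a state set, an event alphabet, a partial transition function, an initial state, and per-event timing), the following Python sketch encodes a single hypothetical submachine; all state and event names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LocalModel:
    """A DEDS local model: states, an event alphabet, a partial
    transition function, an initial state, and per-event durations."""
    states: set
    events: set            # the event alphabet
    delta: dict            # (state, event) -> next state (partial function)
    initial: str
    duration: dict         # event -> time required for the event to occur
    state: str = field(init=False)

    def __post_init__(self):
        self.state = self.initial

    def fire(self, event):
        """Advance the submachine on `event`; return the elapsed time."""
        key = (self.state, event)
        if key not in self.delta:
            raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
        self.state = self.delta[key]
        return self.duration[event]

# Illustrative submachine: a robotic arm that picks and places.
arm = LocalModel(
    states={"idle", "holding"},
    events={"pick", "place"},
    delta={("idle", "pick"): "holding", ("holding", "place"): "idle"},
    initial="idle",
    duration={"pick": 2.0, "place": 1.5},
)
print(arm.fire("pick"), arm.state)   # 2.0 holding
```

    A global model would hold several such local models and restrict which events may fire in parallel according to the interaction constraints.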

  18. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic components' models. System parameters are identified using the finite element technique, which has then been used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance

  19. Learning About Climate and Atmospheric Models Through Machine Learning

    Science.gov (United States)

    Lucas, D. D.

    2017-12-01

    From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
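    The general procedure described above (sample model configurations and input parameters with an ensemble, then learn the input-to-output mapping with supervised learning) can be sketched as follows; the expensive climate model is replaced by a cheap stand-in function, and all parameter choices are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for an expensive climate model: maps input parameters
# to a scalar output metric (e.g., a precipitation diagnostic).
def surrogate_target(params):
    return params[:, 0] * np.sin(params[:, 1]) + 0.1 * rng.normal(size=len(params))

# Ensemble design: sample values of the model input parameters.
X = rng.uniform(0, 1, size=(500, 2))
y = surrogate_target(X)

# Supervised learning of the input-output relationship.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
print("input importances:", model.feature_importances_)
```

    The feature importances then indicate which input parameters the simulated output is most sensitive to.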

  20. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  1. Near term hybrid passenger vehicle development program, phase 1

    Science.gov (United States)

    1980-01-01

    Missions for hybrid vehicles that promise to yield high petroleum impact were identified, and a preliminary design was developed that satisfies the mission requirements and performance specifications. Technologies that are critical to successful vehicle design, development and fabrication were determined. Trade-off studies to maximize fuel savings were used to develop the initial design specifications of the near term hybrid vehicle. Various designs were "driven" through detailed computer simulations which calculate the petroleum consumption in standard driving cycles, the petroleum and electricity consumption over the specified missions, and the vehicle's life cycle costs over a 10 year vehicle lifetime. Particular attention was given to the selection of the electric motor, heat engine, drivetrain, battery pack and control system. The preliminary design reflects a modified current compact car powered by a currently available turbocharged diesel engine and a 24 kW (peak) compound dc electric motor.

  2. Near-term electric vehicle program: Phase I, final report

    Energy Technology Data Exchange (ETDEWEB)

    Rowlett, B. H.; Murry, R.

    1977-08-01

    A final report is given for an Energy Research and Development Administration effort aimed at a preliminary design of an energy-efficient electric commuter car. An electric-powered passenger vehicle using a regenerative power system was designed to meet the near-term ERDA electric automobile goals. The program objectives were to (1) study the parameters that affect vehicle performance, range, and cost; (2) design an entirely new electric vehicle that meets performance and economic requirements; and (3) define a program to develop this vehicle design for production in the early 1980's. The design and performance features of the preliminary (baseline) electric-powered passenger vehicle design are described, including the baseline power system, system performance, economic analysis, reliability and safety, alternate designs and options, development plan, and conclusions and recommendations. All aspects of the baseline design were defined in sufficient detail to verify performance expectations and system feasibility.

  3. Near-term benefits of the plant life extension program

    International Nuclear Information System (INIS)

    Kaushansky, M.M.

    1987-01-01

    The aging process can be expected to reduce the availability and increase the production costs of nuclear power plants over time. To mitigate this process and recover or enhance plant availability, capacity, thermal efficiency, and maintenance expenditures, the utility must dedicate increased attention and commitment to a comprehensive plant life extension (PLEX) program. Improvements must be justified by balancing the cost of the recommended modifications with the economic value of benefits obtained from its implementation. It is often extremely difficult for utility management to make an optimal selection from among hundreds of proposed projects, most of which are cost-effective. A properly structured PLEX program with an emphasis on near-term benefits should provide the utility with a means of evaluating proposed projects, thus determining the optimum combination for authorization and implementation

  4. Runtime Optimizations for Tree-Based Machine Learning Models

    NARCIS (Netherlands)

    N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

    2014-01-01

    Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression trees.
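    The abstract does not disclose the specific optimizations used, but one standard runtime technique for tree-based models is to flatten a fitted tree into contiguous arrays so that prediction becomes a tight loop rather than a recursive object traversal; a minimal sketch using scikit-learn's internal tree arrays:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Flatten the fitted tree into parallel arrays.
t = tree.tree_
left, right = t.children_left, t.children_right
feat, thresh = t.feature, t.threshold
value = t.value[:, 0, 0]

def predict_flat(x):
    node = 0
    while left[node] != -1:                 # -1 marks a leaf node
        node = left[node] if x[feat[node]] <= thresh[node] else right[node]
    return value[node]

assert np.allclose(predict_flat(X[0]), tree.predict(X[:1])[0])
```

    For a boosted ensemble the same idea applies per tree, with the per-tree outputs summed.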

  5. Comparative study of Moore and Mealy machine models adaptation ...

    African Journals Online (AJOL)

    Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures, and automata models are among the computational models used in the design and construction of industrial processes. The production process of the popular African Black Soap ...
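    For readers unfamiliar with the two automata compared above, a minimal sketch of the distinction: a Moore machine's output depends on the current state alone, while a Mealy machine's output depends on the state-input pair. The mixing-process states and events below are invented for illustration:

```python
# Moore machine: output attached to states.
moore = {
    "delta": {("idle", "start"): "mixing", ("mixing", "stop"): "idle"},
    "output": {"idle": "off", "mixing": "on"},
}
# Mealy machine: output attached to (state, input) transitions.
mealy = {
    "delta": {("idle", "start"): "mixing", ("mixing", "stop"): "idle"},
    "output": {("idle", "start"): "motor_on", ("mixing", "stop"): "motor_off"},
}

def run_moore(m, state, inputs):
    outs = [m["output"][state]]          # output emitted on entering each state
    for i in inputs:
        state = m["delta"][(state, i)]
        outs.append(m["output"][state])
    return outs

def run_mealy(m, state, inputs):
    outs = []                            # output emitted on each transition
    for i in inputs:
        outs.append(m["output"][(state, i)])
        state = m["delta"][(state, i)]
    return outs

print(run_moore(moore, "idle", ["start", "stop"]))   # ['off', 'on', 'off']
print(run_mealy(mealy, "idle", ["start", "stop"]))   # ['motor_on', 'motor_off']
```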

  6. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  7. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.
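    The abstract does not specify the fusion rule used; one common choice for combining several independent estimates is inverse-variance weighting, sketched here with hypothetical wear-depth numbers:

```python
import numpy as np

# Hypothetical wear-depth estimates (mm) from three independent models,
# each with its own estimated error variance.
estimates = np.array([0.112, 0.105, 0.120])
variances = np.array([0.0004, 0.0009, 0.0016])

# Inverse-variance weighting: more certain models contribute more.
weights = (1.0 / variances) / np.sum(1.0 / variances)
fused = np.dot(weights, estimates)
print(f"fused wear estimate: {fused:.4f} mm, weights: {np.round(weights, 3)}")
```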

  8. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    In order to study further the promising free piston Stirling engine architecture, there is a need for an analytical thermodynamic model which could be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities of the engine. An analytical model has been developed which encompasses the critical flaws of the regenerator and, furthermore, the heat exchanger effectivenesses. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  9. Neural Machine Translation with Recurrent Attention Modeling

    OpenAIRE

    Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex

    2016-01-01

    Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...

  10. Status and near-term plans for DIII-D

    International Nuclear Information System (INIS)

    Davis, L.G.; Callis, R.W.; Luxon, J.L.; Stambaugh, R.D.

    1987-10-01

    The DIII-D tokamak at GA Technologies began plasma operation in February of 1986 and is dedicated to the study of highly non-circular plasmas. High beta operation with enhanced energy confinement is paramount among the goals of the DIII-D research program. Commissioning of the device and facility has verified the design capability, including coil and vessel loading, volt-second consumption, bakeout temperature, vessel armor, and neutral beamline thermal integrity and control systems performance. Initial experimental results demonstrate that DIII-D is capable of attaining high confinement (H-mode) discharges in a divertor configuration using modest neutral beam heating or ECH. Record values of I_p/aB_T have been achieved with ohmic heating as a first step toward operation at high values of toroidal beta, and record values of beta have been achieved using neutral beam heating. This paper summarizes results to date and gives the near term plans for the facility. 13 refs., 6 figs., 1 tab

  11. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm present low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate the detection time by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
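    A minimal, self-contained sketch of a SOM used for anomaly detection, with a per-feature weighted Euclidean distance in the spirit of the WED idea; the heuristic initialization and neighborhood-based search of IISOM are omitted, and all hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(data, grid=(8, 8), epochs=10, w=None, lr0=0.5, sigma0=3.0):
    """Minimal SOM; `w` holds per-feature weights for the weighted
    Euclidean distance (all ones recovers the standard distance)."""
    n_feat = data.shape[1]
    w = np.ones(n_feat) if w is None else w
    nodes = rng.random((grid[0] * grid[1], n_feat))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in data:
            lr = lr0 * (1 - t / t_max)                 # decaying learning rate
            sigma = sigma0 * (1 - t / t_max) + 1e-3    # shrinking neighborhood
            d = np.sqrt(((x - nodes) ** 2 * w).sum(axis=1))   # weighted distance
            bmu = d.argmin()                           # best matching unit
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
            nodes += lr * h[:, None] * (x - nodes)
            t += 1
    return nodes, w

def anomaly_score(x, nodes, w):
    """Quantization error under the weighted distance; large = anomalous."""
    return np.sqrt(((x - nodes) ** 2 * w).sum(axis=1)).min()

normal = rng.normal(0, 1, (500, 4))                  # "normal" behavior samples
nodes, w = train_som(normal)
print(anomaly_score(rng.normal(0, 1, 4), nodes, w))  # small score
print(anomaly_score(np.array([8., 8., 8., 8.]), nodes, w))  # large score
```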

  12. Innovative model of business process reengineering at machine building enterprises

    Science.gov (United States)

    Nekrasov, R. Yu; Tempel, Yu A.; Tempel, O. A.

    2017-10-01

    The paper considers business process reengineering viewed as a managerial innovation accepted by present-day machine building enterprises, as well as ways to improve its procedure. A developed innovative model of reengineering measures is described; it is based on the process approach and other principles of company management.

  13. An incremental anomaly detection model for virtual machines.

    Directory of Open Access Journals (Sweden)

    Hancui Zhang

    Full Text Available The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which makes the algorithm present low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate the detection time by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.

  14. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  15. Assessing Implicit Knowledge in BIM Models with Machine Learning

    DEFF Research Database (Denmark)

    Krijnen, Thomas; Tamke, Martin

    2015-01-01

    architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models.

  16. Cutting force model for high speed machining process

    International Nuclear Information System (INIS)

    Haber, R. E.; Jimenez, J. E.; Jimenez, A.; Lopez-Coronado, J.

    2004-01-01

    This paper presents cutting force-based models able to describe a high speed machining process. The models consider the cutting force as the output variable, essential for the physical processes that are taking place in high speed machining. Moreover, this paper shows the mathematical development to derive the integral-differential equations, and the algorithms implemented in MATLAB to predict the cutting force in real time. MATLAB is a software tool for doing numerical computations with matrices and vectors. It can also display information graphically and includes many toolboxes for several research and application areas. Two end mill shapes are considered (i.e., cylindrical and ball-end mill) for real-time implementation of the developed algorithms. The developed models are validated in slot milling operations. The results corroborate the importance of the cutting force variable for predicting tool wear in high speed machining operations. The developed models are the starting point for future work related to vibration analysis, process stability and dimensional surface finish in high speed machining processes. (Author) 19 refs.

  17. Modeling RHIC using the standard machine formal accelerator description

    International Nuclear Information System (INIS)

    Pilat, F.; Trahern, C.G.; Wei, J.

    1997-01-01

    The Standard Machine Format (SMF) is a structured description of accelerator lattices which supports both the hierarchy of beam lines and generic lattice objects, as well as those deviations (field errors, alignment errors, etc.) associated with each component of the as-installed machine. In this paper we discuss the use of SMF to describe the Relativistic Heavy Ion Collider (RHIC) as well as the ancillary data structures (such as field quality measurements) that are necessarily incorporated into the RHIC SMF model. Future applications of SMF are outlined, including its use in the RHIC operational environment

  18. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
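    The reachability test at the heart of such control synthesis procedures can be sketched as a breadth-first search over (superstate, substate) configurations; the two-level machine below is a hypothetical example, not the authors' construction:

```python
from collections import deque

# Hypothetical hierarchical machine: each superstate owns a sub-machine;
# transitions are either local (within a sub-machine) or global (between
# superstates, entering the target superstate's initial substate).
local = {
    "A": {("a0", "go"): "a1"},
    "B": {("b0", "go"): "b1"},
}
initial_sub = {"A": "a0", "B": "b0"}
global_t = {("A", "a1", "switch"): "B"}   # (super, sub, event) -> new super

def reachable(start=("A", "a0")):
    seen, queue = {start}, deque([start])
    while queue:
        sup, sub = queue.popleft()
        succs = []
        for (s, e), nxt in local[sup].items():      # local transitions
            if s == sub:
                succs.append((sup, nxt))
        for (gs, gsub, e), nsup in global_t.items():  # global transitions
            if (gs, gsub) == (sup, sub):
                succs.append((nsup, initial_sub[nsup]))
        for c in succs:
            if c not in seen:
                seen.add(c)
                queue.append(c)
    return seen

print(reachable())  # all four configurations are reachable here
```

    Exploiting the hierarchy, as the paper does, avoids enumerating the full flat product of all substates when superstates are independent.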

  19. Machine learning models in breast cancer survival prediction.

    Science.gov (United States)

    Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin

    2016-01-01

    Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis, breast cancer survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We used a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with the 10-fold cross-validation technique were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 patients were alive and 97 dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91% and area under ROC curve 78%). This study demonstrates that the Trees Random Forest model (TRF), which is a rule-based classification model, was the best model with the highest level of
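    A sketch of the overall evaluation protocol (several classifiers compared under 10-fold cross-validation) using scikit-learn; the public Wisconsin dataset stands in for the paper's 900-patient registry, so results will not match the reported figures:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Public stand-in dataset; the paper's registry is not available.
X, y = load_breast_cancer(return_X_y=True)

models = {
    "NB": GaussianNB(),
    "TRF": RandomForestClassifier(random_state=0),
    "1NN": KNeighborsClassifier(n_neighbors=1),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```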

  20. Latent domain models for statistical machine translation

    NARCIS (Netherlands)

    Hoàng, C.

    2017-01-01

    A data-driven approach to model translation suffers from the data mismatch problem and demands domain adaptation techniques. Given parallel training data originating from a specific domain, training an MT system on the data would result in a rather suboptimal translation for other domains. But does

  1. Global ocean modeling on the Connection Machine

    International Nuclear Information System (INIS)

    Smith, R.D.; Dukowicz, J.K.; Malone, R.C.

    1993-01-01

    The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and the mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow

  2. A comparative study of machine learning models for ethnicity classification

    Science.gov (United States)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem, with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.

  3. Modeling Geomagnetic Variations using a Machine Learning Framework

    Science.gov (United States)

    Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.

    2017-12-01

    We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used includes solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.

  4. Machine learning modelling for predicting soil liquefaction susceptibility

    Directory of Open Access Journals (Sweden)

    P. Samui

    2011-01-01

    Full Text Available This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first machine learning technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second machine learning technique uses the Support Vector Machine (SVM), a classification technique firmly based on statistical learning theory. ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters [(N1)60 and peak ground acceleration (amax/g)], for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.

  5. Support vector machine based battery model for electric vehicles

    International Nuclear Information System (INIS)

    Wang Junping; Chen Quanshi; Cao Binggang

    2006-01-01

    The support vector machine (SVM) is a novel type of learning machine based on statistical learning theory that can map a nonlinear function successfully. As a battery is a nonlinear system, it is difficult to establish the relationship between the load voltage and the current under different temperatures and states of charge (SOC). The SVM is used to model the battery's nonlinear dynamics in this paper. Tests are performed on an 80 Ah Ni/MH battery pack with the Federal Urban Driving Schedule (FUDS) cycle to set up the SVM model. Compared with the combined Nernst and Shepherd model, the SVM model can simulate the battery dynamics better with small amounts of experimental data. The maximum relative error is 3.61%
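    A minimal sketch of fitting an SVM regressor to battery data, with synthetic samples standing in for the FUDS drive-cycle measurements; the voltage function and all constants are invented for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in for drive-cycle data: load voltage as a nonlinear
# function of current, state of charge, and temperature.
n = 400
current = rng.uniform(-50, 50, n)          # A (negative = charge)
soc = rng.uniform(0.1, 1.0, n)             # fraction
temp = rng.uniform(0, 40, n)               # deg C
voltage = (96 + 8 * soc - 0.04 * current
           - 0.5 * np.exp(-5 * soc) + 0.01 * temp
           + rng.normal(0, 0.05, n))       # V, invented pack model

X = np.column_stack([current, soc, temp])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(X, voltage)

x_new = np.array([[20.0, 0.6, 25.0]])      # 20 A discharge, 60% SOC, 25 C
print("predicted load voltage:", model.predict(x_new)[0])
```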

  6. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method in which both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
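    A deliberately simplified scalar illustration of the iterate-between-two-steps idea (state estimation under the current parameter, then a parameter update from the estimated states); a full implementation would use an extended Kalman smoother and the complete-data likelihood rather than filtered means, and the toy dynamics below are not the machine model of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian stand-in for the dynamics:
#   x[k+1] = a * x[k] + w,   y[k] = x[k] + v
a_true, q, r = 0.9, 0.05, 0.2
x, ys = 1.0, []
for _ in range(500):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))
ys = np.array(ys)

a = 0.5                                   # poor initial parameter guess
for _ in range(20):
    # "E-step" (simplified): Kalman filter state estimates under current a.
    m, P, ms = 0.0, 1.0, []
    for yk in ys:
        m, P = a * m, a * a * P + q       # predict
        K = P / (P + r)                   # gain
        m, P = m + K * (yk - m), (1 - K) * P
        ms.append(m)
    ms = np.array(ms)
    # "M-step" (simplified): regress x[k] on x[k-1] using filtered means.
    a = np.dot(ms[1:], ms[:-1]) / np.dot(ms[:-1], ms[:-1])

print("estimated a:", round(a, 3), "true a:", a_true)
```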

  7. Antimatter Production for Near-Term Propulsion Applications

    Science.gov (United States)

    Gerrish, Harold P.; Schmidt, George R.

    1999-01-01

    This presentation discusses the use and potential of power generated from proton-antiproton annihilation. The problem is that there is not enough production of antiprotons, and the production methods are inefficient. The cost for 1 gram of antiprotons is estimated at 62.5 trillion dollars. Applications which require large quantities (i.e., about 1 kg) will require dramatic improvements in the efficiency of antiproton production. However, applications which involve small quantities (i.e., 1 to 10 micrograms) may be practical with a relative expansion of capacities. There are four "conventional" antimatter propulsion concepts: (1) the solid core, (2) the gas core, (3) the plasma core, and (4) the beam core. These are compared in terms of specific impulse, propulsive energy utilization and vehicle structure/propellant mass ratio. Antimatter-catalyzed fusion propulsion is also evaluated. The improvements outlined in the presentation to the production capability at Fermilab and other sites would result in a worldwide capacity of several micrograms per year by the middle of the next decade. The conclusions drawn are: (1) conventional antimatter propulsion is not practical due to the large p-bar requirement; (2) antimatter-catalyzed systems can be reasonably considered, as this "solves" the energy cost problem by employing substantially smaller quantities; (3) with current infrastructure, the cost for 1 microgram of p-bars is $62.5 million, but with near-term improvements the cost should drop; (4) a milligram-scale facility would require a $15 billion investment, but could produce 1 mg, at $0.1/kW-hr, for $6.25 million.

  8. "Near-term" Natural Catastrophe Risk Management and Risk Hedging in a Changing Climate

    Science.gov (United States)

    Michel, Gero; Tiampo, Kristy

    2014-05-01

    Competing with analytics - can the insurance market take advantage of seasonal or "near-term" forecasting and temporal changes in risk? Natural perils (re)insurance has been based on models following climatology, i.e. the long-term "historical" average, as opposed to considering the "near-term" and forecasting hazard and risk for the seasons or years to come. Variability and short-term changes in risk are deemed abundant for almost all perils. In addition to hydrometeorological perils, whose changes are widely discussed, earthquake activity might also change over various time-scales, affected by earlier local (or even global) events, regional changes in the distribution of stresses and strains, and more. Only recently has insurance risk modeling of (stochastic) hurricane-years or extratropical-storm-years started considering our ability to forecast climate variability, herewith taking advantage of apparent correlations between climate indicators and the activity of storm events. Once some of these "near-term measures" were in the market, rating agencies and regulators swiftly adopted these concepts, demanding that companies deploy a selection of more conservative "time-dependent" models. This was despite the fact that the ultimate effect of some of these measures on insurance risk was not well understood. Apparent short-term success over the last years in near-term seasonal hurricane forecasting was brought to a halt in 2013, when these models failed to forecast the exceptional shortage of hurricanes, herewith contradicting an active-year forecast. The focus of earthquake forecasting has, in addition, been mostly on high rather than low temporal and regional activity, despite the fact that avoiding losses does not by itself create a product. This presentation sheds light on new risk management concepts for over-regional and global (re)insurance portfolios that take advantage of forecasting changes in risk. The presentation focuses on the "upside" and on new opportunities

  9. Customer requirement modeling and mapping of numerical control machine

    Directory of Open Access Journals (Sweden)

    Zhongqi Sheng

    2015-10-01

    Full Text Available In order to better obtain information about customer requirements and develop products meeting them, it is necessary to systematically analyze and handle the customer requirements. This article uses the product service system of a numerical control machine as its research object and studies customer requirement modeling and mapping oriented toward configuration design. It introduces the concept of the requirement unit, expounds the customer requirement decomposition rules, and establishes a customer requirement model; it builds the house of quality using quality function deployment and confirms the weights of the technical features of product and service; it explores the relevance rules between data using rough set theory, establishes a rule database, and solves for the target values of the technical features of the product. Using an economical turning center series numerical control machine as an example, it verifies the rationality of the proposed customer requirement model.

  10. Building Better Ecological Machines: Complexity Theory and Alternative Economic Models

    Directory of Open Access Journals (Sweden)

    Jess Bier

    2016-12-01

    Full Text Available Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent of this reimagining: Agent-Based models (ABMs. ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.

  11. Modelling of destructive ability of water-ice-jet while machine processing of machine elements

    Directory of Open Access Journals (Sweden)

    Burnashov Mikhail

    2017-01-01

    Full Text Available This paper presents a classification of the most common contaminants appearing on the surfaces of machine elements after long-term service. The existing well-known surface cleaning methods are described and analyzed in the framework of this paper. The article is intended to provide the reader with an understanding of the process of cleaning and removing contamination from machine element surfaces by means of a water-ice-jet with particles prepared beforehand, as well as the process of water-ice-jet formation. The paper describes the advantages of this method, such as low cost, wastelessness, high quality of the processed surface, and minimization of harmful impact upon the environment, which make it differ radically from formerly known methods. The scheme of interaction between the surface and an ice particle is presented. A thermo-physical model of the destruction of contaminants by means of the water-ice-jet cleaning technology was developed on its basis. The thermo-physical model allows us to make the setting of processing modes and water-ice-jet parameters scientifically substantiated and well-grounded.

  12. Numerical modeling and optimization of machining duplex stainless steels

    Directory of Open Access Journals (Sweden)

    Rastee D. Koyee

    2015-01-01

    Full Text Available The shortcomings of analytical and empirical machining models must be overcome if industry demands are to be fulfilled. Three-dimensional finite element modeling (FEM) introduces an attractive alternative to bridge the gap between purely empirical and fundamental scientific quantities and to fulfill industry needs. However, the challenging aspects which hinder the successful adoption of FEM in the machining sector of the manufacturing industry have to be solved first. One of the greatest challenges is the identification of the correct set of machining simulation input parameters. This study presents a new methodology to inversely calculate the input parameters when simulating the machining of standard duplex EN 1.4462 and super duplex EN 1.4410 stainless steels. JMatPro software is first used to model elastic-viscoplastic and physical work material behavior. In order to effectively obtain an optimum set of inversely identified friction coefficients, thermal contact conductance, Cockcroft-Latham critical damage value, percentage reduction in flow stress, and Taylor-Quinney coefficient, Taguchi-VIKOR coupled with a Firefly Algorithm Neural Network System is applied. The optimization procedure effectively minimizes the overall differences between experimentally measured performances, such as cutting forces, tool nose temperature and chip thickness, and the numerically obtained ones at any specified cutting condition. The optimum set of input parameters is verified and used for the next step of the 3D-FEM application. In the next stage of the study, design of experiments, numerical simulations, and fuzzy rule modeling approaches are employed to optimize types of chip breaker, insert shapes, process conditions, cutting parameters, and tool orientation angles based on many important performances. Through this study, not only a new methodology for defining the optimal set of controllable parameters for turning simulations is introduced, but also
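    The inverse identification step can be sketched generically: wrap the forward simulation in an objective that measures the mismatch with measurements and minimize it. The toy "simulation" function, parameters and targets below are invented stand-ins for the FEM runs and the paper's optimization machinery:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward "simulation": maps candidate input parameters
# (friction coefficient mu, thermal contact conductance h) to predicted
# outputs (cutting force, tool temperature). A real study calls FEM here.
def simulate(params):
    mu, h = params
    force = 900.0 * (1.0 + 0.8 * mu)             # N, invented relation
    temperature = 400.0 + 0.02 * h - 50.0 * mu   # deg C, invented relation
    return np.array([force, temperature])

measured = np.array([1260.0, 580.0])             # hypothetical experimental targets

def objective(params):
    # Relative error keeps differently scaled outputs comparable.
    return np.sum(((simulate(params) - measured) / measured) ** 2)

res = minimize(objective, x0=[0.3, 5000.0], method="Nelder-Mead")
print("identified (mu, h):", np.round(res.x, 3), "residual:", res.fun)
```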

  13. Near-term Forecasting of Solar Total and Direct Irradiance for Solar Energy Applications

    Science.gov (United States)

    Long, C. N.; Riihimaki, L. D.; Berg, L. K.

    2012-12-01

    Integration of solar renewable energy into the power grid, like wind energy, is hindered by the variable nature of the solar resource. One challenge of the integration problem for shorter time periods is the phenomenon of "ramping events" where the electrical output of the solar power system increases or decreases significantly and rapidly over periods of minutes or less. Advance warning, of even just a few minutes, allows power system operators to compensate for the ramping. However, the ability for short-term prediction on such local "point" scales is beyond the abilities of typical model-based weather forecasting. Use of surface-based solar radiation measurements has been recognized as a likely solution for providing input for near-term (5 to 30 minute) forecasts of solar energy availability and variability. However, it must be noted that while fixed-orientation photovoltaic panel systems use the total (global) downwelling solar radiation, tracking photovoltaic and solar concentrator systems use only the direct normal component of the solar radiation. Thus even accurate near-term forecasts of total solar radiation will under many circumstances include inherent inaccuracies with respect to tracking systems due to lack of information of the direct component of the solar radiation. We will present examples and statistical analyses of solar radiation partitioning showing the differences in the behavior of the total/direct radiation with respect to the near-term forecast issue. We will present an overview of the possibility of using a network of unique new commercially available total/diffuse radiometers in conjunction with a near-real-time adaptation of the Shortwave Radiative Flux Analysis methodology (Long and Ackerman, 2000; Long et al., 2006). The results are used, in conjunction with persistence and tendency forecast techniques, to provide more accurate near-term forecasts of cloudiness, and both total and direct normal solar irradiance availability and
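    The persistence and tendency techniques mentioned above are simple enough to sketch directly; the one-minute irradiance samples are hypothetical:

```python
import numpy as np

# One-minute global horizontal irradiance samples (W m^-2), hypothetical.
ghi = np.array([612., 618., 605., 590., 571., 555.])

def persistence(series, steps):
    """Persistence: the next values equal the latest observation."""
    return np.full(steps, series[-1])

def tendency(series, steps, window=5):
    """Tendency: extrapolate the mean slope of the last `window` samples."""
    slope = np.polyfit(np.arange(window), series[-window:], 1)[0]
    return series[-1] + slope * np.arange(1, steps + 1)

print(persistence(ghi, 3))   # [555. 555. 555.]
print(tendency(ghi, 3))      # the falling ramp continues
```

    More skillful near-term forecasts would combine such baselines with the cloud information derived from the radiative flux analysis.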

  14. Impact of Model Detail of Synchronous Machines on Real-time Transient Stability Assessment

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Østergaard, Jacob

    2013-01-01

    In this paper, it is investigated how detailed the model of a synchronous machine needs to be in order to assess transient stability using a Single Machine Equivalent (SIME). The results show how the stability mechanism and the stability assessment are affected by the model detail. In order to investigate this, the level of detail of the machine models is varied. Analyses of the results suggest that a 4th-order model may be sufficient to represent synchronous machines in transient stability studies.

  15. Nuclear Reactor Technology Assessment for Near Term Deployment

    International Nuclear Information System (INIS)

    2013-01-01

    One of the IAEA's statutory objectives is to 'seek to accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world.' One way this objective is achieved is through the publication of a range of technical series. Two of these are the IAEA Nuclear Energy Series and the IAEA Safety Standards Series. According to Article III.A.6 of the IAEA Statute, the safety standards establish 'standards of safety for protection of health and minimization of danger to life and property'. The safety standards include the Safety Fundamentals, Safety Requirements and Safety Guides. These standards are written primarily in a regulatory style, and are binding on the IAEA for its own programmes. The principal users are the regulatory bodies in Member States and other national authorities. The IAEA Nuclear Energy Series comprises reports designed to encourage and assist R and D on, and application of, nuclear energy for peaceful uses. This includes practical examples to be used by owners and operators of utilities in Member States, implementing organizations, academia, and government officials, among others. This information is presented in guides, reports on technology status and advances, and best practices for peaceful uses of nuclear energy based on inputs from international experts. The IAEA Nuclear Energy Series complements the IAEA Safety Standards Series. Several IAEA Member States have embarked recently on initiatives to establish or reinvigorate nuclear power programmes. In response, the IAEA has developed several guidance and technical publications to identify with Member States the complex tasks associated with such an undertaking and to recommend the processes that can be used in the performance of this work. A major challenge in this undertaking, especially for newcomer Member States, is the process associated with reactor technology assessment (RTA) for near term deployment. RTA permits the evaluation, selection and deployment

  16. Credit Risk Analysis Using Machine and Deep Learning Models

    Directory of Open Access Journals (Sweden)

    Peter Martey Addo

    2018-04-01

    Full Text Available Due to the advanced technology associated with Big Data, data availability and computing power, most banks or lending institutions are renewing their business models. Credit risk predictions, monitoring, model reliability and effective loan processing are key to decision-making and transparency. In this work, we build binary classifiers based on machine and deep learning models on real data in predicting loan default probability. The top 10 important features from these models are selected and then used in the modeling process to test the stability of binary classifiers by comparing their performance on separate data. We observe that the tree-based models are more stable than the models based on multilayer artificial neural networks. This opens several questions relative to the intensive use of deep learning systems in enterprises.

  17. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  18. Model-Driven Engineering of Machine Executable Code

    Science.gov (United States)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  19. Artificial emotional model based on finite state machine

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-mei; WU Wei-guo

    2008-01-01

    According to basic emotional theory, an artificial emotional model based on the finite state machine (FSM) was presented. In the finite state machine model of emotion, the emotional space included the basic emotional space and the multiple emotional spaces. The emotion-switching diagram was defined and the transition function was developed using a Markov chain and a linear interpolation algorithm. The simulation model was built using the Stateflow and Simulink toolboxes on the Matlab platform, and it included three subsystems: the input subsystem, the emotion subsystem and the behavior subsystem. In the emotional subsystem, the responses of different personalities to external stimuli were described by defining a personal space. The model takes states from an emotional space and updates its state depending on its current state and the state of its input (also a state-emotion). The simulation model realizes the process of switching the emotion from the neutral state to the other basic emotions. The simulation result is shown to correspond to the emotion-switching laws of human beings.
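    A minimal sketch of Markov-chain emotion switching of the kind described; the states, stimuli and transition probabilities are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical emotion states and Markov transition matrices conditioned
# on a stimulus; each row sums to 1.
states = ["neutral", "happy", "angry"]
P = {
    "praise": np.array([[0.2, 0.7, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.3, 0.4, 0.3]]),
    "insult": np.array([[0.2, 0.1, 0.7],
                        [0.3, 0.3, 0.4],
                        [0.1, 0.0, 0.9]]),
}

def step(state, stimulus):
    """Sample the next emotion given the current state and a stimulus."""
    i = states.index(state)
    return rng.choice(states, p=P[stimulus][i])

state = "neutral"
for stimulus in ["praise", "praise", "insult"]:
    state = step(state, stimulus)
    print(stimulus, "->", state)
```

    A personality could be modeled by giving each agent its own transition matrices, biasing how strongly a stimulus moves it away from the neutral state.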

  20. Inverse Analysis and Modeling for Tunneling Thrust on Shield Machine

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2013-01-01

    Full Text Available With the rapid development of sensor and detection technologies, measured data analysis plays an increasingly important role in the design and control of heavy engineering equipment. The paper proposed a method for inverse analysis and modeling based on mass on-site measured data, in which dimensional analysis and data mining techniques were combined. The method was applied to the modeling of the tunneling thrust on shield machines and an explicit expression for thrust prediction was established. Combined with on-site data from a tunneling project in China, the inverse identification of model coefficients was carried out using the multiple regression method. The model residual was analyzed by statistical methods. By comparing the on-site data and the model predicted results in the other two projects with different tunneling conditions, the feasibility of the model was discussed. The work may provide a scientific basis for the rational design and control of shield tunneling machines and also a new way for mass on-site data analysis of complex engineering systems with nonlinear, multivariable, time-varying characteristics.
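    The inverse identification of model coefficients by multiple regression can be sketched with ordinary least squares; the predictor variables and coefficients below are hypothetical stand-ins, not the paper's thrust model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical on-site records: burial depth (m), advance rate (mm/min),
# cutterhead rotation speed (rpm) -> total thrust (kN).
n = 200
depth = rng.uniform(8, 25, n)
advance = rng.uniform(20, 60, n)
rpm = rng.uniform(1.0, 2.5, n)
thrust = 900 * depth + 45 * advance - 800 * rpm + 4000 + rng.normal(0, 500, n)

# Multiple linear regression via least squares on the design matrix.
A = np.column_stack([depth, advance, rpm, np.ones(n)])
coef, _, _, _ = np.linalg.lstsq(A, thrust, rcond=None)
print("identified coefficients:", np.round(coef, 1))

# Residual analysis, as in the paper's statistical checks.
residuals = thrust - A @ coef
print("residual std:", residuals.std().round(1))
```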

  1. Near-term electric-vehicle program. Phase II. Mid-term review summary report

    Energy Technology Data Exchange (ETDEWEB)

    1978-07-27

    The general objective of the Near-Term Electric Vehicle Program is to confirm that, in fact, the complete spectrum of requirements placed on the automobile (e.g., safety, producibility, utility, etc.) can still be satisfied if electric power train concepts are incorporated in lieu of contemporary power train concepts, and that the resultant set of vehicle characteristics are mutually compatible, technologically achievable, and economically achievable. The focus of the approach to meeting this general objective involves the design, development, and fabrication of complete electric vehicles incorporating, where necessary, extensive technological advancements. A mid-term summary is presented of Phase II which is a continuation of the preliminary design study conducted in Phase I of the program. Information is included on vehicle performance and performance simulation models; battery subsystems; control equipment; power systems; vehicle design and components for suspension, steering, and braking; scale model testing; structural analysis; and vehicle dynamics analysis. (LCL)

  2. MODEL RESEARCH OF THE ACTIVE VIBROISOLATION OF MACHINE CABS

    Directory of Open Access Journals (Sweden)

    Jerzy MARGIELEWICZ

    2014-03-01

    Full Text Available The study carried out computer simulations of a mechatronic model of a bridge crane, intended for a theoretical evaluation of the possibility of eliminating the mechanical vibrations affecting the operator's cab of the driven machine. The model studies used fixed-value control, with the vertical displacement of the cab selected as the controlled variable. The research model also included a rheological model of the operator's body. Four overhead cranes with a lifting capacity of 50 t were examined, classified in accordance with the European Union directive concerning the design of cranes into the four HC stiffness classes. The active vibration isolation system, in which two negative feedback loops are distinguished, eliminates the mechanical vibration reaching the operator very well.

  3. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement more effective fault diagnosis and condition monitoring.

  4. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain-floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, solving the problem of snow cover depth modeling is based on both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: Decision Trees, Adaptive Boosting, and Gradient Boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
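    A minimal sketch of the gradient-boosting approach on synthetic meteorological inputs is shown below; the features and their relationship to snow depth are assumptions for illustration, not the observational data from the three stations.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical daily meteorological features (assumed, not the paper's data):
# air temperature, precipitation, day of year.
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(-5, 10, n),    # air temperature, deg C
    rng.gamma(2, 2, n),       # precipitation, mm
    rng.integers(1, 366, n),  # day of year
])
# Synthetic snow depth target, clipped at zero (no negative depth).
y = np.clip(50 - 2 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 5, n), 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```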

  5. Advanced Machine Learning Emulators of Radiative Transfer Models

    Science.gov (United States)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is however computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, and the look-up table generation, as well as for its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators in toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
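    The sketch below illustrates the general idea: a GP emulator that adds training points where predictive uncertainty is highest, with a cheap stand-in function in place of a real RTM. The acquisition rule here is a simplification, not AGAPE's actual design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Cheap stand-in for an expensive RTM (assumption for the sketch).
def expensive_rtm(x):
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.0], [1.0], [2.0]])
y = expensive_rtm(X).ravel()
grid = np.linspace(0, 2, 200).reshape(-1, 1)

# Sequential design: sample where the GP is most uncertain.
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(sigma)]  # point of highest predictive uncertainty
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_rtm(x_new)[0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
rmse = np.sqrt(np.mean((gp.predict(grid) - expensive_rtm(grid).ravel()) ** 2))
print("emulator RMSE on the grid:", round(rmse, 4))
```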

  6. Process Approach for Modeling of Machine and Tractor Fleet Structure

    Science.gov (United States)

    Dokin, B. D.; Aletdinova, A. A.; Kravchenko, M. S.; Tsybina, Y. S.

    2018-05-01

    The existing software complexes for modelling the machine and tractor fleet structure are mostly aimed at solving an optimization task. However, their creators choose a single optimization criterion, incorporate it in the software, and justify why it is the best, without giving a decision maker the opportunity to choose a criterion suited to their enterprise. To analyze the “bottlenecks” of machine and tractor fleet modelling, the authors of this article created a process model that includes adjustment of the machinery-use plan based on searching through alternative technologies. As a result, the following recommendations for software complex development have been worked out: the introduction of a database of alternative technologies; the possibility for a user to change the timing of operations even beyond the allowable limits, with calculation of the incurred loss in that case; the possibility to omit the solution of an optimization task and, when one is needed, to choose the optimization criterion; and the introduction of a graphical display of the annual complex of works, which could be sufficient for the development and adjustment of a business strategy.

  7. Applications and modelling of bulk HTSs in brushless ac machines

    International Nuclear Information System (INIS)

    Barnes, G.J.

    2000-01-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation. (author)

  8. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥ 97% (≤ 3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48^{+0.41}_{-0.23} Gpc^{-3} yr^{-1}, with power-law indices of n_1 ≈ 1.7^{+0.6}_{-0.5} and n_2 ≈ -5.9^{+5.7}_{-0.1} for GRBs above and below a break point of z_1 ≈ 6.8^{+2.8}_{-3.2}. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
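    A toy sketch of the classification setup is given below; the features, labels, and detection rule are synthetic stand-ins for the simulated GRB sample, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for simulated GRB features (flux, duration, redshift).
rng = np.random.default_rng(7)
n = 5000
flux = rng.lognormal(0, 1, n)
duration = rng.lognormal(3, 1, n)
z = rng.uniform(0, 10, n)
X = np.column_stack([flux, duration, z])

# "Detected" label: a noisy, multi-feature threshold that a pure flux cut
# cannot capture perfectly, which is the motivation for ML in the paper.
detected = (np.log(flux) + 0.1 * np.log(duration) + rng.normal(0, 0.5, n)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, detected, cv=5).mean().round(3))
```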

  9. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳ 97% (≲ 3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48^{+0.41}_{-0.23} Gpc^{-3} yr^{-1}, with power-law indices of n_1 ≈ 1.7^{+0.6}_{-0.5} and n_2 ≈ -5.9^{+5.7}_{-0.1} for GRBs above and below a break point of z_1 ≈ 6.8^{+2.8}_{-3.2}. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.

  10. Machine learning based switching model for electricity load forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Shu; Lee, Wei-Jen [Energy Systems Research Center, The University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States); Chen, Luonan [Department of Electronics, Information and Communication Engineering, Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-0013 (Japan)

    2008-06-15

    In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma. (author)
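    The switching architecture can be sketched as follows; KMeans stands in for Bayesian clustering by dynamics (BCD), which has no off-the-shelf scikit-learn implementation, and the load series is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

# Synthetic hourly load series: daily sinusoid plus noise.
rng = np.random.default_rng(3)
hours = np.arange(24 * 200) % 24
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Daily profiles as inputs; next day's mean load as the target.
days = load.reshape(-1, 24)
X, y = days[:-1], days[1:].mean(axis=1)

# Stage 1: unsupervised clustering of the input set (KMeans in place of BCD).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stage 2: one SVR fitted per cluster, in a supervised way.
models = {k: SVR(kernel="rbf", C=10.0).fit(X[labels == k], y[labels == k])
          for k in np.unique(labels)}
print({k: round(m.score(X[labels == k], y[labels == k]), 3)
       for k, m in models.items()})
```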

  11. Machine learning based switching model for electricity load forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Fan Shu [Energy Systems Research Center, University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States); Chen Luonan [Department of Electronics, Information and Communication Engineering, Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-0013 (Japan); Lee, Weijen [Energy Systems Research Center, University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States)], E-mail: wlee@uta.edu

    2008-06-15

    In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma.

  12. Control volume based modelling of compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2004-01-01

    An approach to modelling unsteady compressible flow that is primarily one dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons, but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures.

  13. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with the linear kernel and the sigmoid kernel, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error and the model parameters, proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Compared with a predictor based on RBF neural networks on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is forecasted accurately. 10 refs., 2 figs., 4 tabs.
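    A minimal sketch of the embedding-plus-SVR setup is shown below; the demand figures, embedding dimension, and hyperparameters are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic annual demand series (illustrative numbers; the paper used
# China's coal demand for 1980-2002).
demand = np.array([60, 62, 66, 71, 75, 81, 88, 92, 97, 105,
                   110, 118, 126, 130, 136, 141, 139, 137, 133, 135,
                   138, 145, 150], dtype=float)

m = 3  # assumed embedding dimension
# Time-delay embedding: m consecutive values predict the next one.
X = np.array([demand[i:i + m] for i in range(len(demand) - m)])
y = demand[m:]

model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X, y)
# One-step-ahead forecast from the last m observations.
print(model.predict(demand[-m:].reshape(1, -1)))
```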

  14. Machine learning based switching model for electricity load forecasting

    International Nuclear Information System (INIS)

    Fan Shu; Chen Luonan; Lee, Weijen

    2008-01-01

    In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma

  15. Near-term prospects for information on b decay

    International Nuclear Information System (INIS)

    Thorndike, E.H.

    1981-01-01

    The weak decay of the b quark is one of the more hopeful ways of learning about the relations among the three families. Most of the experimental information on B decay in the next few years will come from the large magnetic detector CLEO at the Cornell Electron Storage Ring. Before I give you my estimate of what we will (and will not) learn, let me remind you of what we already know. As Tony Loomis told you yesterday, the b quark decays (it could have been stable). The results are consistent with the standard model. Some specific other models can be ruled out, but at present a wide range of non-standard models are allowed

  16. The Abstract Machine Model for Transaction-based System Control

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.

    2003-01-31

    Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system amounts to the analog of a particle in thermodynamic theory. These laws permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types, which, when deployed in very large numbers and operated as a system of many interacting markets, yields stable and optimal control of the thermodynamic system.

  17. Subspace identification of Hammerstein models using support vector machines

    International Nuclear Information System (INIS)

    Al-Dhaifallah, Mujahed

    2011-01-01

    System identification is the art of finding mathematical tools and algorithms that build an appropriate mathematical model of a system from measured input and output data. The Hammerstein model, consisting of a memoryless nonlinearity followed by a dynamic linear element, is often a good trade-off, as it can represent some dynamic nonlinear systems very accurately while remaining quite simple. Moreover, the extensive knowledge about LTI system representations can be applied to the dynamic linear block. On the other hand, finding an effective representation for the nonlinearity is an active area of research. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions. In contrast with other approximation methods, SVMs do not require a priori structural information. Furthermore, there are well established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs. The general objective of this research is to develop new subspace algorithms for Hammerstein systems based on SVM regression.

  18. Hidden physics models: Machine learning of nonlinear partial differential equations

    Science.gov (United States)

    Raissi, Maziar; Karniadakis, George Em

    2018-03-01

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.

  19. Conceptual models in man-machine design verification

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1985-01-01

    The need for systematic methods for evaluation of design concepts for new man-machine systems has been rapidly increasing in consequence of the introduction of modern information technology. Direct empirical methods are difficult to apply when functions during rare conditions and support of operator decisions during emergencies are to be evaluated. In this paper, the problems of analytical evaluations based on conceptual models of the man-machine interaction are discussed, and the relations to system design and analytical risk assessment are considered. Finally, a conceptual framework for analytical evaluation is proposed, including several domains of description: 1. The problem space, in the form of a means-end hierarchy; 2. The structure of the decision process; 3. The mental strategies and heuristics used by operators; 4. The levels of cognitive control and the mechanisms related to human errors. Finally, the need for models representing operators' subjective criteria for choosing among available mental strategies and for accepting advice from intelligent interfaces is discussed

  20. Error modeling for surrogates of dynamical systems using machine learning

    Science.gov (United States)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-12-01

    A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed `error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a `local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil--water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
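    A small sketch of the framework's first use is given below: a regression model maps cheap error indicators to the surrogate's QoI error; the indicator features and the error relationship are synthetic assumptions, not the paper's subsurface-flow setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical error indicators cheaply produced by a surrogate model.
rng = np.random.default_rng(5)
n = 2000
residual_norm = rng.gamma(2.0, 1.0, n)
time_step = rng.uniform(0.01, 0.1, n)
indicators = np.column_stack([residual_norm, time_step])

# "True" QoI error from paired high-fidelity/surrogate runs (synthetic).
qoi_error = 0.8 * residual_norm + 5.0 * time_step + rng.normal(0, 0.1, n)

# Train the error model on the first 1500 cases, test on the rest.
error_model = RandomForestRegressor(n_estimators=200, random_state=0)
error_model.fit(indicators[:1500], qoi_error[:1500])

# Use 1 from the paper: correct the surrogate prediction by the
# predicted error at each time instance.
predicted_error = error_model.predict(indicators[1500:])
print("mean abs error of the error model:",
      np.abs(predicted_error - qoi_error[1500:]).mean().round(4))
```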

  1. Near term climate projections for invasive species distributions

    Science.gov (United States)

    Jarnevich, C.S.; Stohlgren, T.J.

    2009-01-01

    Climate change and invasive species pose important conservation issues separately, and should be examined together. We used existing long-term climate datasets for the US to project potential climate change into the future at a finer spatial and temporal resolution than the climate change scenarios generally available. These fine-scale projections, along with new species distribution modeling techniques to forecast the potential extent of invasive species, can provide useful information to aid conservation and invasive species management efforts. We created habitat suitability maps for Pueraria montana (kudzu) under current climatic conditions and potential average conditions up to 30 years in the future. We examined how the potential distribution of this species will be affected by changing climate, and the management implications associated with these changes. Our models indicated that P. montana may increase its distribution, particularly in the Northeast, with climate change and may decrease in other areas. © 2008 Springer Science+Business Media B.V.

  2. Attacking Machine Learning models as part of a cyber kill chain

    OpenAIRE

    Nguyen, Tam N.

    2017-01-01

    Machine learning is gaining popularity in the network security domain as many more network-enabled devices get connected, as malicious activities become stealthier, and as new technologies like Software Defined Networking emerge. Compromising a machine learning model is a desirable goal for attackers. In fact, spammers have been quite successful getting through machine learning enabled spam filters for years. While previous work has been done on adversarial machine learning, none has been considered within the context of a cyber kill chain...

  3. Near-term deployment of carbon capture and sequestration from biorefineries in the United States.

    Science.gov (United States)

    Sanchez, Daniel L; Johnson, Nils; McCoy, Sean T; Turner, Peter A; Mach, Katharine J

    2018-05-08

    Capture and permanent geologic sequestration of biogenic CO2 emissions may provide critical flexibility in ambitious climate change mitigation. However, most bioenergy with carbon capture and sequestration (BECCS) technologies are technically immature or commercially unavailable. Here, we evaluate low-cost, commercially ready CO2 capture opportunities for existing ethanol biorefineries in the United States. The analysis combines process engineering, spatial optimization, and lifecycle assessment to consider the technical, economic, and institutional feasibility of near-term carbon capture and sequestration (CCS). Our modeling framework evaluates least-cost source-sink relationships and aggregation opportunities for pipeline transport, which can cost-effectively transport small CO2 volumes to suitable sequestration sites. A total of 216 existing US biorefineries emit 45 Mt CO2 annually from fermentation, of which 60% could be captured and compressed for pipeline transport for under $25/tCO2. A sequestration credit, analogous to existing CCS tax credits, of $60/tCO2 could incent 30 Mt of sequestration and 6,900 km of pipeline infrastructure across the United States. Similarly, a carbon abatement credit, analogous to existing tradeable CO2 credits, of $90/tCO2 can incent 38 Mt of abatement. Aggregation of CO2 sources enables cost-effective long-distance pipeline transport to distant sequestration sites. Financial incentives under the low-carbon fuel standard in California and recent revisions to existing federal tax credits suggest a substantial near-term opportunity to permanently sequester biogenic CO2. This financial opportunity could catalyze the growth of carbon capture, transport, and sequestration; improve the lifecycle impacts of conventional biofuels; support development of carbon-negative fuels; and help fulfill the mandates of low-carbon fuel policies across the United States. Copyright © 2018 the Author(s). Published by PNAS.

  4. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables, which often are different from and complementary to those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods, and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process.

  5. Interactions among Amazon land use, forests and climate: prospects for a near-term forest tipping point

    OpenAIRE

    Nepstad, Daniel C; Stickler, Claudia M; Soares-Filho, Britaldo; Merry, Frank

    2008-01-01

    Some model experiments predict a large-scale substitution of Amazon forest by savannah-like vegetation by the end of the twenty-first century. Expanding global demands for biofuels and grains, positive feedbacks in the Amazon forest fire regime, and drought may drive a faster process of forest degradation that could lead to a near-term forest dieback. Rising worldwide demands for biofuel and meat are creating powerful new incentives for agro-industrial expansion into Amazon forest regions...

  6. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  7. A Reference Model for Virtual Machine Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2016-07-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.

  8. Modelling open pit shovel-truck systems using the Machine Repair Model

    Energy Technology Data Exchange (ETDEWEB)

    Krause, A.; Musingwini, C. [CBH Resources Ltd., Sydney, NSW (Australia). Endeavor Mine]

    2007-08-15

    Shovel-truck systems for loading and hauling material in open pit mines are now routinely analysed using simulation models or off-the-shelf simulation software packages, which can be very expensive for once-off or occasional use. The simulation models invariably produce different estimations of fleet sizes due to their differing estimations of cycle time. No single model or package can accurately estimate the required fleet size because the fleet operating parameters are characteristically random and dynamic. In order to improve confidence in sizing the fleet for a mining project, at least two estimation models should be used. This paper demonstrates that the Machine Repair Model can be modified and used as a model for estimating truck fleet size in an open pit shovel-truck system. The modified Machine Repair Model is first applied to a virtual open pit mine case study. The results compare favourably to output from other estimation models using the same input parameters for the virtual mine. The modified Machine Repair Model is further applied to an existing open pit coal operation, the Kwagga Section of Optimum Colliery as a case study. Again the results confirm those obtained from the virtual mine case study. It is concluded that the Machine Repair Model can be an affordable model compared to off-the-shelf generic software because it is easily modelled in Microsoft Excel, a software platform that most mines already use.
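    A minimal sketch of the classical Machine Repair Model for a one-shovel, N-truck system is given below; the cycle and loading rates are illustrative, not the case-study values. The shovel plays the role of the repairman: trucks "fail" (return for loading) at rate lam and are "repaired" (loaded) at rate mu.

```python
from math import factorial

# Finite-source (machine repair) queue with a single server:
# P_n = P_0 * N! / (N - n)! * (lam / mu)^n for n = 0..N trucks at the shovel.
def shovel_utilisation(n_trucks, lam, mu):
    r = lam / mu
    weights = [factorial(n_trucks) // factorial(n_trucks - n) * r**n
               for n in range(n_trucks + 1)]
    p0 = 1.0 / sum(weights)
    return 1.0 - p0  # probability the shovel is busy

# Fleet sizing: inspect shovel utilisation as the truck count grows.
lam, mu = 1 / 20.0, 1 / 4.0  # 20 min haul cycle, 4 min loading time (assumed)
for n in range(1, 12):
    print(n, "trucks -> shovel utilisation:", round(shovel_utilisation(n, lam, mu), 3))
```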

  9. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  10. Modeling of tool path for the CNC sheet cutting machines

    Science.gov (United States)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper, the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problems (the generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For the solution of the GTSP, we propose the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.

  11. Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.

    Science.gov (United States)

    Cuperlovic-Culf, Miroslava

    2018-01-11

    Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.

  12. Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling

    Science.gov (United States)

    Cuperlovic-Culf, Miroslava

    2018-01-01

    Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649

  13. Dynamic modeling of an asynchronous squirrel-cage machine; Modelisation dynamique d'une machine asynchrone a cage

    Energy Technology Data Exchange (ETDEWEB)

    Guerette, D.

    2009-07-01

    This document presented a detailed mathematical explanation and validation of the steps leading to the development of a dynamic model of an asynchronous squirrel-cage machine. The MatLab/Simulink software was used to model a wind turbine at variable high speeds. The asynchronous squirrel-cage machine is an electromechanical system coupled to a magnetic circuit. The resulting electromagnetic circuit can be represented as a set of resistances, leakage inductances and mutual inductances. Different models were used for a comparison study, including the Munteanu, Boldea, Wind Turbine Blockset, and SimPowerSystem models. MatLab/Simulink modeling results were in good agreement with the results from other comparable models, and simulation results were in good agreement with analytical calculations. 6 refs, 2 tabs, 9 figs.

  14. Developing a PLC-friendly state machine model: lessons learned

    Science.gov (United States)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ...? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we learned.

  15. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting plays a key role in reducing generation costs and in ensuring the reliability of the power system. However, demand peaks in the power system make forecasts inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load values equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
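    The two definitions at the core of the scheme, the 99th-percentile peak label and MAPE, can be sketched as follows on synthetic data; only the definitions follow the paper, the load series itself is an assumption.

```python
import numpy as np

# Synthetic load series: daily sinusoid plus noise.
rng = np.random.default_rng(9)
load = 100 + 30 * np.sin(np.linspace(0, 40 * np.pi, 5000)) + rng.normal(0, 5, 5000)

# Stage 1 target: peaks are loads at or above the 99th percentile.
threshold = np.percentile(load, 99)
is_peak = load >= threshold

# Stage 2 score: MAPE of a stand-in forecast against the actual load.
forecast = load + rng.normal(0, 3, load.size)
mape = np.mean(np.abs((load - forecast) / load)) * 100
print(f"peaks: {is_peak.sum()}, MAPE: {mape:.2f}%")
```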

  16. Modeling the Virtual Machine Launching Overhead under Fermicloud

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [Fermilab]; Wu, Hao [Fermilab]; Ren, Shangping [IIT, Chicago]; Timm, Steven [Fermilab]; Bernabeu, Gerard [Fermilab]; Noh, Seo-Young [KISTI, Daejeon]

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines to available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data we have obtained on FermiCloud and uses the reference model to guide the cloud bursting process.

  17. An improved modelling of asynchronous machine with skin-effect ...

    African Journals Online (AJOL)

    The conventional method of analysis of Asynchronous machine fails to give accurate results especially when the machine is operated under high rotor frequency. At high rotor frequency, skin-effect dominates causing the rotor impedance to be frequency dependant. This paper therefore presents an improved method of ...

  18. Modelling and Simulation of a Synchronous Machine with Power Electronic Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede

    2005-01-01

    This paper reports the modeling and simulation of a synchronous machine with a power electronic interface in a direct phase model. The implementation of a direct phase model of synchronous machines in MATLAB/SIMULINK is presented. The power electronic system associated with the synchronous machine is modelled in SIMULINK as well. The resulting model can more accurately represent non-ideal situations such as non-symmetrical parameters of the electrical machines and unbalanced conditions. The model may be used for both steady-state and large-signal dynamic analysis. This is particularly useful in systems where a detailed study is needed in order to assess the overall system stability. Simulation studies are performed under various operating conditions. It is shown that the developed model could be used for studies of various applications of synchronous machines, such as in renewable and DG systems.

  19. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, M Mohan; Gorin, Alexander [School of Engineering and Science, Curtin University of Technology, Sarawak (Malaysia)]; Abou-El-Hossein, K A, E-mail: mohan.m@curtin.edu.my [Mechanical and Aeronautical Department, Nelson Mandela Metropolitan University, Port Elizabeth, 6031 (South Africa)]

    2011-02-15

    Machinable glass ceramic, an advanced ceramic, is an attractive material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Much research has been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish when machining machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling operations.
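    Such predictive models are often fitted as a power law in the machining parameters, Ra = C * v^a * f^b. The sketch below recovers power-law coefficients from synthetic speed/feed/roughness data by least squares in log space; the model form and the numbers are assumptions, not the paper's experimental results.

```python
import numpy as np

# Synthetic micro end-milling data (assumed, not the paper's experiments).
rng = np.random.default_rng(11)
speed = rng.uniform(10000, 30000, 50)  # spindle speed, rpm
feed = rng.uniform(50, 300, 50)        # feed rate, mm/min
ra = 0.2 * speed**-0.3 * feed**0.5 * rng.lognormal(0, 0.05, 50)  # roughness

# Taking logs turns Ra = C * v^a * f^b into a linear regression.
A = np.column_stack([np.ones(50), np.log(speed), np.log(feed)])
(logc, a, b), *_ = np.linalg.lstsq(A, np.log(ra), rcond=None)
print("C =", round(np.exp(logc), 4), "a =", round(a, 3), "b =", round(b, 3))
```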

  20. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    International Nuclear Information System (INIS)

    Reddy, M Mohan; Gorin, Alexander; Abou-El-Hossein, K A

    2011-01-01

    Machinable glass ceramic, an advanced ceramic, is an attractive material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Much research has been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish when machining machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling operations.

  1. Modeling and simulation of five-axis virtual machine based on NX

    Science.gov (United States)

    Li, Xiaoda; Zhan, Xianghui

    2018-04-01

    Virtual technology has come to play a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that virtual simulation can be carried out without loss of simulation accuracy. How to use the machine builder of the CAM module to define the kinematic chain and the machine components is described. The simulation of the virtual machine can provide users with alarm information about tool collision and overcutting during the process, and can evaluate and forecast the rationality of the technological process.

  2. Crystal structure representations for machine learning models of formation energies

    Energy Technology Data Exchange (ETDEWEB)

    Faber, Felix [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Lindmaa, Alexander [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden; von Lilienfeld, O. Anatole [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Argonne Leadership Computing Facility, Argonne National Laboratory, 9700 S. Cass Avenue Lemont Illinois 60439; Armiento, Rickard [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden

    2015-04-20

    We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.

  3. A self-calibrating robot based upon a virtual machine model of parallel kinematics

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard

    2016-01-01

    A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from...... a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows...

  4. Development of Mathematical Model for Lifecycle Management Process of New Type of Multirip Saw Machine

    Directory of Open Access Journals (Sweden)

    B. V. Phung

    2017-01-01

    Full Text Available The subject of research is a new type of multirip saw machine with circular reciprocating saw blades. This machine has a number of advantages in comparison with other machines of similar purpose. The paper presents an overview of different types of saw equipment and describes the basic characteristics of the machine under investigation. Managing the lifecycle of the considered machine in a unified information space is necessary to improve quality and competitiveness in the current production environment. Throughout this lifecycle, all the participants, namely designers, technologists, customers, etc., tend to optimize the overall machine design as much as possible. However, this is not always achievable; at the boundaries between lifecycle phases, mismatched and even conflicting requirements arise. For example, improving the mass characteristics can degrade the stability and rigidity of the saw blade, while raising machine output by increasing the motor rotation frequency reduces the stability of the saw blades, and so on. In order to provide a coherent framework for collaboration between the participants in the lifecycle, the article presents a technique for constructing a mathematical model that combines all the different participants' requirements in a unified information model. The article also analyzes the kinematic, dynamic, and technological characteristics of the machine, and describes in detail the controlled parameters, functional constraints, and quality criteria of the machine under consideration. The functional constraints and quality criteria are formulated analytically as functions of the controlled parameters. The proposed algorithm allows fast and exact calculation of all the functional constraints and quality criteria of the machine for a given vector of the control

  5. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  6. Quasilinear Extreme Learning Machine Model Based Internal Model Control for Nonlinear Process

    Directory of Open Access Journals (Sweden)

    Dazi Li

    2015-01-01

    Full Text Available A new strategy for internal model control (IMC) is proposed, using a regression algorithm for a quasilinear model with extreme learning machine (QL-ELM). Aimed at chemical processes with nonlinearity, the learning procedure for the internal model and the inverse model is derived. The proposed QL-ELM is constructed as a linear ARX model with a complicated nonlinear coefficient. It shows good approximation ability and fast convergence. The complicated coefficients are separated into two parts: the linear part is determined by recursive least squares (RLS), while the nonlinear part is identified through the extreme learning machine. The parameters of the linear part and the output weights of the ELM are estimated iteratively. The proposed internal model control is applied to a CSTR process. The effectiveness and accuracy of the proposed method are extensively verified through numerical results.
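
    The core ELM step, a random fixed hidden layer with least-squares output weights, is standard and easy to sketch; the paper's quasilinear ARX wrapper and RLS iteration are omitted here. A minimal sketch with invented toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: hidden weights are drawn at random and never trained;
    only the output weights are solved for, by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y  # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy nonlinear identification example
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_fit(X, y)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```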

  7. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    Science.gov (United States)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; turning is one of the processes that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
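
    The abstract does not reproduce the paper's time and eco-indicator models, so the sketch below only illustrates the shape of such a bi-objective search: a weighted-sum scalarization of machining time and environmental impact over a grid of cutting speed and feed rate. All constants and the power model are hypothetical.

```python
import numpy as np

D, L = 50.0, 120.0       # workpiece diameter and length (mm), invented
ECO_PER_KJ = 2.0e-4      # assumed eco-indicator 99 points per kJ of electricity

def machining_time(v, f):
    """Classic turning time (min): v cutting speed (m/min), f feed (mm/rev)."""
    return (np.pi * D * L) / (1000.0 * v * f)

def environmental_impact(v, f):
    power_kw = 0.8 + 0.02 * v + 1.5 * f          # hypothetical power model
    return ECO_PER_KJ * power_kw * 60.0 * machining_time(v, f)

# Weighted-sum scalarization; real models would normalize the two objectives first.
w = 0.5
grid_v = np.linspace(60, 240, 50)
grid_f = np.linspace(0.05, 0.4, 50)
best = min(((w * machining_time(v, f) + (1 - w) * environmental_impact(v, f), v, f)
            for v in grid_v for f in grid_f))
print("objective, cutting speed, feed:", best)
```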

  8. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    Science.gov (United States)

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.

  9. The near-term impacts of carbon mitigation policies on manufacturing industries

    International Nuclear Information System (INIS)

    Morgenstern, Richard D.; Ho Mun; Shih, J.-S.; Zhang Xuehua

    2004-01-01

    Who pays for new policies to reduce carbon dioxide and other greenhouse gas emissions in the United States? This paper considers a slice of the question by examining the near-term impact on domestic manufacturing industries of both upstream (economy-wide) and downstream (electric power industry only) carbon mitigation policies. Detailed Census data on the electricity use of four-digit manufacturing industries are combined with input-output information on inter-industry purchases to paint a detailed picture of carbon use, including effects on final demand. Regional information on electricity supply and use by region is also incorporated. A relatively simple model is developed which yields estimates of the relative burdens within the manufacturing sector of alternative carbon policies. Overall, the principal conclusion is that within the manufacturing sector (which by definition excludes coal production and electricity generation), only a small number of industries would bear a disproportionate short-term burden of a carbon tax or similar policy. Not surprisingly, an electricity-only policy affects very different manufacturing industries than an economy-wide carbon tax
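
    The inter-industry accounting behind this kind of burden estimate typically rests on the Leontief input-output identity x = (I - A)^(-1) d; total (direct plus indirect) carbon intensity per unit of final demand follows by propagating direct emission coefficients through the Leontief inverse. A toy three-industry version with invented coefficients, offered as a sketch of the mechanics rather than the paper's model:

```python
import numpy as np

# Toy 3-industry economy: A[i, j] = input from industry i per unit output of industry j
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
d = np.array([100.0, 50.0, 80.0])   # final demand by industry (invented)
e = np.array([0.9, 0.2, 0.4])       # direct CO2 per unit of gross output (invented)

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
x = L @ d                           # gross output needed to satisfy final demand
total_intensity = e @ L             # direct + indirect CO2 per unit of final demand
print("gross output:", x)
print("total carbon intensity:", total_intensity)

# Under a carbon tax t per ton, a first-order cost burden per unit of each
# industry's final demand is t * total_intensity.
t = 25.0
print("burden per unit final demand:", t * total_intensity)
```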

  10. Landmine policy in the near-term: a framework for technology analysis and action

    Energy Technology Data Exchange (ETDEWEB)

    Eimerl, D., LLNL

    1997-08-01

    Any effective solution to the problem of leftover landmines and other post-conflict unexploded ordnance (UXO) must take into account the real capabilities of demining technologies and the availability of sufficient resources to carry out demining operations. Economic and operational factors must be included in analyses of humanitarian demining. These factors will provide a framework for using currently available resources and technologies to complete this task in a time frame that is both practical and useful. Since it is likely that reliable advanced technologies for demining are still several years away, this construct applies to the intervening period. It may also provide a framework for utilizing advanced technologies as they become available. This study is an economic system model for demining operations carried out by the developed nations that clarifies the role and impact of technology on the economic performance and viability of these operations. It also provides a quantitative guide to assess the performance penalties arising from gaps in current technology, as well as the potential advantages and desirable features of new technologies that will significantly affect the international community's ability to address this problem. Implications for current and near-term landmine and landmine technology policies are drawn.

  11. Isolation systems influence in the seismic loading propagation analysis applied to an innovative near term reactor

    International Nuclear Information System (INIS)

    Lo Frano, R.; Forasassi, G.

    2010-01-01

    Integrity of a Nuclear Power Plant (NPP) must be ensured throughout the plant life in any design condition and, particularly, in the event of a severe earthquake. To investigate the seismic resistance capability of as-built structures, systems, and components in the event of a Safe Shutdown Earthquake (SSE), and to analyse the related effects on a near term deployment reactor and its internals, a deterministic methodological approach, based on the evaluation of the propagation of seismic waves along the structure, was applied, considering also the use of innovative anti-seismic techniques. In this paper the attention is focused on the use and influence of seismic isolation technologies (e.g. isolators based on passive energy dissipation) that seem able to ensure the full integrity and operability of NPP structures, to enhance seismic safety (improving the design of new NPPs and, if possible, retrofitting existing facilities) and to attain a standardized plant design. For the purpose of this study a numerical assessment of the dynamic response/behaviour of the structures was accomplished by means of the finite element approach, setting up, as accurately as possible, a representative three-dimensional model of the mentioned NPP structures. The obtained results in terms of response spectra (from both the isolated and non-isolated seismic analyses) are herein presented and compared in order to highlight the effectiveness of the isolation technique.

  12. Developing robust arsenic awareness prediction models using machine learning algorithms.

    Science.gov (United States)

    Singh, Sushant K; Taylor, Robert W; Rahman, Mohammad Mahmudur; Pradhan, Biswajeet

    2018-04-01

    Arsenic awareness plays a vital role in ensuring the sustainability of arsenic mitigation technologies. Thus far, however, few studies have dealt with the sustainability of such technologies and its associated socioeconomic dimensions. As a result, arsenic awareness prediction has not yet been fully conceptualized. Accordingly, this study evaluated arsenic awareness among arsenic-affected communities in rural India, using a structured questionnaire to record socioeconomic, demographic, and other sociobehavioral factors with an eye to assessing their association with and influence on arsenic awareness. First a logistic regression model was applied and its results compared with those produced by six state-of-the-art machine-learning algorithms (Support Vector Machine [SVM], Kernel-SVM, Decision Tree [DT], k-Nearest Neighbor [k-NN], Naïve Bayes [NB], and Random Forests [RF]) as measured by their accuracy at predicting arsenic awareness. Most (63%) of the surveyed population was found to be arsenic-aware. Significant arsenic awareness predictors were divided into three types: (1) socioeconomic factors: caste, education level, and occupation; (2) water and sanitation behavior factors: number of family members involved in water collection, distance traveled and time spent for water collection, places for defecation, and materials used for handwashing after defecation; and (3) social capital and trust factors: presence of anganwadi and people's trust in other community members, NGOs, and private agencies. Moreover, individuals with larger social networks contributed positively to arsenic awareness in their communities. Results indicated that both the SVM and the RF algorithms outperformed the others at overall prediction of arsenic awareness, a nonlinear classification problem. Lower-caste, less educated, and unemployed members of the population were found to be the most vulnerable, requiring immediate arsenic mitigation. To this end, local social institutions and NGOs could play a
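
    A minimal scikit-learn rendering of the comparison described above, assuming the survey responses have already been encoded as a numeric feature matrix (the data here are synthetic stand-ins, and the RBF SVC stands in for the paper's Kernel-SVM):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 8))  # stand-in for encoded survey features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm-linear": SVC(kernel="linear"),
    "svm-rbf": SVC(kernel="rbf"),  # kernel-SVM variant
    "tree": DecisionTreeClassifier(),
    "knn": KNeighborsClassifier(),
    "nb": GaussianNB(),
    "rf": RandomForestClassifier(n_estimators=200),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5).mean())
```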

  13. Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration

    Directory of Open Access Journals (Sweden)

    Benjamin Montavon

    2018-02-01

    Full Text Available Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary methods of compensation to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows for fast and automated online measurements of the axes’ motion errors and thermal conditions with comparable accuracy, lower cost, and smaller dimensions as compared to state-of-the-art optical measuring instruments for offline machine tool calibration. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.

  14. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

    Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.

  15. Underlying finite state machine for the social engineering attack detection model

    CSIR Research Space (South Africa)

    Mouton, Francois

    2017-08-01

    Full Text Available one to have a clearer overview of the mental processing performed within the model. While the current model provides a general procedural template for implementing detection mechanisms for social engineering attacks, the finite state machine provides a...

  16. Machine learning in updating predictive models of planning and scheduling transportation projects

    Science.gov (United States)

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  17. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using analysis of variance. The validity of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.
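
    The desirability-function step can be sketched compactly: a smaller-the-better desirability maps predicted power onto [0, 1], and the optimizer maximizes it over the machining parameters. The fitted response-surface coefficients below are invented placeholders, not the paper's model:

```python
import numpy as np
from itertools import product

def power(v, f, d):
    """Hypothetical fitted response surface for power (kW): v speed, f feed, d depth."""
    return 0.5 + 0.004 * v + 2.0 * f + 0.3 * d + 0.002 * v * f

def desirability(y, y_min=0.6, y_max=2.5, r=1.0):
    """Smaller-the-better desirability: 1 at/below y_min, 0 at/above y_max."""
    return np.clip((y_max - y) / (y_max - y_min), 0.0, 1.0) ** r

grid = product(np.linspace(60, 240, 30),    # cutting speed (m/min)
               np.linspace(0.05, 0.3, 30),  # feed (mm/rev)
               np.linspace(0.25, 1.5, 30))  # depth of cut (mm)
best = max(grid, key=lambda p: desirability(power(*p)))
print("optimal parameters:", best, "predicted power:", power(*best))
```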

  18. Risk minimization for near-term deployment of the next generation nuclear plant

    International Nuclear Information System (INIS)

    Lommers, L.; Southworth, F.; Riou, B.; Lecomte, M.

    2008-01-01

    The NGNP program is developing the High Temperature Reactor for high efficiency electricity production and high temperature process heat such as direct hydrogen production. AREVA leads one of three vendor teams supporting the NGNP program. AREVA has developed an NGNP concept based on AREVA's ANTARES indirect cycle HTR concept. The ANTARES-based NGNP concept attempts to manage development risk by using a conservative design philosophy which balances performance and risk. Additional risk mitigation for rapid near-term deployment is also considered. Near-term markets may not require the full capability of the indirect cycle very high temperature concept. A steam cycle concept might better serve near-term markets for high temperature steam with reduced technical and schedule risk. (authors)

  19. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solution method. Fixed delivery times are the main constraint, and each machine requires a different processing time to process a job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
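
    The objective is easy to state concretely. The toy below enumerates machine assignments and sequences for a tiny instance to minimize total tardiness against fixed delivery (due) times on two non-identical machines; the paper solves this class of model exactly with branch and bound on an integer program rather than by enumeration. All job data are invented.

```python
from itertools import permutations, product

# (processing time on machine 0, on machine 1, fixed delivery time) per job - invented
jobs = [(4, 6, 10), (3, 5, 6), (7, 9, 14), (2, 4, 5)]

def machine_tardiness(order, m):
    """Total tardiness of the given job sequence on machine m."""
    t = tard = 0
    for j in order:
        t += jobs[j][m]                   # machine-dependent processing time
        tard += max(0, t - jobs[j][2])    # lateness past the fixed delivery time
    return tard

def total_tardiness(assignment):
    """assignment[j] = machine for job j; best sequence searched per machine."""
    cost = 0
    for m in (0, 1):
        mine = [j for j, a in enumerate(assignment) if a == m]
        cost += min(machine_tardiness(o, m) for o in permutations(mine))
    return cost

best = min(product((0, 1), repeat=len(jobs)), key=total_tardiness)
print("assignment:", best, "total tardiness:", total_tardiness(best))
```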

  20. Preliminary seismic analysis of an innovative near term reactor: Methodology and application

    International Nuclear Information System (INIS)

    Lo Frano, R.; Pugliese, G.; Forasassi, G.

    2010-01-01

    Nuclear power plant (NPP) design is strictly dependent on the seismic hazard and safety aspects associated with the external events of the site. Earthquake-resistant structural design requires realistic and accurate physical and theoretical models to describe the response of NPPs, which depends on both the ground motion characteristics and the dynamic properties of the structures themselves. In order to improve the design of new NPPs and, at the same time, to retrofit existing ones, the dynamic behaviour of structures subjected to the critical seismic excitations that may occur during their expected service life must be evaluated. The aim of this work is to select new effective methods to assess NPP vulnerability by properly capturing the effects of a safe shutdown earthquake (SSE) event on nuclear structures, such as the near term deployment IRIS reactor, and to evaluate the seismic resistance capability of as-built structures, systems, and components. To this end, a validated deterministic methodology based on accurate finite element modelling coupled with substructure and time history approaches was employed to study the overall dynamic behaviour of the relevant NPP components. Moreover, the three-dimensional model that was set up was also validated to evaluate the performance and reliability of the adopted FEM codes (mesh refinement and element type influence). This detailed numerical assessment, involving the most widely used finite element codes (MSC.Marc and Ansys), allowed the dynamic behaviour of structures presenting more or less complicated structural problems to be simulated as accurately as possible. To evaluate the accuracy and reliability of the set-up procedure, as well as to determine its related error, the obtained seismic analysis results in terms of accelerations, propagated from the ground to the auxiliary building systems and components, and displacements were compared highlighting a

  1. Parametric and non-parametric models for lifespan modeling of insulation systems in electrical machines

    OpenAIRE

    Salameh , Farah; Picot , Antoine; Chabert , Marie; Maussion , Pascal

    2017-01-01

    International audience; This paper describes an original statistical approach for the lifespan modeling of electric machine insulation materials. The presented models aim to study the effect of three main stress factors (voltage, frequency and temperature) and their interactions on the insulation lifespan. The proposed methodology is applied to two different insulation materials tested in partial discharge regime. Accelerated ageing tests are organized according to experimental optimization m...

  2. International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines

    CERN Document Server

    Belyaev, Alexander; Krommer, Michael

    2017-01-01

    The papers in this volume present and discuss the frontiers in the mechanics of controlled machines and structures. They are based on papers presented at the International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines held in Vienna in September 2015. The workshop continues a series of international workshops held in Linz (2008) and St. Petersburg (2010).

  3. Assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.

    2010-04-01

    Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.

  4. The Effect of Unreliable Machine for Two Echelons Deteriorating Inventory Model

    Directory of Open Access Journals (Sweden)

    I Nyoman Sutapa

    2014-01-01

    Full Text Available Many researchers have developed two-echelon supply chain models; however, only a few of them consider deteriorating items and unreliable machines. In this paper, we develop an inventory model for deteriorating items in a two-echelon supply chain with an unreliable machine. The machine breakdown time is assumed to be uniformly distributed. The model is solved using a simple heuristic, since a closed-form solution cannot be derived. A numerical example is used to show how the model works. A sensitivity analysis is conducted to show the effect of different lost sales costs in the model. The results show that increasing the lost sales cost increases both the manufacturer's and the buyer's costs; however, the buyer's total cost increases faster than the manufacturer's as the manufacturer's machine becomes more unreliable.

  5. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

    In machining operations, the quality of the surface finish is an important requirement for many workpieces. It is therefore very important to optimize cutting parameters to control the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters used during the turning process. In the development of predictive models, the cutting parameters feed, cutting speed, and depth of cut are considered as model variables. For this purpose, this study compares various machining experiments performed on a CNC vertical machining center with aluminum 6061 workpieces. Multiple regression models are used to predict the surface roughness in the different experiments.

  6. Mathematical model of five-phase induction machine

    Czech Academy of Sciences Publication Activity Database

    Schreier, Luděk; Bendl, Jiří; Chomát, Miroslav

    2011-01-01

    Roč. 56, č. 2 (2011), s. 141-157 ISSN 0001-7043 R&D Projects: GA ČR GA102/08/0424 Institutional research plan: CEZ:AV0Z20570509 Keywords : five-phase induction machines * symmetrical components * spatial wave harmonics Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  7. Photovoltaic (PV) Pricing Trends: Historical, Recent, and Near-Term Projections

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, D.; Barbose, G.; Margolis, R.; Wiser, R.; Darghouth, N.; Goodrich, A.

    2012-11-01

    This report helps to clarify the confusion surrounding different estimates of system pricing by distinguishing between past, current, and near-term projected estimates. It also discusses the different methodologies and factors that impact the estimated price of a PV system, such as system size, location, technology, and reporting methods. These factors, including timing, can have a significant impact on system pricing.

  8. Acute maternal rehydration increases the urine production rate in the near-term human fetus

    NARCIS (Netherlands)

    Haak, MC; Aarnoudse, JG; Oosterhof, H.

    OBJECTIVE: We sought to investigate the effect of a decrease of maternal plasma osmolality produced by hypotonic rehydration on the fetal urine production rate in normal near-term human fetuses. STUDY DESIGN: Twenty-one healthy pregnant women attending the clinic for antenatal care were studied

  9. Impacts of Near-Term Climate Change on Irrigation Demands and Crop Yields in the Columbia River Basin

    Science.gov (United States)

    Rajagopalan, K.; Chinnayakanahalli, K. J.; Stockle, C. O.; Nelson, R. L.; Kruger, C. E.; Brady, M. P.; Malek, K.; Dinesh, S. T.; Barber, M. E.; Hamlet, A. F.; Yorgey, G. G.; Adam, J. C.

    2018-03-01

    Adaptation to a changing climate is critical to address future global food and water security challenges. While these challenges are global, successful adaptation strategies are often generated at regional scales; therefore, regional-scale studies are critical to inform adaptation decision making. While climate change affects both water supply and demand, water demand is relatively understudied, especially at regional scales. The goal of this work is to address this gap, and characterize the direct impacts of near-term (for the 2030s) climate change and elevated CO2 levels on regional-scale crop yields and irrigation demands for the Columbia River basin (CRB). This question is addressed through a coupled crop-hydrology model that accounts for site-specific and crop-specific characteristics that control regional-scale response to climate change. The overall near-term outlook for agricultural production in the CRB is largely positive, with yield increases for most crops and small overall increases in irrigation demand. However, there are crop-specific and location-specific negative impacts as well, and the aggregate regional response of irrigation demands to climate change is highly sensitive to the spatial crop mix. Low-value pasture/hay varieties of crops—typically not considered in climate change assessments—play a significant role in determining the regional response of irrigation demands to climate change, and thus cannot be overlooked. While the overall near-term outlook for agriculture in the region is largely positive, there may be potential for a negative outlook further into the future, and it is important to consider this in long-term planning.

  10. Do differences in future sulfate emission pathways matter for near-term climate? A case study for the Asian monsoon

    Science.gov (United States)

    Bartlett, Rachel E.; Bollasina, Massimo A.; Booth, Ben B. B.; Dunstone, Nick J.; Marenco, Franco; Messori, Gabriele; Bernie, Dan J.

    2018-03-01

    Anthropogenic aerosols could dominate over greenhouse gases in driving near-term hydroclimate change, especially in regions with high present-day aerosol loading such as Asia. Uncertainties in near-future aerosol emissions represent a potentially large, yet unexplored, source of ambiguity in climate projections for the coming decades. We investigated the near-term sensitivity of the Asian summer monsoon to aerosols by means of transient modelling experiments using HadGEM2-ES under two existing climate change mitigation scenarios selected to have similar greenhouse gas forcing, but to span a wide range of plausible global sulfur dioxide emissions. Increased sulfate aerosols, predominantly from East Asian sources, lead to large regional dimming through aerosol-radiation and aerosol-cloud interactions. This results in surface cooling and anomalous anticyclonic flow over land, while abating the western Pacific subtropical high. The East Asian monsoon circulation weakens and precipitation stagnates over Indochina, resembling the observed southern-flood-northern-drought pattern over China. Large-scale circulation adjustments drive suppression of the South Asian monsoon and a westward extension of the Maritime Continent convective region. Remote impacts across the Northern Hemisphere are also generated, including a northwestward shift of West African monsoon rainfall induced by the westward displacement of the Indian Ocean Walker cell, and temperature anomalies in northern midlatitudes linked to propagation of Rossby waves from East Asia. These results indicate that aerosol emissions are a key source of uncertainty in near-term projection of regional and global climate; a careful examination of the uncertainties associated with aerosol pathways in future climate assessments must be highly prioritised.

  11. Ecological and biomedical effects of effluents from near-term electric vehicle storage battery cycles

    Energy Technology Data Exchange (ETDEWEB)

    1980-05-01

    An assessment of the ecological and biomedical effects due to commercialization of storage batteries for electric and hybrid vehicles is given. It deals only with the near-term batteries, namely Pb/acid, Ni/Zn, and Ni/Fe, but the complete battery cycle is considered, i.e., mining and milling of raw materials; manufacture of the batteries, cases and covers; use of the batteries in electric vehicles, including the charge-discharge cycles; recycling of spent batteries; and disposal of nonrecyclable components. The gaseous, liquid, and solid emissions from various phases of the battery cycle are identified. The effluent dispersal in the environment is modeled and ecological effects are assessed in terms of biogeochemical cycles. The metabolic and toxic responses by humans and laboratory animals to constituents of the effluents are discussed. Pertinent environmental and health regulations related to the battery industry are summarized and regulatory implications for large-scale storage battery commercialization are discussed. Each of the seven sections was abstracted and indexed individually for EDB/ERA. Additional information is presented in the seven appendixes, entitled: growth rate scenario for lead/acid battery development; changes in battery composition during discharge; dispersion of stack and fugitive emissions from battery-related operations; methodology for estimating population exposure to total suspended particulates and SO2 resulting from central power station emissions for the daily battery charging demand of 10,000 electric vehicles; determination of As air emissions from Zn smelting; health effects: research related to EV battery technologies. (JGB)

  12. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into ones and zeros; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
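
    The rule-creation step is classic Apriori: level-wise frequent-itemset mining followed by confidence filtering. A minimal pure-Python sketch with invented failure-log transactions (the paper's implementation is in R):

```python
from itertools import combinations

transactions = [frozenset(t) for t in (
    {"vibration", "overheat", "failA"},
    {"vibration", "failA"},
    {"overheat", "failB"},
    {"vibration", "overheat", "failA"},
)]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def apriori(min_support):
    """Level-wise search: a (k+1)-itemset can be frequent only if built from frequent k-itemsets."""
    current = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in current if support(s) >= min_support}
    frequent = set()
    while current:
        frequent |= current
        candidates = {a | b for a in current for b in current if len(a | b) == len(a) + 1}
        current = {c for c in candidates if support(c) >= min_support}
    return frequent

def rules(frequent, min_conf):
    """IF antecedent THEN consequent, kept when confidence meets the threshold."""
    out = []
    for s in (s for s in frequent if len(s) > 1):
        for r in range(1, len(s)):
            for ante in map(frozenset, combinations(s, r)):
                conf = support(s) / support(ante)
                if conf >= min_conf:
                    out.append((set(ante), set(s - ante), conf))
    return out

print(rules(apriori(0.5), 0.8))
```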

  13. Non-linear hybrid control oriented modelling of a digital displacement machine

    DEFF Research Database (Denmark)

    Pedersen, Niels Henrik; Johansen, Per; Andersen, Torben O.

    2017-01-01

    Proper feedback control of digital fluid power machines (Pressure, flow, torque or speed control) requires a control oriented model, from where the system dynamics can be analyzed, stability can be proven and design criteria can be specified. The development of control oriented models for hydraulic...... Digital Displacement Machines (DDM) is complicated due to non-smooth machine behavior, where the dynamics comprises both analog, digital and non-linear elements. For a full stroke operated DDM the power throughput is altered in discrete levels based on the ratio of activated pressure chambers....... In this paper, a control oriented hybrid model is established, which combines the continuous non-linear pressure chamber dynamics and the discrete shaft position dependent activation of the pressure chambers. The hybrid machine model is further extended to describe the dynamics of a Digital Fluid Power...

  14. Modelling of human-machine interaction in equipment design of manufacturing cells

    Science.gov (United States)

    Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming

    2017-08-01

    This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.

  15. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  16. Rotating magnetizations in electrical machines: Measurements and modeling

    Directory of Open Access Journals (Sweden)

    Andreas Thul

    2018-05-01

    Full Text Available This paper studies the magnetization process in electrical steel sheets for rotational magnetizations as they occur in the magnetic circuit of electrical machines. A four-pole rotational single sheet tester is used to generate the rotating magnetic flux inside the sample. A field-oriented control scheme is implemented to improve the control performance. The magnetization process of different non-oriented materials is analyzed and compared.

  17. Behavioral Modeling for Mental Health using Machine Learning Algorithms.

    Science.gov (United States)

    Srividya, M; Mohanavalli, S; Bhalaji, N

    2018-04-03

    Mental health is an indicator of the emotional, psychological and social well-being of an individual. It determines how an individual thinks, feels and handles situations. Positive mental health helps one to work productively and realize one's full potential. Mental health is important at every stage of life, from childhood and adolescence through adulthood. Many factors contribute to mental health problems which lead to mental illness, like stress, social anxiety, depression, obsessive compulsive disorder, drug addiction, and personality disorders. It is becoming increasingly important to determine the onset of mental illness in order to maintain a proper life balance. The nature of machine learning algorithms and Artificial Intelligence (AI) can be fully harnessed for predicting the onset of mental illness. Such applications, when implemented in real time, will benefit society by serving as a monitoring tool for individuals with deviant behavior. This research work proposes to apply various machine learning algorithms such as support vector machines, decision trees, naïve Bayes classifier, K-nearest neighbor classifier and logistic regression to identify the state of mental health in a target group. The responses obtained from the target group for the designed questionnaire were first subjected to unsupervised learning techniques. The labels obtained as a result of clustering were validated by computing the Mean Opinion Score. These cluster labels were then used to build classifiers to predict the mental health of an individual. Populations from various groups, such as high school students, college students and working professionals, were considered as target groups. The research presents an analysis of applying the aforementioned machine learning algorithms to the target groups and also suggests directions for future work.

  18. Rotating magnetizations in electrical machines: Measurements and modeling

    Science.gov (United States)

    Thul, Andreas; Steentjes, Simon; Schauerte, Benedikt; Klimczyk, Piotr; Denke, Patrick; Hameyer, Kay

    2018-05-01

    This paper studies the magnetization process in electrical steel sheets for rotational magnetizations as they occur in the magnetic circuit of electrical machines. A four-pole rotational single sheet tester is used to generate the rotating magnetic flux inside the sample. A field-oriented control scheme is implemented to improve the control performance. The magnetization process of different non-oriented materials is analyzed and compared.

  19. Equivalent model of a dually-fed machine for electric drive control systems

    Science.gov (United States)

    Ostrovlyanchik, I. Yu; Popolzin, I. Yu

    2018-05-01

    The article shows that the mathematical model of a dually-fed machine is complicated by the presence of a controlled voltage source in the rotor circuit. To obtain a mathematical model, the method of a generalized two-phase electric machine is applied, and a rotating orthogonal coordinate system associated with the representing vector of the stator current is chosen. In the chosen coordinate system, the differential equations of electric equilibrium for the windings of the generalized machine (the Kirchhoff equations) are written in operator form, together with the expression for the torque, which determines the electromechanical energy conversion in the machine. The equations are transformed so that they connect the winding currents, which determine the machine torque, with the voltages on these windings. A structural diagram of the machine is associated with the written equations. Based on these equations and the accepted assumptions, expressions were obtained for balancing the EMF of the windings, and on the basis of these expressions an equivalent mathematical model of a dually-fed machine is proposed that is convenient for use in electric drive control systems.

  20. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining has high complexity, strong coupling, and many control factors, and the machining process is prone to failure. In order to improve the accuracy of fault detection for large mechanical equipment, fault trend prediction for machining requires a prediction model built on fault data. In the proposed approach, the fault data are clustered using K-means clustering optimized by a genetic algorithm, and machining features that reflect the correlation dimension of faults are extracted. The spectral characteristics of abnormal vibrations arising while machining complex mechanical parts are analyzed, and features are extracted using multi-component spectral decomposition and Hilbert-based empirical mode decomposition. The extracted features and decomposition results populate the database of an intelligent expert system, which is combined with big data analysis methods to realize machining fault trend prediction. The simulation results show that this method predicts machining fault trends accurately, judges faults in the machining process reliably, and has good application value for analysis and fault diagnosis in the machining process.

  1. A comparison of machine learning and Bayesian modelling for molecular serotyping.

    Science.gov (United States)

    Newton, Richard; Wernisch, Lorenz

    2017-08-11

    Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, owing to the large number of possible combinations. Most of the available training data comprises samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian Model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological
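
    The key trick, synthesizing mixed-serotype training data from single-serotype arrays, can be sketched as follows. How two raw arrays combine is an assumption here (elementwise maximum of probe intensities), and the data are synthetic stand-ins; the classifier is scikit-learn's gradient boosting, one binary model per serotype.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_probes, n_serotypes = 40, 5

# Stand-in single-serotype arrays: each serotype lights up its own probe block
singles = {s: rng.uniform(0.6, 1.0, size=(20, n_probes))
              * (np.arange(n_probes) // (n_probes // n_serotypes) == s)
              + rng.uniform(0.0, 0.1, size=(20, n_probes))
           for s in range(n_serotypes)}

X, Y = [], []
for a in range(n_serotypes):                     # artificial two-serotype mixtures
    for b in range(a, n_serotypes):
        mix = np.maximum(singles[a], singles[b]) # assumed combination rule
        X.append(mix)
        lab = np.zeros(n_serotypes)
        lab[[a, b]] = 1
        Y.append(np.tile(lab, (len(mix), 1)))
X, Y = np.vstack(X), np.vstack(Y)

# One-vs-rest: one gradient boosting machine per serotype
clfs = [GradientBoostingClassifier().fit(X, Y[:, s]) for s in range(n_serotypes)]
print([c.predict(X[:1])[0] for c in clfs])       # per-serotype presence calls
```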

  2. Multi products single machine economic production quantity model with multiple batch size

    Directory of Open Access Journals (Sweden)

    Ata Allah Taleizadeh

    2011-04-01

    Full Text Available In this paper, a multi-product, single-machine economic production quantity model with discrete delivery is developed. A unique cycle length is considered for all produced items, with the assumption that all products are manufactured on a single machine with limited capacity. The proposed model considers different cost items such as production, setup, holding, and transportation costs. The resulting model is formulated as a mixed integer nonlinear programming model. A harmony search algorithm, the extended cutting plane method, and particle swarm optimization are used to solve the proposed model. Two numerical examples are used to analyze and evaluate the performance of the proposed model.
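
    As a point of reference for the "unique cycle length" assumption: in the classic unconstrained common-cycle setting (setup plus holding costs only, with no discrete-shipment or transportation terms, so not the paper's mixed-integer model), the optimal common cycle has a closed form:

```python
import numpy as np

# Invented data for three products sharing one machine
d = np.array([400.0, 300.0, 200.0])     # demand rates
p = np.array([2000.0, 1500.0, 1200.0])  # production rates
A = np.array([50.0, 80.0, 60.0])        # setup costs per production run
h = np.array([2.0, 1.5, 3.0])           # holding costs per unit per period

rho = (d / p).sum()
assert rho < 1, "machine capacity violated"

# Common-cycle optimum: T* = sqrt( 2*sum(A_i) / sum(h_i * d_i * (1 - d_i/p_i)) )
T = np.sqrt(2 * A.sum() / (h * d * (1 - d / p)).sum())
Q = d * T  # lot size per product per cycle
print("cycle length:", T, "lot sizes:", Q)
```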

  3. Interactions among Amazon land use, forests and climate: prospects for a near-term forest tipping point.

    Science.gov (United States)

    Nepstad, Daniel C; Stickler, Claudia M; Filho, Britaldo Soares-; Merry, Frank

    2008-05-27

    Some model experiments predict a large-scale substitution of Amazon forest by savannah-like vegetation by the end of the twenty-first century. Expanding global demands for biofuels and grains, positive feedbacks in the Amazon forest fire regime and drought may drive a faster process of forest degradation that could lead to a near-term forest dieback. Rising worldwide demands for biofuel and meat are creating powerful new incentives for agro-industrial expansion into Amazon forest regions. Forest fires, drought and logging increase susceptibility to further burning while deforestation and smoke can inhibit rainfall, exacerbating fire risk. If sea surface temperature anomalies (such as El Niño episodes) and associated Amazon droughts of the last decade continue into the future, approximately 55% of the forests of the Amazon will be cleared, logged, damaged by drought or burned over the next 20 years, emitting 15-26 Pg of carbon to the atmosphere. Several important trends could prevent a near-term dieback. As fire-sensitive investments accumulate in the landscape, property holders use less fire and invest more in fire control. Commodity markets are demanding higher environmental performance from farmers and cattle ranchers. Protected areas have been established in the pathway of expanding agricultural frontiers. Finally, emerging carbon market incentives for reductions in deforestation could support these trends.

  4. M2 priority screening system for near-term activities: Project documentation. Final report December 11, 1992--May 31, 1994

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-08-12

    From May through August, 1993, the M-2 Group within M Division at LANL conducted a project, with the support of the LANL Integration and Coordination Office (ICO) and Applied Decision Analysis, Inc. (ADA), whose purpose was to develop a system for setting priorities among activities. This phase of the project concentrated on prioritizing near-term activities (i.e., activities that must be conducted in the next six months) necessary for setting up this new group. Potential future project phases will concentrate on developing a tool for setting priorities and developing annual budgets for the group's operations. The priority screening system designed to address the near-term problem was developed, applied in a series of meetings with the group managers, and used as an aid in the assignment of tasks to group members. The model was intended and used as a practical tool for documenting and explaining decisions about near-term priorities, and not as a substitute for M-2 management judgment and decision-making processes.

  5. A modeling method for hybrid energy behaviors in flexible machining systems

    International Nuclear Information System (INIS)

    Li, Yufeng; He, Yan; Wang, Yan; Wang, Yulin; Yan, Ping; Lin, Shenlong

    2015-01-01

    Increasing environmental and economic pressures have led to great concern regarding the energy consumption of machining systems. Understanding the energy behaviors of flexible machining systems is a prerequisite for improving the energy efficiency of these systems. This paper proposes a modeling method to predict energy behaviors in flexible machining systems. The hybrid energy behaviors not only depend on the technical specifications of machine tools and workpieces, but are also significantly affected by individual production scenarios. In the method, hybrid energy behaviors are decomposed into structure-related, state-related, process-related, and assignment-related energy behaviors. The modeling method for the hybrid energy behaviors is based on Colored Timed Object-oriented Petri Nets (CTOPN). The former two types of energy behaviors are modeled by constructing the structure of the CTOPN, whilst the latter two types are simulated by applying colored tokens and associated attributes. Machining experiments on two workpieces in the experimental workshop were undertaken to verify the proposed modeling method. The results showed that the method can provide multi-perspective transparency on energy consumption related to machine tools, workpieces, and production management, and is particularly suitable for flexible manufacturing systems, where frequent changes in machining systems are often encountered. - Highlights: • Energy behaviors in flexible machining systems are modeled in this paper. • Hybrid characteristics of energy behaviors are examined from multiple viewpoints. • The flexible modeling method CTOPN is used to predict the hybrid energy behaviors. • This work offers a multi-perspective transparency on energy consumption

  6. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of the machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that MATLAB/SIMULINK is an effective tool to study the compl...... synchronous machine and the implemented model could be used for studies of various applications of synchronous machines including renewable and DG generation systems....

  7. Comparing statistical and machine learning classifiers: alternatives for predictive modeling in human factors research.

    Science.gov (United States)

    Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann

    2003-01-01

    Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.

  8. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    Science.gov (United States)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is carried out on the basis of a bi-level threshold policy for the activation of the servers. A server returns to repair duty when a pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, throughput, etc., are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is considered to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also facilitated by computational results generated by the ANFIS.
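
    The Runge-Kutta step in this kind of study integrates the Chapman-Kolmogorov equations dp/dt = pQ of the Markov chain. A stripped-down single-server, single-machine-pool version is sketched below; the paper's two heterogeneous servers, vacations, and N1/N2 thresholds enlarge the state space but not the method. All rates are invented.

```python
import numpy as np

M, lam, mu = 5, 0.3, 1.0                 # machines, failure rate, repair rate (invented)
Q = np.zeros((M + 1, M + 1))             # state i = number of failed machines
for i in range(M + 1):
    if i < M:
        Q[i, i + 1] = (M - i) * lam      # one of the working machines fails
    if i > 0:
        Q[i, i - 1] = mu                 # the repairman completes a repair
    Q[i, i] = -Q[i].sum()                # rows of a generator sum to zero

def rk4_step(p, dt):
    """One classical 4th-order Runge-Kutta step for dp/dt = p Q."""
    f = lambda q: q @ Q
    k1 = f(p); k2 = f(p + dt / 2 * k1); k3 = f(p + dt / 2 * k2); k4 = f(p + dt * k3)
    return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.eye(M + 1)[0]                     # all machines initially working
for _ in range(5000):                    # integrate to (near) steady state
    p = rk4_step(p, 0.01)

print("mean number of failed machines:", p @ np.arange(M + 1))
print("machine availability:", (p @ (M - np.arange(M + 1))) / M)
```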

  9. Near-Term Electric Vehicle Program. Phase II: Mid-Term Summary Report.

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-08-01

    The Near Term Electric Vehicle (NTEV) Program is a constituent element of the overall national Electric and Hybrid Vehicle Program that is being implemented by the Department of Energy in accordance with the requirements of the Electric and Hybrid Vehicle Research, Development, and Demonstration Act of 1976. Phase II of the NTEV Program is focused on the detailed design and development of complete electric integrated test vehicles that incorporate current and near-term technology and meet specified DOE objectives. The activities described in this Mid-Term Summary Report are being carried out by two contractor teams. The prime contractors for these contractor teams are the General Electric Company and the Garrett Corporation. This report is divided into two discrete parts. Part 1 describes the progress of the General Electric team and Part 2 describes the progress of the Garrett team.

  10. Near-Term Nuclear Power Revival? A U.S. and International Perspective

    International Nuclear Information System (INIS)

    Braun, C.

    2004-01-01

    In this paper I review the causes for the renewed interest in the near-term revival of nuclear power in the U.S. and internationally. I comment on the progress already made in the U.S. in restarting a second era of commercial nuclear power plant construction, and on what is required going forward, from a utilities perspective, to commit to and implement new plant orders. I review the specific nuclear projects discussed and committed to in the U.S. and abroad in terms of utilities, sites, vendor and supplier teams, and project arrangements. I then offer some tentative conclusions regarding the prospects for a near-term U.S. and global nuclear power revival.

  11. Near-Term Opportunities for Carbon Dioxide Capture and Storage 2007

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    This document contains the summary report of the workshop on global assessments for near-term opportunities for carbon dioxide capture and storage (CCS), which took place on 21-22 June 2007 in Oslo, Norway. It provided an opportunity for direct dialogue between concerned stakeholders in the global effort to accelerate the development and commercialisation of CCS technology. This is part of a series of three workshops on near-term opportunities for this important mitigation option that will feed into the G8 Plan of Action on Climate Change, Clean Energy and Sustainable Development. The ultimate goal of this effort is to present a report and policy recommendations to the G8 leaders at their 2008 summit meeting in Japan.

  12. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View.

    Science.gov (United States)

    Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael

    2016-12-16

    As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.

  13. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    Science.gov (United States)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to monthly Singapore tourist time-series prediction. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first we compare kernel functions, and in the second we compare the developed model with the single SVM model. The results showed that the linear kernel function performs better than the RBF kernel, while the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
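
    A minimal sketch of the WSVM idea, assuming the PyWavelets (pywt) package and substituting a toy series for the tourist-arrival data: decompose and denoise with a discrete wavelet transform, then fit an SVM regressor on lagged values.

```python
# Sketch of the WSVM idea: wavelet-denoise the series, then fit an SVM on
# lagged values. Assumes PyWavelets (pywt); the toy series, wavelet choice
# ("db4"), threshold, and lag count are all illustrative placeholders.
import numpy as np
import pywt
from sklearn.svm import SVR

t = np.arange(200)
series = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(0).normal(size=200)

# Wavelet decomposition and reconstruction (soft-threshold the detail coeffs)
coeffs = pywt.wavedec(series, "db4", level=3)
coeffs[1:] = [pywt.threshold(c, 0.1, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[: len(series)]

# Turn the smoothed series into a supervised problem with 12 monthly lags
lags = 12
X = np.array([smooth[i : i + lags] for i in range(len(smooth) - lags)])
y = smooth[lags:]
model = SVR(kernel="linear").fit(X[:-12], y[:-12])
print("12-step-ahead predictions:", model.predict(X[-12:]))
```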

  14. The Near-Term Impacts of Carbon Mitigation Policies on Manufacturing Industries

    OpenAIRE

    Morgenstern, Richard; Shih, Jhih-Shyang; Ho, Mun; Zhang, Xuehua

    2002-01-01

    Who will pay for new policies to reduce carbon dioxide and other greenhouse gas emissions in the United States? This paper considers a slice of the question by examining the near-term impact on domestic manufacturing industries of both upstream (economy-wide) and downstream (electric power industry only) carbon mitigation policies. Detailed Census data on the electricity use of four-digit manufacturing industries are combined with input-output information on interindustry purchases to paint a ...

  15. Mobile robotics for CANDU reactor maintenance: case studies and near-term improvements

    International Nuclear Information System (INIS)

    Lipsett, M. G.; Rody, K.H.

    1995-01-01

    Although robotics researchers have been promising that robots would soon be performing tasks in hazardous environments, the reality has yet to live up to the hype. The presently available crop of robots suitable for deployment in industrial situations is remotely operated, requiring skilled users. This talk describes cases where mobile robots have been used successfully in CANDU stations, discusses the difficulties in using mobile robots for reactor maintenance, and provides near-term goals for achievable improvements in performance and usefulness. (author)

  16. Photovoltaic System Pricing Trends. Historical, Recent, and Near-Term Projections, 2015 Edition

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bolinger, Mark [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Fu, Ran [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Davidson, Carolyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Darghouth, Naïm [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wiser, Ryan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-08-25

    This presentation, based on research at Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory, provides a high-level overview of historical, recent, and projected near-term PV pricing trends in the United States focusing on the installed price of PV systems. It also attempts to provide clarity surrounding the wide variety of potentially conflicting data available about PV system prices. This PowerPoint is the fourth edition from this series.

  17. Computational Model for Impact-Resisting Critical Thickness of High-Speed Machine Outer Protective Plate

    Science.gov (United States)

    Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei

    2018-05-01

    The blade or surface-grinding segment of a hypervelocity grinding wheel may be damaged when the rotation rate of the machine spindle is too high, and may then fly out. Travelling as a projectile, it may severely endanger field personnel. A critical-thickness model for the protective plate of a high-speed machine is studied in this paper. For ease of analysis, the shapes of the possible impact objects flying from the high-speed machine are simplified into sharp-nose, ball-nose and flat-nose models, whose front-end shapes represent point, line and surface contact, respectively. Impact analysis based on the Johnson-Cook (J-C) model is performed for low-carbon steel plates of different thicknesses. A computational model for the critical thickness of the protective plate of a high-speed machine is established according to the damage characteristics of the thin plate, relating the plate thickness to the mass, shape, size and impact speed of the impact object. An air cannon is used for the impact test, and the model accuracy is validated. This model can guide the selection of the thickness of a single-layer outer protective plate of a high-speed machine.
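
    The Johnson-Cook constitutive relation used in the impact analysis can be written down compactly. The sketch below implements the standard J-C flow stress form; the constants are illustrative low-carbon-steel-like values, not the ones used in the paper.

```python
# Sketch: Johnson-Cook (J-C) flow stress, the constitutive model cited above.
# Constants A, B, n, C, m are illustrative placeholders, not the paper's values.
import math

def johnson_cook_stress(eps, eps_rate, T,
                        A=290e6, B=510e6, n=0.26, C=0.014, m=1.03,
                        eps_rate_ref=1.0, T_room=293.0, T_melt=1800.0):
    """Flow stress [Pa]: (A + B*eps^n) * (1 + C*ln(rate ratio)) * (1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
    return ((A + B * eps**n)
            * (1.0 + C * math.log(eps_rate / eps_rate_ref))
            * (1.0 - T_star**m))

print(johnson_cook_stress(eps=0.1, eps_rate=1e3, T=400.0))
```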

  18. The role of reduced aerosol precursor emissions in driving near-term warming

    International Nuclear Information System (INIS)

    Gillett, Nathan P; Von Salzen, Knut

    2013-01-01

    The representative concentration pathway (RCP) scenarios all assume stringent emissions controls on aerosols and their precursors, and hence include progressive decreases in aerosol and aerosol precursor emissions through the 21st century. Recent studies have suggested that the resultant decrease in aerosols could drive rapid near-term warming, which could dominate the effects of greenhouse gas (GHG) increases in the coming decades. In CanESM2 simulations, we find that under the RCP 2.6 scenario, which includes the fastest decrease in aerosol and aerosol precursor emissions, the contribution of aerosol reductions to warming between 2000 and 2040 is around 30%. Moreover, the rate of warming in the RCP 2.6 simulations declines gradually from its present-day value as GHG emissions decrease. Thus, while aerosol emission reductions contribute to gradual warming through the 21st century, we find no evidence that aerosol emission reductions drive particularly rapid near-term warming in this scenario. In the near-term, as in the long-term, GHG increases are the dominant driver of warming. (letter)

  19. Near-Term Actions to Address Long-Term Climate Risk

    Science.gov (United States)

    Lempert, R. J.

    2014-12-01

    Addressing climate change requires effective long-term policy making, which occurs when reflecting on potential events decades or more in the future causes policy makers to choose near-term actions different than those they would otherwise pursue. Contrary to some expectations, policy makers do sometimes make such long-term decisions, but not as commonly and successfully as climate change may require. In recent years however, the new capabilities of analytic decision support tools, combined with improved understanding of cognitive and organizational behaviors, has significantly improved the methods available for organizations to manage longer-term climate risks. In particular, these tools allow decision makers to understand what near-term actions consistently contribute to achieving both short- and long-term societal goals, even in the face of deep uncertainty regarding the long-term future. This talk will describe applications of these approaches for infrastructure, water, and flood risk management planning, as well as studies of how near-term choices about policy architectures can affect long-term greenhouse gas emission reduction pathways.

  20. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to the change of running state of the machine, and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than the least squares (LS) analysis in terms of modeling estimation accuracy.

  1. Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools

    Science.gov (United States)

    Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu

    2018-03-01

    Thermal error is the main factor affecting the accuracy of precision machining. Reflecting the current research focus on machine tool thermal error, this paper presents an experimental study of thermal error testing and intelligent modeling for the spindle of vertical high-speed CNC machine tools. Several thermal error testing devices are designed, in which 7 temperature sensors measure the temperature of the machine tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inversion-prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network techniques.
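
    A hedged sketch of the modelling pipeline described above (PCA over the temperature channels followed by a small neural network), with synthetic data standing in for the 7-sensor measurements:

```python
# Sketch: reduce 7 temperature channels with PCA, then map the principal
# components to spindle thermal displacement with a small neural network.
# Data, component count, and network size are illustrative, not the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
temps = rng.normal(size=(500, 7))                 # 7 temperature sensors
disp = temps[:, :3] @ np.array([5.0, 3.0, 1.0]) + rng.normal(scale=0.2, size=500)

model = make_pipeline(
    PCA(n_components=3),                          # optimize measuring points
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(temps[:400], disp[:400])
print("R^2 on held-out data:", model.score(temps[400:], disp[400:]))
```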

  2. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining churn prediction models and to apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...

  3. Measuring and Modelling Delays in Robot Manipulators for Temporally Precise Control using Machine Learning

    DEFF Research Database (Denmark)

    Andersen, Thomas Timm; Amor, Heni Ben; Andersen, Nils Axel

    2015-01-01

    … and separate. In this paper, we present a data-driven methodology for separating and modelling inherent delays during robot control. We show how both actuation and response delays can be modelled using modern machine learning methods. The resulting models can be used to predict the delays as well...

  4. An Introduction to Topic Modeling as an Unsupervised Machine Learning Way to Organize Text Information

    Science.gov (United States)

    Snyder, Robin M.

    2015-01-01

    The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…
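
    For concreteness, a minimal topic-modeling sketch using scikit-learn's LDA implementation on a toy corpus; the corpus and topic count are purely illustrative:

```python
# Sketch: fitting a small LDA topic model, illustrating the "unsupervised
# organization of text" idea described above. The four-document corpus is a
# toy placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["solar power grid energy storage",
        "wind turbine energy generation",
        "machine learning model training data",
        "neural network training prediction"]

counts = CountVectorizer().fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```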

  5. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Science.gov (United States)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on the Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by subdomain technique based on the Maxwell's equation. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, less computation source occupying and shorter time consuming, and meanwhile achieves the approximate accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  6. Multiphysics Modeling of a Permanent Magnet Synchronous Machine

    Directory of Open Access Journals (Sweden)

    MARTIS Claudia

    2012-10-01

    Full Text Available This paper analyzes noise and vibration in PMSMs. There are three types of vibration in electrical machines: electromagnetic, mechanical and aerodynamic. Electromagnetic forces are the main cause of noise and vibration in PMSMs. It is very important to calculate precisely the natural frequencies of the stator system: if a radial force (radial forces being the main cause of electromagnetic vibration) has a frequency close to the natural frequency of the stator system for the same vibrational mode order, this force can produce dangerous vibration in the stator system. The natural frequencies for the stator system of a PMSM have been calculated. Finally, a structural analysis has been made, pointing out the radial displacement and stress for the chosen PMSM.

  7. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental support vector machine (ISVM) learning is a new method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples that affect learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors whose distance exceeds a specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but can also preserve the important ones. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve prediction accuracy and training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
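
    A rough sketch of the margin-vector filtering idea (not the authors' exact KKT-based procedure): measure each support vector's geometric distance to the hyperplane, drop the distant ones, and retrain together with a new data batch. The median threshold below is an invented placeholder.

```python
# Sketch of margin-vector filtering for incremental SVM updating: keep only
# support vectors close to the decision hyperplane, then retrain on them plus
# the new batch. Threshold and data are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, random_state=0)
old_X, old_y, new_X, new_y = X[:300], y[:300], X[300:], y[300:]

svm = SVC(kernel="linear").fit(old_X, old_y)
sv, sv_y = svm.support_vectors_, old_y[svm.support_]

# Geometric distance of each support vector to the hyperplane
w_norm = np.linalg.norm(svm.coef_)
dist = np.abs(svm.decision_function(sv)) / w_norm
keep = dist <= np.median(dist)          # drop far (likely noisy) margin vectors

X_inc = np.vstack([sv[keep], new_X])
y_inc = np.concatenate([sv_y[keep], new_y])
svm_updated = SVC(kernel="linear").fit(X_inc, y_inc)
print("updated model trained on", len(X_inc), "samples instead of", len(X))
```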

  8. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
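
    The progressive-sampling idea can be illustrated without full Bayesian optimization: evaluate candidate configurations on a small sample and re-evaluate only the survivors on progressively larger samples. In the sketch below, a random candidate draw stands in for the acquisition function, and all sample sizes are illustrative.

```python
# Sketch of progressive sampling for model selection: score many candidates
# cheaply on a small subset, keep the top quarter, and rescore survivors on
# larger subsets. Real Bayesian optimization would guide the candidate draw.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, random_state=0)
rng = np.random.default_rng(0)
candidates = [{"C": float(10 ** rng.uniform(-2, 2))} for _ in range(16)]

for n in (250, 1000, 4000):                      # progressively larger samples
    scores = [cross_val_score(SVC(**p), X[:n], y[:n], cv=3).mean()
              for p in candidates]
    order = np.argsort(scores)[::-1]
    candidates = [candidates[i] for i in order[: max(1, len(order) // 4)]]

print("selected hyper-parameters:", candidates[0])
```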

  9. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by building a workload time series using the cloud model, and stipulating a general workload-aware migration (WAM) criterion, the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
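
    A minimal sketch of the workload-aware decision rule: a simple linear-trend forecast stands in for the paper's cloud-model predictor, and migration is triggered by the forecast rather than by a momentary reading. The bounds and workload histories are invented.

```python
# Sketch: trigger a VM migration only when the FORECAST workload breaches the
# upper bound, so momentary spikes do not cause needless migrations. A linear
# trend replaces the paper's cloud-model predictor; thresholds are invented.
import numpy as np

UPPER = 0.85   # migration threshold; a LOWER bound for consolidation would
               # be handled symmetrically and is omitted here

def forecast(history, horizon=1):
    """Linear-trend forecast over the recent workload window."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return intercept + slope * (len(history) - 1 + horizon)

def should_migrate(history):
    return forecast(history) > UPPER

spiky = [0.6] * 8 + [0.95]                   # momentary peak, flat trend
rising = list(np.linspace(0.5, 0.84, 9))     # sustained growth
print(should_migrate(spiky), should_migrate(rising))   # False True
```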

  10. Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model

    Science.gov (United States)

    Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2018-04-01

    A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of a machine learning technique known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
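
    A compact, hand-rolled sketch of the hybrid scheme under toy assumptions: an echo-state reservoir and an imperfect knowledge-based prediction both feed a ridge-regression readout, and the hybrid one-step forecast is compared against the imperfect model alone. The "true" system and the flawed model below are stand-ins; all sizes and constants are illustrative.

```python
# Sketch of hybrid forecasting: combine a reservoir (echo state network) with
# an imperfect knowledge-based model via a linear readout trained by ridge
# regression. Toy signals; one-step-ahead, open-loop prediction.
import numpy as np

rng = np.random.default_rng(0)
T = 2000
t = np.arange(T)
true = np.sin(0.1 * t) + 0.3 * np.sin(0.31 * t)
imperfect = np.sin(0.1 * t)              # knowledge-based model misses one term

N = 200                                  # reservoir size
Win = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius 0.9

states = np.zeros((T, N))
r = np.zeros(N)
for k in range(1, T):
    r = np.tanh(W @ r + Win[:, 0] * true[k - 1])
    states[k] = r

# Readout sees the reservoir state AND the knowledge-based prediction
features = np.hstack([states, imperfect[:, None]])
A, b = features[200:1800], true[200:1800]        # training rows (transient cut)
ridge = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ b)

pred = features[1800:] @ ridge                   # one-step test forecasts
rmse_hybrid = np.sqrt(np.mean((pred - true[1800:]) ** 2))
rmse_model = np.sqrt(np.mean((imperfect[1800:] - true[1800:]) ** 2))
print(f"RMSE hybrid {rmse_hybrid:.4f} vs knowledge-based alone {rmse_model:.4f}")
```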

  11. A Hybrid Least Square Support Vector Machine Model with Parameters Optimization for Stock Forecasting

    Directory of Open Access Journals (Sweden)

    Jian Chai

    2015-01-01

    Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark against which to compare the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.

  12. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  13. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by subdomain technique. • The magnetic scalar potential on rotor surface is modeled as trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on the Kirchhoff’s law, while the field in the stator slot, slot opening and air-gap is calculated by subdomain technique based on the Maxwell’s equation. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, less computation source occupying and shorter time consuming, and meanwhile achieves the approximate accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  14. Novel Simplified Model for Asynchronous Machine with Consideration of Frequency Characteristic

    Directory of Open Access Journals (Sweden)

    Changchun Cai

    2014-01-01

    Full Text Available The frequency characteristic of electric equipment should be considered in the digital simulation of power systems. The traditional third-order transient model of the asynchronous machine excludes not only the stator transient but also the frequency characteristics, which narrows the model's range of application and results in large errors under some special conditions. Based on the physical equivalent circuit and the Park model for asynchronous machines, this study proposes a novel third-order transient machine model that takes the frequency characteristic into consideration. Under the new definitions of variables, the voltages behind the reactance are redefined as linear functions of the flux linkage, so that the rotor voltage equation is not associated with the derivative terms of frequency. However, the derivative terms of frequency should not always be ignored in applications of the traditional third-order transient model. Compared with the traditional third-order transient model, the novel simplified third-order transient model with consideration of the frequency characteristic is more accurate without increasing the order or complexity. Simulation results show that the novel third-order transient model of the asynchronous machine is suitable, effective, and more accurate than the widely used traditional simplified third-order transient model under some special conditions with drastic frequency fluctuations.

  15. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care

  16. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Full Text Available Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine, they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of broken rotor bar faults using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing anomalies in machine local variables such as torque, magnetic flux, stator current and neutral voltage signatures. The aim of this research is to summarize the existing models and to develop new models of squirrel cage induction motors with consideration of the neutral voltage, and to study the effect of broken rotor bars on different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing simulation and experimental results. The obtained results show the effectiveness of the model, and allow detection and diagnosis of these defects.
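
    One of the classic indicators mentioned above, stator current signature analysis, can be sketched directly: a broken-bar fault produces sidebands at (1 ± 2s)·f around the supply frequency f, where s is the slip. The signal below is synthetic.

```python
# Sketch of motor current signature analysis (MCSA) for broken rotor bars:
# look for spectral sidebands at (1 ± 2s)·f around the supply frequency f.
# The stator current here is synthetic, with 2% sidebands injected.
import numpy as np

fs, f, s = 2000.0, 50.0, 0.03                  # sampling rate, supply freq, slip
t = np.arange(0, 10, 1 / fs)
current = (np.sin(2 * np.pi * f * t)
           + 0.02 * np.sin(2 * np.pi * (1 - 2 * s) * f * t)   # lower sideband
           + 0.02 * np.sin(2 * np.pi * (1 + 2 * s) * f * t))  # upper sideband

spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for target in ((1 - 2 * s) * f, (1 + 2 * s) * f):
    k = np.argmin(np.abs(freqs - target))
    print(f"{freqs[k]:.1f} Hz sideband amplitude: {2 * spectrum[k]:.3f}")
```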

  17. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    Energy Technology Data Exchange (ETDEWEB)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B [Institute of Advanced Technology, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Marhaban, Mohammad Hamiruce [Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Mansor, Shattri B, E-mail: sahragard@yahoo.com [Department of Civil Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia)

    2011-02-15

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light, including its neglect of ambient light, of light reflected off the ground, and of context-specific information. A previously developed color model has only been tested for a few geographical places in North America, and its applicability elsewhere in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in a normalized HSV color space can be used to predict color. In this paper, a new daylight color model giving the color of daylight for a broad range of sky conditions is developed to suit the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces are estimated using the developed color model and a surface reflection function. The results are shown to be highly reliable.

  18. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    Science.gov (United States)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B.; Hamiruce Marhaban, Mohammad; Mansor, Shattri B.

    2011-02-01

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light, including its neglect of ambient light, of light reflected off the ground, and of context-specific information. A previously developed color model has only been tested for a few geographical places in North America, and its applicability elsewhere in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in a normalized HSV color space can be used to predict color. In this paper, a new daylight color model giving the color of daylight for a broad range of sky conditions is developed to suit the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces are estimated using the developed color model and a surface reflection function. The results are shown to be highly reliable.

  19. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    International Nuclear Information System (INIS)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B; Marhaban, Mohammad Hamiruce; Mansor, Shattri B

    2011-01-01

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light, including its neglect of ambient light, of light reflected off the ground, and of context-specific information. A previously developed color model has only been tested for a few geographical places in North America, and its applicability elsewhere in the world is open to question. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in a normalized HSV color space can be used to predict color. In this paper, a new daylight color model giving the color of daylight for a broad range of sky conditions is developed to suit the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces are estimated using the developed color model and a surface reflection function. The results are shown to be highly reliable.

  20. Modeling human-machine interactions for operations room layouts

    Science.gov (United States)

    Hendy, Keith C.; Edwards, Jack L.; Beevis, David

    2000-11-01

    The LOCATE layout analysis tool was used to analyze three preliminary configurations for the Integrated Command Environment (ICE) of a future USN platform. LOCATE develops a cost function reflecting the quality of all human-human and human-machine communications within a workspace. This proof-of-concept study showed little difference between the efficacy of the preliminary designs selected for comparison. This was thought to be due to the limitations of the study, which included the assumption of similar size for each layout and a lack of accurate measurement data for various objects in the designs, due largely to their notional nature. Based on these results, the USN offered an opportunity to conduct a LOCATE analysis using more appropriate assumptions. A standard crew was assumed, and subject matter experts agreed on the communications patterns for the analysis. Eight layouts were evaluated with the concepts of coordination and command factored into the analysis. Clear differences between the layouts emerged. The most promising design was refined further by the USN, and a working mock-up built for human-in-the-loop evaluation. LOCATE was applied to this configuration for comparison with the earlier analyses.
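
    A LOCATE-style layout cost can be sketched as a weighted sum over communication pairs; the positions, roles, and weights below are invented for illustration, and the real tool's cost terms differ.

```python
# Sketch of a layout cost in the spirit of LOCATE: sum over operator pairs of
# (communication importance) x (penalty growing with separation). All names,
# positions, and weights are invented placeholders.
import math

positions = {"CO": (0, 0), "TAO": (2, 1), "sonar": (5, 4)}      # metres
importance = {("CO", "TAO"): 0.9, ("CO", "sonar"): 0.4, ("TAO", "sonar"): 0.7}

def layout_cost(positions, importance):
    cost = 0.0
    for (a, b), w in importance.items():
        cost += w * math.dist(positions[a], positions[b])
    return cost

print(f"layout cost: {layout_cost(positions, importance):.2f}")
```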

  1. Magnetic saturation in semi-analytical harmonic modeling for electric machine analysis

    NARCIS (Netherlands)

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.

    2016-01-01

    A semi-analytical method based on the harmonic modeling (HM) technique is presented for the analysis of the magneto-static field distribution in the slotted structure of rotating electric machines. In contrast to the existing literature, the proposed model does not require the assumption of infinite

  2. Advanced induction machine model in phase coordinates for wind turbine applications

    DEFF Research Database (Denmark)

    Fajardo, L.A.; Iov, F.; Hansen, Anca Daniela

    2007-01-01

    In this paper an advanced phase coordinates squirrel cage induction machine model with time varying electrical parameters affected by magnetic saturation and rotor deep bar effects, is presented. The model uses standard data sheet for characterization of the electrical parameters, it is developed...

  3. Trade-off results and preliminary designs of Near-Term Hybrid Vehicles

    Science.gov (United States)

    Sandberg, J. J.

    1980-01-01

    Phase I of the Near-Term Hybrid Vehicle Program involved the development of preliminary designs of electric/heat engine hybrid passenger vehicles. The preliminary designs were developed on the basis of mission analysis, performance specification, and design trade-off studies conducted independently by four contractors. The resulting designs involve parallel hybrid (heat engine/electric) propulsion systems with significant variation in component selection, power train layout, and control strategy. Each of the four designs is projected by its developer as having the potential to substitute electrical energy for 40% to 70% of the petroleum fuel consumed annually by its conventional counterpart.

  4. Hardware based technology assessment in support of near-term space fission missions

    International Nuclear Information System (INIS)

    Houts, Mike; Van Dyke, Melissa; Godfroy, Tom; Martin, James; Bragg-Sitton, Shannon; Dickens, Ricky; Salvail, Pat; Williams, Eric; Harper, Roger; Hrbud, Ivana; Carter, Robert

    2003-01-01

    Fission technology can enable rapid, affordable access to any point in the solar system. If fission propulsion systems are to be developed to their full potential, however, near-term customers must be identified and initial fission systems successfully developed, launched, and utilized. Successful utilization will most likely occur if frequent, significant hardware-based milestones can be achieved throughout the program. Achieving these milestones will depend on the capability to perform highly realistic non-nuclear testing of nuclear systems. This paper discusses ongoing and potential research that could help achieve these milestones.

  5. Hardware Based Technology Assessment in Support of Near-Term Space Fission Missions

    Science.gov (United States)

    Houts, Mike; VanDyke, Melissa; Godfroy, Tom; Martin, James; BraggSitton, Shannon; Carter, Robert; Dickens, Ricky; Salvail, Pat; Williams, Eric; Harper, Roger

    2003-01-01

    Fission technology can enable rapid, affordable access to any point in the solar system. If fission propulsion systems are to be developed to their full potential, however, near-term customers must be identified and initial fission systems successfully developed, launched, and utilized. Successful utilization will most likely occur if frequent, significant hardware-based milestones can be achieved throughout the program. Achieving these milestones will depend on the capability to perform highly realistic non-nuclear testing of nuclear systems. This paper discusses ongoing and potential research that could help achieve these milestones.

  6. Evaluation of selected near-term energy-conservation options for the Midwest

    Energy Technology Data Exchange (ETDEWEB)

    Evans, A.R.; Colsher, C.S.; Hamilton, R.W.; Buehring, W.A.

    1978-11-01

    This report evaluates the potential for implementation of near-term energy-conservation practices for the residential, commercial, agricultural, industrial, transportation, and utility sectors of the economy in twelve states: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin. The information used to evaluate the magnitude of achievable energy savings includes regional energy use, the regulatory/legislative climate relating to energy conservation, technical characteristics of the measures, and their feasibility of implementation. This work is intended to provide baseline information for an ongoing regional assessment of energy and environmental impacts in the Midwest. 80 references.

  7. Survey of tritium wastes and effluents in near-term fusion-research facilities

    International Nuclear Information System (INIS)

    Bickford, W.E.; Dingee, D.A.; Willingham, C.E.

    1981-08-01

    The use of tritium control technology in near-term research facilities has been studied for both the magnetic and inertial confinement fusion programs. This study focused on the routine generation of tritium wastes and effluents, with little reference to accidents or facility decommissioning. This report serves as an independent review of the effectiveness of planned control technology and of the radiological hazards associated with operation. The facilities examined for the magnetic fusion program included the Fusion Materials Irradiation Testing Facility (FMIT), the Tritium Systems Test Assembly (TSTA), and the Tokamak Fusion Test Reactor (TFTR), while the NOVA and Antares facilities were examined for the inertial confinement program.

  8. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60-0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001), and the previously published HALT-C model was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high risk for developing HCC. PMID:24169273

  9. A proposed model for assessing service quality in small machining and industrial maintenance companies

    Directory of Open Access Journals (Sweden)

    Morvam dos Santos Netto

    2014-11-01

    Full Text Available Machining and industrial maintenance services include repair (corrective maintenance) of equipment, activities involving the assembly and disassembly of equipment, fault diagnosis, machining operations, forming operations, welding processes, and the assembly and testing of equipment. This article proposes a model for assessing the quality of services provided by small machining and industrial maintenance companies, since there is a gap in the literature regarding this issue and because of the importance of small service companies in the socio-economic development of the country. The model is an adaptation of the SERVQUAL instrument, and the criteria determining the quality of services are designed according to the service cycle of a typical small machining and industrial maintenance company. In this sense, the Moments of Truth have been considered in the preparation of two separate questionnaires. The first questionnaire contains 24 statements that reflect the expectations of customers, and the second contains 24 statements that measure perceptions of service performance. An additional item was included in each questionnaire to assess, respectively, the overall expectation about the services and the overall company performance. It is therefore a model that considers the interfaces of the client/supplier relationship, the peculiarities of the machining and industrial maintenance service sector, and the company size.

  10. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  11. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. These 43 error components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is determined by the position of the cutting tool centre point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process comprises detecting the present tool path, analysing the geometric error of the RTTTR five-axis CNC machine tool, translating the current component positions to compensated positions using the kinematic error model, converting the newly created positions to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
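
    A hedged sketch of the homogeneous-transformation-matrix (HTM) bookkeeping underlying such a model, showing a single axis and a single pair of small error terms rather than the paper's 43 components:

```python
# Sketch: composing homogeneous transformation matrices (HTMs) to propagate a
# small geometric error into a tool-centre-point (TCP) deviation. One axis and
# two illustrative error terms only; the paper's model chains 43 components.
import numpy as np

def translation(x, y, z):
    T = np.eye(4); T[:3, 3] = (x, y, z); return T

def small_rotation(ex, ey, ez):
    """First-order HTM for small angular errors (radians)."""
    T = np.eye(4)
    T[:3, :3] += np.array([[0, -ez, ey], [ez, 0, -ex], [-ey, ex, 0]])
    return T

nominal = translation(0.0, 0.0, 300.0)                 # ideal Z-axis move (mm)
error = small_rotation(0.0, 20e-6, 0.0) @ translation(5e-3, 0.0, 0.0)
actual = nominal @ error

tcp = np.array([0.0, 0.0, 0.0, 1.0])                   # tool tip in axis frame
deviation = (actual - nominal) @ tcp
print("TCP deviation (mm):", deviation[:3])
```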

  12. Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel

    Science.gov (United States)

    Outeiro, José C.; Umbrello, Domenico; Pina, José C.; Rizzuti, Stefania

    2007-05-01

    Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly, as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing, since they affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and residual stresses in machining are absolutely necessary. In this work, a two-dimensional finite element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting of AISI H13 tool steel. To validate the model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses in the machined layers were compared. The proposed FE model made it possible to investigate the influence of tool geometry, cutting regime parameters and tool wear on the residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results permit the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting edge radii produces better results than chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.

  13. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented and tested on a real car. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is superior to that of DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.

  14. Contribution of maternal thyroxine to fetal thyroxine pools in normal rats near term

    International Nuclear Information System (INIS)

    Morreale de Escobar, G.; Calvo, R.; Obregon, M.J.; Escobar Del Rey, F.

    1990-01-01

    Normal dams were equilibrated isotopically with [125I]T4 infused from 11 to 21 days of gestation, at which time maternal and fetal extrathyroidal tissues were obtained to determine their [125I]T4 and T4 contents. The specific activity of the [125I]T4 in the fetal tissues was lower than in maternal T4 pools. The extent of this change allows evaluation of the net contribution of maternal T4 to the fetal extrathyroidal T4 pools. At 21 days of gestation, near term, this represents 17.5 +/- 0.9% of the T4 in fetal tissues, a value considerably higher than previously calculated. The methodological approach was validated in dams given a goitrogen to block fetal thyroid function. The specific activities of the [125I]T4 in maternal and fetal T4 pools were then similar, confirming that in cases of fetal thyroid impairment the T4 in fetal tissues is determined by the maternal contribution. Thus, previous statements that in normal conditions fetal thyroid economy near term is totally independent of maternal thyroid status ought to be reconsidered.

  15. Near-term and next-generation nuclear power plant concepts

    International Nuclear Information System (INIS)

    Shiga, Shigenori; Handa, Norihiko; Heki, Hideaki

    2002-01-01

    Near-term and next-generation nuclear reactors will be required to have high economic competitiveness in the deregulated electricity market, flexibility with respect to electricity demand and investment, and good public acceptability. For near-term reactors in the 2010s, Toshiba is developing an improved advanced boiling water reactor (ABWR) based on the present ABWR with newly rationalized systems and components; a construction period of 36 months, one year shorter than the current period; and a power lineup ranging from 800 MWe to 1,600 MWe. For future reactors in the 2020s and beyond, Toshiba is developing the ABWR-II for large-scale, centralized power sources; a supercritical water-cooled power reactor with high thermal efficiency for medium-scale power sources; a modular reactor with siting flexibility for small-scale power sources; and a small, fast neutron reactor with inherent safety for independent power sources. From the viewpoint of efficient uranium resource utilization, a low-moderation BWR core with a high conversion factor is also being developed. (author)

  16. Economic analysis of direct hydrogen PEM fuel cells in three near-term markets

    International Nuclear Information System (INIS)

    Mahadevan, K.; Stone, H.; Judd, K.; Paul, D.

    2007-01-01

    Direct hydrogen polymer electrolyte membrane fuel cells (H-PEMFCs) offer several near-term opportunities, including backup power applications in state and local emergency-response agencies; forklifts in high-throughput distribution centers; and airport ground support equipment. This paper presented an analysis of the market requirements for introducing H-PEMFCs successfully, as well as an analysis of the lifecycle costs of H-PEMFCs and competing alternatives in three near-term markets. It also used three scenarios as examples of the potential for market penetration of H-PEMFCs. For each of the three potential opportunities, the paper presented the market requirements, a lifecycle cost analysis, and the net present value of the lifecycle costs. A sensitivity analysis of the net present value of the lifecycle costs and of the average annual cost of owning and operating each of the H-PEMFC opportunities was also conducted. It was concluded that H-PEMFC-powered pallet trucks in high-productivity environments represented a promising early opportunity. However, the value of H-PEMFC-powered forklifts compared to existing alternatives was reduced for applications with lower hours of operation and declining labor rates. In addition, H-PEMFC-powered baggage tractors in airports were more expensive than battery-powered baggage tractors on a lifecycle cost basis. 9 tabs., 4 figs

  17. Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications

    Science.gov (United States)

    Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.

    1999-01-01

    The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.

  18. Classical boson sampling algorithms with superior performance to near-term experiments

    Science.gov (United States)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
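
    The computational core of any classical boson sampling method is evaluating matrix permanents, which the paper's Metropolised independence sampler must do repeatedly. The sketch below implements the standard Ryser formula for the permanent (an O(2^n n) enumeration, fine for small n); it is the costly kernel only, not the authors' sampler.

```python
# Sketch of the matrix-permanent evaluation at the core of classical boson
# sampling, using Ryser's formula with plain subset enumeration. This is the
# expensive kernel a classical sampler must call, not the sampler itself.
import numpy as np
from itertools import product

def permanent_ryser(A):
    n = A.shape[0]
    total = 0.0
    # Sum over non-empty column subsets S: (-1)^(n-|S|) * prod_i sum_{j in S} a_ij
    for mask in product([0, 1], repeat=n):
        k = sum(mask)
        if k == 0:
            continue
        row_sums = A @ np.array(mask, dtype=A.dtype)
        total += (-1) ** (n - k) * np.prod(row_sums)
    return total

A = np.eye(3)
print(permanent_ryser(A))   # permanent of the identity is 1.0
```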

  19. MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES

    Directory of Open Access Journals (Sweden)

    S. M. Aleksankov

    2015-11-01

    Full Text Available Subject of Research. The processes of live migration without shared storage using the pre-copy approach, and of move migration, are researched. Migration of virtual machines is an important capability of virtualization technology: it enables applications to move transparently with their runtime environments between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines to physical hosts in data centres. Before the advent of live migration, only network migration (the so-called «Move») was available, which entails stopping the virtual machine execution while copying it to another physical server and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with the pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and service unavailability during migration. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services when migrating with technologies such as live migration with the pre-copy approach without shared storage and move migration. The latest works on assessing service unavailability time and migration time using live migration without shared storage describe experimental results that support general conclusions about how these times change, but cannot predict their values. Practical Significance. The proposed models can be used for predicting migration time and service unavailability time, for example, when carrying out preventive and emergency works on the physical nodes in data centres.
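
    A commonly used analytical form of the iterative pre-copy model can make the idea concrete. The sketch below is an assumed textbook-style formulation, not necessarily the authors' exact equations: each round retransmits the memory dirtied during the previous round until the residue is small enough for a brief stop-and-copy, whose duration is the service downtime.

```python
# A common analytical form of iterative pre-copy live migration (assumed,
# not necessarily the paper's exact model): round i transfers the data
# dirtied during round i-1 until the remainder is small enough to stop
# the VM and copy it; that final copy is the downtime window.
def precopy_migration(mem_gb, bw_gbps, dirty_gbps, stop_threshold_gb=0.1,
                      max_rounds=30):
    remaining = mem_gb                       # first round copies all memory
    total_time = 0.0
    for _ in range(max_rounds):
        t = remaining / bw_gbps              # time to transfer this round
        total_time += t
        remaining = dirty_gbps * t           # pages dirtied meanwhile
        if remaining <= stop_threshold_gb:
            break
    downtime = remaining / bw_gbps           # stop-and-copy of the residue
    return total_time + downtime, downtime

# 8 GB VM, 1.25 GB/s link (10 Gbit/s), 0.25 GB/s dirty rate -- illustrative.
total, down = precopy_migration(mem_gb=8.0, bw_gbps=1.25, dirty_gbps=0.25)
print(f"migration time ~{total:.2f}s, downtime ~{down*1000:.0f}ms")
```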

  20. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed by machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regressions are employed to develop the surrogate models. • Numerical examples on the California gas distribution network demonstrate the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
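
    The surrogate idea is easy to prototype. The sketch below trains a scikit-learn SVM on exact connectivity checks for a toy grid network (via networkx) and then substitutes the classifier for the exact check inside the Monte Carlo loop; the graph, failure probability, and sample sizes are illustrative assumptions.

```python
# Sketch of the surrogate-model idea on a toy graph: train an SVM to predict
# source-terminal connectivity from component-failure vectors, then use it in
# Monte Carlo sampling in place of the exact graph check.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

rng = np.random.default_rng(1)
G = nx.grid_2d_graph(4, 4)                    # toy network
edges = list(G.edges)
src, dst = (0, 0), (3, 3)

def connected(failed_mask):
    """Exact check: is src still connected to dst after removing failed edges?"""
    H = G.edge_subgraph(e for e, f in zip(edges, failed_mask) if not f).copy()
    return int(src in H and dst in H and nx.has_path(H, src, dst))

def sample(n, p_fail=0.3):
    X = rng.random((n, len(edges))) < p_fail  # edge failure realizations
    y = np.array([connected(x) for x in X])
    return X.astype(float), y

X_train, y_train = sample(2000)               # exact checks for training only
clf = SVC(kernel="rbf").fit(X_train, y_train)

X_mc, y_mc = sample(20000)                    # y_mc kept only to score accuracy
y_hat = clf.predict(X_mc)
print("P(disconnect) ~", 1 - y_hat.mean())
print("surrogate accuracy:", (y_hat == y_mc).mean())
```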

  1. Improving wave forecasting by integrating ensemble modelling and machine learning

    Science.gov (United States)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
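
    One standard learning-aggregation rule is the exponentially weighted average forecaster, sketched below with stand-in arrays in place of real SWAN ensemble output; the paper's exact weighting scheme may differ.

```python
# Sketch of a learning-aggregation step (assumed form): weight each ensemble
# member by its exponentiated past error, then forecast with the weighted
# mean. The member forecasts here are stand-ins, not real model output.
import numpy as np

past_fcst = np.array([[1.2, 0.9, 1.1],   # rows: times, cols: ensemble members
                      [1.0, 1.3, 1.1],
                      [0.8, 1.0, 0.9]])  # wave heights (m), illustrative
past_obs = np.array([1.1, 1.1, 0.9])

# Exponentially weighted average forecaster: members with lower cumulative
# squared error receive larger weights.
eta = 2.0
losses = ((past_fcst - past_obs[:, None]) ** 2).sum(axis=0)
weights = np.exp(-eta * losses)
weights /= weights.sum()

new_member_fcsts = np.array([1.05, 1.15, 1.00])
print("aggregated forecast:", weights @ new_member_fcsts)
```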

  2. Design ensemble machine learning model for breast cancer diagnosis.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chen, Chi-Huang; Hsu, Kai-Ping; Lee, I-Shun; Wang, Zhenyu; Lai, Feipei

    2012-10-01

    In this paper, we classify breast cancer using medical diagnostic data. Information gain has been adopted for feature selection. Neural fuzzy (NF), k-nearest neighbor (KNN), and quadratic classifier (QC) schemes, each as a single model as well as in their associated ensemble forms, have been developed for classification. In addition, a combined ensemble model of all three schemes has been constructed for further validation. The experimental results indicate that ensemble learning performs better than the individual single models. Moreover, the combined ensemble model achieves the highest classification accuracy for breast cancer among all models.
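
    A minimal version of the combined-ensemble idea can be reproduced with scikit-learn, as sketched below on the library's built-in Wisconsin breast cancer data (not the paper's data set); KNN and a quadratic discriminant mirror two of the paper's schemes, and an MLP stands in for the neural fuzzy model.

```python
# Sketch of a combined voting ensemble for breast cancer classification.
# Data set and the MLP stand-in are assumptions, not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
ensemble = make_pipeline(
    StandardScaler(),                      # scale features for KNN and MLP
    VotingClassifier([
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("qda", QuadraticDiscriminantAnalysis()),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ], voting="hard"),                     # majority vote of the three models
)
print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```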

  3. LINEAR KERNEL SUPPORT VECTOR MACHINES FOR MODELING PORE-WATER PRESSURE RESPONSES

    Directory of Open Access Journals (Sweden)

    KHAMARUZAMAN W. YUSOF

    2017-08-01

    Full Text Available Pore-water pressure responses are vital in many aspects of slope management, design and monitoring. Their measurement, however, is difficult, expensive and time consuming, and studies on their prediction are lacking. Support vector machines with a linear kernel were used here to predict the response of pore-water pressure to rainfall. Pore-water pressure response data were collected from a slope instrumentation program. Support vector machine meta-parameter calibration and model development were carried out using grid search and k-fold cross validation. The mean square error of the model on scaled test data is 0.0015 and the coefficient of determination is 0.9321. Although pore-water pressure response to rainfall is a complex nonlinear process, linear kernel support vector machines can be employed where a little accuracy can be traded for computational ease and speed.
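
    The calibration procedure described above maps directly onto standard tooling. Below is a sketch with synthetic data standing in for the rainfall and pore-water pressure records: a linear-kernel support vector regressor tuned by grid search with k-fold cross validation.

```python
# Sketch of the calibration procedure: linear-kernel SVR with grid search
# over meta-parameters and k-fold cross validation. The data are synthetic
# stand-ins for the slope instrumentation records.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((200, 3))                   # e.g., scaled rainfall features
y = X @ np.array([0.5, -0.2, 0.8]) + 0.05 * rng.standard_normal(200)

grid = GridSearchCV(
    SVR(kernel="linear"),
    param_grid={"C": [0.1, 1, 10], "epsilon": [0.01, 0.1]},
    cv=5, scoring="neg_mean_squared_error",
)
grid.fit(X[:150], y[:150])                 # calibrate on the training split
pred = grid.predict(X[150:])
print("test MSE:", mean_squared_error(y[150:], pred))
print("R^2:", r2_score(y[150:], pred))
```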

  4. Static Object Detection Based on a Dual Background Model and a Finite-State Machine

    Directory of Open Access Journals (Sweden)

    Heras Evangelio Rubén

    2011-01-01

    Full Text Available Detecting static objects in video sequences has a high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the meaning for the interpretation of the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated with several public datasets.
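
    A pixel-level look-up-table state machine of this kind can be sketched in a few lines. The states and transition table below are illustrative assumptions, not the paper's exact definitions; the inputs are the foreground flags from the fast-adapting and slow-adapting background models.

```python
# Sketch of a per-pixel state machine driven by two background models.
# States and transitions are assumed for illustration: a pixel absorbed by
# the fast model but still foreground in the slow model suggests a static
# (e.g., abandoned) object.
BACKGROUND, MOVING, CANDIDATE, STATIC = range(4)

# transitions[state][(fast-model fg, slow-model fg)] -> next state
transitions = {
    BACKGROUND: {(0, 0): BACKGROUND, (1, 1): MOVING,
                 (0, 1): CANDIDATE,  (1, 0): BACKGROUND},
    MOVING:     {(0, 0): BACKGROUND, (1, 1): MOVING,
                 (0, 1): CANDIDATE,  (1, 0): MOVING},
    CANDIDATE:  {(0, 0): BACKGROUND, (1, 1): MOVING,
                 (0, 1): STATIC,     (1, 0): BACKGROUND},
    STATIC:     {(0, 0): BACKGROUND, (1, 1): MOVING,
                 (0, 1): STATIC,     (1, 0): STATIC},
}

def step(state, fg_fast, fg_slow):
    return transitions[state][(fg_fast, fg_slow)]

# An object appears (1,1), then the fast model absorbs it (0,1) twice:
# BACKGROUND -> MOVING -> CANDIDATE -> STATIC, i.e., a static object.
s = BACKGROUND
for obs in [(1, 1), (0, 1), (0, 1)]:
    s = step(s, *obs)
print("final state:", s)   # 3 == STATIC
```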

  5. A Novel Machine Learning Strategy Based on Two-Dimensional Numerical Models in Financial Engineering

    Directory of Open Access Journals (Sweden)

    Qingzhen Xu

    2013-01-01

    Full Text Available Machine learning is the most commonly used technique for addressing larger and more complex tasks by analyzing the most relevant information already present in databases. In order to better predict the future trend of the index, this paper proposes a two-dimensional numerical model for machine learning to simulate a major U.S. stock market index and uses a nonlinear implicit finite-difference method to find numerical solutions of the two-dimensional simulation model. The proposed machine learning method uses partial differential equations to predict the stock market and can be used extensively to accelerate large-scale data processing on historical databases. The experimental results show that the proposed algorithm reduces the prediction error and improves forecasting precision.
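
    The key numerical ingredient, an implicit finite-difference step, amounts to solving a sparse linear system per time step. The sketch below applies one backward-Euler step to a stand-in two-dimensional diffusion-type equation; the paper's actual index model and coefficients are not reproduced here.

```python
# Sketch of one implicit finite-difference step for a 2D diffusion-type PDE
# (a stand-in equation, not the paper's model). Implicit stepping means
# solving a sparse linear system at each time step.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n, dt, k = 30, 1e-3, 1.0                  # grid size, time step, diffusivity
h = 1.0 / (n - 1)

# 2D Laplacian with Dirichlet boundaries via Kronecker products.
main = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
lap = (sp.kron(sp.eye(n), main) + sp.kron(main, sp.eye(n))) / h**2

# Gaussian bump as the initial field, flattened to a vector.
x = np.linspace(0, 1, n)
u = np.exp(-((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2) / 0.02).ravel()

A = sp.eye(n * n) - dt * k * lap          # backward-Euler operator
u_next = spsolve(A.tocsr(), u)            # one implicit time step
print("max value after one step:", u_next.max())
```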

  6. Effect of power quality on windings temperature of marine induction motors. Part I: Machine model

    Energy Technology Data Exchange (ETDEWEB)

    Gnacinski, P. [Gdynia Maritime Univ., Dept. of Ship Electrical Power Engineering, Morska Str. 83, 81-225 Gdynia (Poland)

    2009-10-15

    Marine induction machines are exposed to various power quality disturbances appearing simultaneously in ship power systems: frequency and voltage rms value deviation, voltage unbalance and voltage waveform distortions. As a result, marine induction motors can be seriously overheated due to lowered supply voltage quality. Improvement of the protection of marine induction machines requires an appropriate method of power quality assessment and modification of the power quality regulations of ship classification societies. This paper presents an analytical model of an induction cage machine supplied with voltage of lowered quality, used in part II of the work (effect of power quality on windings temperature of marine induction motors. Part II. Results of investigations and recommendations for related regulations) for power quality assessment in ship power systems, and for justification of the new power quality regulations proposal. The presented model is suitable for implementation in an on-line measurement system. (author)

  7. Photon beam modelling with Pinnacle3 Treatment Planning System for a Rokus M Co-60 Machine

    International Nuclear Information System (INIS)

    Dulcescu, Mihaela; Murgulet, Cristian

    2008-01-01

    The basic relationships of the convolution/superposition dose calculation technique are reviewed, and a modelling technique that can be used for obtaining a satisfactory beam model for a commercially available convolution/superposition-based treatment planning system is described. A fluence energy spectrum for a Co-60 treatment machine obtained from a Monte Carlo simulation was used for modelling the fluence spectrum for a Rokus M machine. In order to achieve this model we measured the depth dose distribution and the dose profiles with a Wellhofer dosimetry system. The primary fluence was iteratively modelled by comparing the computed depth dose curves and beam profiles with the depth dose curves and crossbeam profiles measured in a water phantom. The objective of beam modelling is to build a model of the primary fluence that the patient is exposed to, which can then be used for the calculation of the dose deposited in the patient. (authors)

  8. Evaluation of discrete modeling efficiency of asynchronous electric machines

    OpenAIRE

    Byczkowska-Lipińska, Liliana; Stakhiv, Petro; Hoholyuk, Oksana; Vasylchyshyn, Ivanna

    2011-01-01

    In the paper, the problem of constructing effective mathematical macromodels in the form of state variables for asynchronous motor transient analysis is considered. These macromodels are compared with traditional mathematical models of asynchronous motors, including the models built into MATLAB/Simulink software, and their efficiency is analyzed.

  9. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in communities ranging from social to business applications, and machine learning is an important method for this problem. When we use machine learning for prediction, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we acquire new data as time goes by and use them to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and much garbage data accumulates in the training set. The garbage data should be removed, as it reduces the accuracy of the prediction. The main contribution of this article is using the new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model that describes how data flow among the test set, training set, validation set and the garbage set, improving the accuracy of prediction. As the data set changes, the best machine learning model changes too. We design a hybrid voting algorithm to fit the data set better, as shown in the sketch after this abstract: it uses seven machine learning models predicting the same problem and uses the validation set to put different weights on the learning models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and obtain a better result than the traditional method of adding all data to the training set, and our hybrid voting algorithm achieves a better prediction result than the average accuracy of the other prediction models.
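
    The validation-weighted voting component can be illustrated compactly. The sketch below weights each model's vote by its held-out validation accuracy; the three models and synthetic data are stand-ins (the paper combines seven learners and additionally prunes stale "garbage" samples, which is omitted here).

```python
# Sketch of validation-weighted hybrid voting: each model's vote is weighted
# by its accuracy on a held-out validation set. Models and data are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                            random_state=0)

models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(),
          GaussianNB()]
weights = []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(accuracy_score(y_val, m.predict(X_val)))  # better model, bigger weight
weights = np.array(weights) / sum(weights)

# Weighted vote: sum the weights of models predicting class 1, threshold 0.5.
votes = sum(w * m.predict(X_te) for w, m in zip(weights, models))
y_hat = (votes >= 0.5).astype(int)
print("weighted-vote test accuracy:", accuracy_score(y_te, y_hat))
```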

  10. A Novel Application of Machine Learning Methods to Model Microcontroller Upset Due to Intentional Electromagnetic Interference

    Science.gov (United States)

    Bilalic, Rusmir

    A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection times with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.

  11. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    Science.gov (United States)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.

  12. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    one of the most critical quality measure in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, Mean absolute percentage deviation ... The factors considered were work piece properties.

  13. Credit Risk Analysis using Machine and Deep Learning models

    OpenAIRE

    Addo , Peter ,; Guegan , Dominique; Hassani , Bertrand

    2018-01-01

    Working papers URL: https://centredeconomiesorbonne.univ-paris1.fr/documents-de-travail-du-ces/; Documents de travail du Centre d'Economie de la Sorbonne 2018.03 - ISSN: 1955-611X; Due to the advanced technology associated with Big Data (data availability and computing power), most banks and lending financial institutions are renewing their business models. Credit risk prediction, monitoring, model reliability and effective loan processing are key to decision making and transparency. In...

  14. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    Science.gov (United States)

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incidences and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at a 100 nM activity cutoff, yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to or outperformed deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
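
    For reference, the quoted figures all follow from a binary confusion matrix. The sketch below computes them from hypothetical counts chosen to land near the reported 100 nM model values; the counts are illustrative, not the paper's data.

```python
# Computing the classification metrics quoted above from a confusion matrix.
# The counts are hypothetical, picked to land near the reported values.
import math

tp, fp, fn, tn = 20, 70, 2, 480           # illustrative counts

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)              # a.k.a. sensitivity
specificity = tn / (tn + fp)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

# Cohen's kappa: agreement beyond chance.
n = tp + tn + fp + fn
p_obs = accuracy
p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_obs - p_exp) / (1 - p_exp)

print(f"acc={accuracy:.2f} prec={precision:.2f} rec={recall:.2f} "
      f"spec={specificity:.2f} kappa={kappa:.2f} mcc={mcc:.2f}")
```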

  15. Geospatial Analysis of Near-Term Technical Potential of BECCS in the U.S.

    Science.gov (United States)

    Baik, E.; Sanchez, D.; Turner, P. A.; Mach, K. J.; Field, C. B.; Benson, S. M.

    2017-12-01

    Atmospheric carbon dioxide (CO2) removal using bioenergy with carbon capture and storage (BECCS) is crucial for achieving stringent climate change mitigation targets. To date, previous work discussing the feasibility of BECCS has largely focused on land availability and bioenergy potential, while CCS components - including capacity, injectivity, and location of potential storage sites - have not been thoroughly considered in the context of BECCS. A high-resolution geospatial analysis of both biomass production and potential geologic storage sites is conducted to consider the near-term deployment potential of BECCS in the U.S. The analysis quantifies the overlap between the biomass resource and CO2 storage locations within the context of storage capacity and injectivity. This analysis leverages county-level biomass production data from the U.S. Department of Energy's Billion Ton Report alongside potential CO2 geologic storage sites as provided by the USGS Assessment of Geologic Carbon Dioxide Storage Resources. Various types of lignocellulosic biomass (agricultural residues, dedicated energy crops, and woody biomass) result in a potential 370-400 Mt CO2/yr of negative emissions in 2020. Of that CO2, only 30-31% of the produced biomass (110-120 Mt CO2/yr) is co-located with a potential storage site. While large potential exists, there would need to be more than 250 50-MW biomass power plants fitted with CCS to capture all the co-located CO2 capacity in 2020. Neither absolute injectivity nor absolute storage capacity is likely to limit BECCS, but the results show regional capacity and injectivity constraints in the U.S. that had not been identified in previous BECCS analysis studies. The state of Illinois, the Gulf region, and western North Dakota emerge as the best locations for near-term deployment of BECCS with abundant biomass, sufficient storage capacity and injectivity, and the co-location of the two resources. Future studies assessing BECCS potential should

  16. OPERATING OF MOBILE MACHINE UNITS SYSTEM USING THE MODEL OF MULTICOMPONENT COMPLEX MOVEMENT

    Directory of Open Access Journals (Sweden)

    A. Lebedev

    2015-07-01

    Full Text Available To solve problems in the operation of mobile machine unit systems, the use of complex multi-component (composite) movement physical models is proposed. Implementation of the proposed method is possible by creating automatic systems operating the fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.

  17. Operating of mobile machine units system using the model of multicomponent complex movement

    OpenAIRE

    A. Lebedev; R. Kaidalov; N. Artiomov; M. Shulyak; M. Podrigalo; D. Abramov; D. Klets

    2015-01-01

    To solve problems in the operation of mobile machine unit systems, the use of complex multi-component (composite) movement physical models is proposed. Implementation of the proposed method is possible by creating automatic systems operating the fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.

  18. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  19. A comparative study of machine learning classifiers for modeling travel mode choice

    NARCIS (Netherlands)

    Hagenauer, J; Helbich, M

    2017-01-01

    The analysis of travel mode choice is an important task in transportation planning and policy making in order to understand and predict travel demands. While advances in machine learning have led to numerous powerful classifiers, their usefulness for modeling travel mode choice remains largely

  20. Modelling and optimization of a permanent-magnet machine in a flywheel

    NARCIS (Netherlands)

    Holm, S.R.

    2003-01-01

    This thesis describes the derivation of an analytical model for the design and optimization of a permanent-magnet machine for use in an energy storage flywheel. A prototype of this flywheel is to be used as the peak-power unit in a hybrid electric city bus. The thesis starts by showing the

  1. Static stiffness modeling of a novel hybrid redundant robot machine

    International Nuclear Information System (INIS)

    Li Ming; Wu Huapeng; Handroos, Heikki

    2011-01-01

    This paper presents a modeling method to study the stiffness of a hybrid serial-parallel robot IWR (Intersector Welding Robot) for the assembly of ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking account of the deformations of both hydraulic limbs and joints; the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical and the deformation results of the robot workspace under certain external load are presented.

  2. A comparison study of support vector machines and hidden Markov models in machinery condition monitoring

    International Nuclear Information System (INIS)

    Miao, Qiang; Huang, Hong Zhong; Fan, Xianfeng

    2007-01-01

    Condition classification is an important step in machinery fault detection, which is a problem of pattern recognition. Many techniques currently exist in this area, and the purpose of this paper is to investigate two popular recognition techniques, namely the hidden Markov model and the support vector machine. We first briefly introduce the procedure of feature extraction and the theoretical background of this work. A comparison experiment was conducted for gearbox fault detection, and the analysis results show that the support vector machine has better classification performance in this area.

  3. Big data - modelling of midges in Europa using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Cuellar, Ana Carolina; Kjær, Lene Jung; Skovgaard, Henrik

    2017-01-01

    coordinates of each trap, start and end dates of trapping. We used 120 environmental predictor variables together with Random Forest machine learning algorithms to predict the overall species distribution (probability of occurrence) and monthly abundance in Europe. We generated maps for every month...... and the Obsoletus group, although abundance was generally higher for a longer period of time for C. imicola than for the Obsoletus group. Using machine learning techniques, we were able to model the spatial distribution in Europe for C. imicola and the Obsoletus group in terms of abundance and suitability...

  4. Comparison of Models Needed for Conceptual Design of Man-Machine Systems in Different Application Domains

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1986-01-01

    For systematic and computer-aided design of man-machine systems, a consistent framework is needed, i.e., a set of models which allows the selection of system characteristics which serve the individual user not only to satisfy his goal, but also to select mental processes that match his resources and subjective preferences. For design of man-machine systems in process control, a framework has been developed in terms of separate representation of the problem domain, the decision task, and the information processing strategies required. The author analyzes the application of this framework to a number...

  5. Modeling and simulation of the fluid flow in wire electrochemical machining with rotating tool (wire ECM)

    Science.gov (United States)

    Klocke, F.; Herrig, T.; Zeis, M.; Klink, A.

    2017-10-01

    Combining the working principle of electrochemical machining (ECM) with a universal rotating tool, such as a wire, could address many challenges of the classical ECM sinking process. Such a wire-ECM process would be able to machine flexible and efficient 2.5-dimensional geometries such as fir tree slots in turbine discs. Nowadays, the established manufacturing technologies for slotting turbine discs are broaching and wire electrical discharge machining (wire EDM). Nevertheless, the high requirements on the surface integrity of turbine parts demand cost-intensive process development and, in the case of wire EDM, trim cuts to reduce the heat-affected rim zone. Due to its process-specific advantages, ECM is an attractive alternative manufacturing technology and has become increasingly relevant for sinking applications in recent years. But ECM also faces high costs for process development and complex electrolyte flow devices. In the past, a few studies dealt with the development of a wire-ECM process to meet these challenges; however, previous concepts of wire ECM were only suitable for micro-machining applications, because insufficient flushing concepts caused the application of the process to macro geometries to fail. Therefore, this paper presents the modeling and simulation of a new flushing approach for process assessment. The suitability of a rotating structured wire electrode in combination with axial flushing for electrodes with high aspect ratios is investigated and discussed.

  6. Mathematical Model of Lifetime Duration at Insulation of Electrical Machines

    Directory of Open Access Journals (Sweden)

    Mihaela Răduca

    2009-10-01

    Full Text Available This paper presents a mathematical model of the lifetime of hydro generator stator winding insulation when damage regimes can appear. The estimation is made by taking into account programmed (scheduled) and non-programmed (unscheduled) revisions, through the introduction and correlation of newly defined notions.

  7. Modelling rollover behaviour of exacavator-based forest machines

    Science.gov (United States)

    M.W. Veal; S.E. Taylor; Robert B. Rummer

    2003-01-01

    This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...

  8. Syntactic discriminative language model rerankers for statistical machine translation

    NARCIS (Netherlands)

    Carter, S.; Monz, C.

    2011-01-01

    This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical

  9. Modelling of Moving Coil Actuators in Fast Switching Valves Suitable for Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

    The efficiency of digital hydraulic machines is strongly dependent on the valve switching time. Recently, fast switching has been achieved by using a direct electromagnetic moving coil actuator as the force-producing element in fast switching hydraulic valves suitable for digital hydraulic machines. Mathematical models of the valve switching, targeted for design optimisation of the moving coil actuator, are developed. A detailed analytical model is derived and presented, and its accuracy is evaluated against transient electromagnetic finite element simulations. The model includes an estimation of the eddy currents generated in the actuator yoke upon current rise, as they may have significant influence on the coil current response. The analytical model facilitates fast simulation of the transient actuator response as opposed to the transient electromagnetic finite element model, which...

  10. Identification and non-integer order modelling of synchronous machines operating as generator

    Directory of Open Access Journals (Sweden)

    Szymon Racewicz

    2012-09-01

    Full Text Available This paper presents an original mathematical model of a synchronous generator using derivatives of fractional order. In contrast to classical models composed of a large number of R-L ladders, it comprises half-order impedances, which enable the accurate description of the electromagnetic induction phenomena in a wide frequency range, while minimizing the order and number of model parameters. The proposed model takes into account the skin effect in damper cage bars, the effects of eddy currents in rotor solid parts, and the saturation of the machine magnetic circuit. The half-order transfer functions used for modelling these phenomena were verified by simulation of ferromagnetic sheet impedance using the finite elements method. The analysed machine's parameters were identified on the basis of SSFR (StandStill Frequency Response) characteristics measured on a gradually magnetised synchronous machine.
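
    The appeal of half-order impedances is easy to see numerically: a term proportional to the square root of s produces the 45-degree phase plateau characteristic of eddy-current and skin-effect behavior, which an R-L ladder can only approximate with many sections. The sketch below evaluates an illustrative Z(s) = R + K*sqrt(s) with arbitrary parameter values.

```python
# Illustrative half-order impedance Z(s) = R + K*sqrt(s): the phase tends to
# a 45-degree plateau at high frequency. Parameter values are arbitrary.
import numpy as np

R, K = 0.05, 0.02                 # ohm and ohm/sqrt(rad/s); illustrative
f = np.logspace(-1, 4, 6)         # Hz
s = 1j * 2 * np.pi * f
Z = R + K * np.sqrt(s)            # principal complex square root

for fi, zi in zip(f, Z):
    print(f"f={fi:9.1f} Hz  |Z|={abs(zi):.4f} ohm  "
          f"phase={np.degrees(np.angle(zi)):5.1f} deg")
```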

  11. Programming and machining of complex parts based on CATIA solid modeling

    Science.gov (United States)

    Zhu, Xiurong

    2017-09-01

    Complex parts are designed using CATIA solid modeling, programming, and simulated machining, illustrating the importance of programming and process technology in the field of CNC machining. In the part design process, a deep analysis of the working principle is made first; then the dimensions are designed so that each dimension chain connects to the others. Back-stepping and various other methods are then used to calculate the final dimensions of the parts. In selecting part materials, careful study and repeated testing led to the final choice of 6061 aluminum alloy. According to the actual situation of the machining site, various factors in the machining process must be considered comprehensively. The simulation should be based on the actual machining process, not merely on shape. The result can be used as a reference for machining.

  12. Implementation of the Lanczos algorithm for the Hubbard model on the Connection Machine system

    International Nuclear Information System (INIS)

    Leung, P.W.; Oppenheimer, P.E.

    1992-01-01

    An implementation of the Lanczos algorithm for the exact diagonalization of the two dimensional Hubbard model on a 4x4 square lattice on the Connection Machine CM-2 system is described. The CM-2 is a massively parallel machine with distributed memory. The program is written in C/PARIS. This implementation minimizes memory usage by generating the matrix elements as needed instead of storing them. The Lanczos vectors are stored across the local memory of the processors. Using translational symmetry only, the dimension of the Hilbert space at half filling is more than 10 million. A time of about 2.4 min per iteration is achieved on a 64K CM-2. This implementation is scalable: running it on a bigger machine with more processors speeds up the process. The performance analysis of this implementation is shown, and its advantages and disadvantages are discussed.
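
    The iteration itself is compact. The sketch below is a dense NumPy stand-in for the distributed CM-2 implementation (which generated matrix elements on the fly and spread the Lanczos vectors across processors); it builds the small tridiagonal matrix whose extremal eigenvalues approximate those of the Hamiltonian.

```python
# Sketch of the Lanczos iteration for the lowest eigenvalue of a Hermitian H.
# A dense toy "Hamiltonian" stands in for the Hubbard matrix generated on
# the fly in the CM-2 implementation.
import numpy as np

def lanczos_ground_state(H, m=50, seed=0):
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = H @ v - beta * v_prev        # three-term recurrence
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha); betas.append(beta)
        if beta < 1e-12:                 # invariant subspace found
            break
        v_prev, v = v, w / beta
    # Eigenvalues of the small tridiagonal matrix approximate extremal ones of H.
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)[0]

H = np.diag(np.arange(200, dtype=float))  # toy matrix with known spectrum
print(lanczos_ground_state(H))            # approaches the true minimum, 0.0
```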

  13. Modeling of thermal spalling during electrical discharge machining of titanium diboride

    International Nuclear Information System (INIS)

    Gadalla, A.M.; Bozkurt, B.; Faulk, N.M.

    1991-01-01

    Erosion in electrical discharge machining has been described as occurring by melting and flushing the liquid formed. Recently, however, thermal spalling was reported as the mechanism for machining refractory materials with low thermal conductivity and high thermal expansion. The process is described in this paper by a model based on a ceramic surface exposed to a constant circular heating source which supplied a constant flux over the pulse duration. The calculations were based on TiB2 mechanical properties along the a and c directions. Theoretical predictions were verified by machining hexagonal TiB2. Large flakes of TiB2 with sizes close to the grain size and maximum thickness close to the predicted values were collected, together with spherical particles of Cu and Zn eroded from the cutting wire. The cut surfaces consist of cleavage planes sometimes contaminated with Cu, Zn, and impurities from the dielectric fluid.

  14. Analysis of near-term production and market opportunities for hydrogen and related activities

    Energy Technology Data Exchange (ETDEWEB)

    Mauro, R.; Leach, S. [National Hydrogen Association, Washington, DC (United States)

    1995-09-01

    This paper summarizes current and planned activities in the areas of hydrogen production and use, near-term venture opportunities, and codes and standards. The rationale for these efforts is to assess industry interest and engage in activities that move hydrogen technologies down the path to commercialization. Some of the work presented in this document is a condensed, preliminary version of reports being prepared under the DOE/NREL contract. In addition, the NHA work funded by Westinghouse Savannah River Corporation (WSRC) to explore the opportunities and industry interest in a Hydrogen Research Center is briefly described. Finally, the planned support of and industry input to the Hydrogen Technical Advisory Panel (HTAP) on hydrogen demonstration projects is discussed.

  15. Chemicals from Biomass: A Market Assessment of Bioproducts with Near-Term Potential

    Energy Technology Data Exchange (ETDEWEB)

    Biddy, Mary J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Scarlata, Christopher [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kinchin, Christopher [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-03-23

    Production of chemicals from biomass offers a promising opportunity to reduce U.S. dependence on imported oil, as well as to improve the overall economics and sustainability of an integrated biorefinery. Given the increasing momentum toward the deployment and scale-up of bioproducts, this report strives to: (1) summarize near-term potential opportunities for growth in biomass-derived products; (2) identify the production leaders who are actively scaling up these chemical production routes; (3) review the consumers and market champions who are supporting these efforts; (4) understand the key drivers and challenges to move biomass-derived chemicals to market; and (5) evaluate the impact that scale-up of chemical strategies will have on accelerating the production of biofuels.

  16. Closed Nuclear Fuel Cycle Technologies to Meet Near-Term and Transition Period Requirements

    International Nuclear Information System (INIS)

    Collins, E.D.; Felker, L.K.; Benker, D.E.; Campbell, D.O.

    2008-01-01

    A scenario that very likely fits conditions in the U.S. nuclear power industry and can meet the goals of cost minimization, waste minimization, and provisions of engineered safeguards for proliferation resistance, including no separated plutonium, to close the fuel cycle with full actinide recycle is evaluated. Processing aged fuels, removed from the reactor for 30 years or more, can provide significant advantages in cost reduction and waste minimization. The UREX+3 separations process is being developed to separate used fuel components for reuse, thus minimizing waste generation and storage in geologic repositories. Near-term use of existing and new thermal spectrum reactors can be used initially for recycle actinide transmutation. A transition period will eventually occur, when economic conditions will allow commercial deployment of fast reactors; during this time, recycled plutonium can be diverted into fast reactor fuel and conversion of depleted uranium into additional fuel material can be considered. (authors)

  17. Heliostat Manufacturing for near-term markets. Phase II final report

    International Nuclear Information System (INIS)

    1998-01-01

    This report describes a project by Science Applications International Corporation and its subcontractors Boeing/Rocketdyne and Bechtel Corp. to develop manufacturing technology for production of SAIC stretched membrane heliostats. The project consists of three phases, of which two are complete. This first phase had as its goals to identify and complete a detailed evaluation of manufacturing technology, process changes, and design enhancements to be pursued for near-term heliostat markets. In the second phase, the design of the SAIC stretched membrane heliostat was refined, manufacturing tooling for mirror facet and structural component fabrication was implemented, and four proof-of-concept/test heliostats were produced and installed in three locations. The proposed plan for Phase III calls for improvements in production tooling to enhance product quality and prepare increased production capacity. This project is part of the U.S. Department of Energy's Solar Manufacturing Technology Program (SolMaT)

  18. Closed Nuclear Fuel Cycle Technologies to Meet Near-Term and Transition Period Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Collins, E.D.; Felker, L.K.; Benker, D.E.; Campbell, D.O. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee, 37831-6152 (United States)

    2008-07-01

    A scenario that very likely fits conditions in the U.S. nuclear power industry and can meet the goals of cost minimization, waste minimization, and provisions of engineered safeguards for proliferation resistance, including no separated plutonium, to close the fuel cycle with full actinide recycle is evaluated. Processing aged fuels, removed from the reactor for 30 years or more, can provide significant advantages in cost reduction and waste minimization. The UREX+3 separations process is being developed to separate used fuel components for reuse, thus minimizing waste generation and storage in geologic repositories. Near-term use of existing and new thermal spectrum reactors can be used initially for recycle actinide transmutation. A transition period will eventually occur, when economic conditions will allow commercial deployment of fast reactors; during this time, recycled plutonium can be diverted into fast reactor fuel and conversion of depleted uranium into additional fuel material can be considered. (authors)

  19. Near term and long term materials issues and development needs for plasma interactive components

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1986-01-01

    Plasma interactive components (PICs), including the first wall, limiter blades, divertor collector plates, halo scrapers, and RF launchers, are exposed to high particle fluxes that can result in high sputtering erosion rates and high heat fluxes. In addition, the materials in reactors are exposed to high neutron fluxes which will degrade the bulk properties. This severe environment will limit the materials and designs which can be used in fusion devices. In order to provide a reasonable degree of confidence that plasma interactive components will operate successfully, a comprehensive development program is needed. Materials research and development plays a key role in the successful development of PICs. The range of operating conditions along with a summary of the major issues for materials development is described. The areas covered include plasma/materials interactions, erosion/redeposition, baseline materials properties, fabrication, and irradiation damage effects. Candidate materials and materials development needs in the near term and long term are identified

  20. Space reactor/organic Rankine conversion - A near-term state-of-the-art solution

    Science.gov (United States)

    Niggemann, R. E.; Lacey, D.

    The use of demonstrated reactor technology with organic Rankine cycle (ORC) power conversion can provide a low cost, minimal risk approach to reactor-powered electrical generation systems in the near term. Several reactor technologies, including zirconium hydride, EBR-II and LMFBR, have demonstrated long life and suitability for space application at the operating temperature required by an efficient ORC engine. While this approach would not replace the high temperature space reactor systems presently under development, it could be available in a nearer time frame at a low and predictable cost, allowing some missions requiring high power levels to be flown prior to the availability of advanced systems with lower specific mass. Although this system has relatively high efficiency, the heat rejection temperature is low, requiring a large radiator on the order of 3.4 sq m/kWe. Therefore, a deployable heat pipe radiator configuration will be required.

  1. Study of a nuclear energy supplied steelmaking system for near-term application

    International Nuclear Information System (INIS)

    Yan, Xing L.; Kasahara, Seiji; Tachibana, Yukio; Kunitomi, Kazuhiko

    2012-01-01

    Conventional steelmaking processes involve intensive fossil fuel consumption and CO2 emission. The system resulting from this study ties a steelmaking plant to a nuclear plant. The latter supplies the former all energy and feedstock with the exception of iron ore. The actual design takes on a multi-disciplinary approach: The nuclear plant employs a proven next-generation technology of fission reactor with 950 °C outlet temperature to produce electricity and heat. The plant construction saving and high efficiency keep the cogeneration cost down. The steelmaking plant employs conventional furnaces but substitutes hydrogen and oxygen for hydrocarbons as reactant and fuel. Water decomposition through an experimentally-demonstrated thermochemical process manufactures the feedstock gases required. Through essential safety features, in particular a fully-passive nuclear safety, the design achieves physical proximity and yet operational independence of the two plants to facilitate inter-plant energy transmission. Calculated energy and material balance of the integrated system yields slightly over 1000 t steel per 1 MWt yr nuclear thermal energy. The steel cost is estimated competitive. The CO2 emission amounts to 1% of conventional processes. The sustainable performance, economical potential, robust safety, and use of verified technological bases attract near-term deployment of this nuclear steelmaking system. -- Highlights: ► A steelmaking concept is proposed based on a multi-disciplinary approach. ► It ties an advanced nuclear fission reactor and energy conversion to thermochemical manufacture and direct iron making. ► The technological strength of each area is exploited to integrate a final process. ► A heat and material balance of the process is made to predict performance and cost. ► The system rules out fossil fuel use and CO2 emission, and is near-term deployable.

  2. The development of fully dynamic rotating machine models for nuclear training simulators

    International Nuclear Information System (INIS)

    Birsa, J.J.

    1990-01-01

    Prior to beginning the development of an enhanced set of electrical plant models for several nuclear training simulators, an extensive literature search was conducted to evaluate and select rotating machine models for use on these simulators. These models include the main generator, diesel generators, in-plant electric power distribution and off-site power. From the results of this search, various models were investigated and several were selected for further evaluation. Several computer studies were performed on the selected models in order to determine their suitability for use in a training simulator environment. One surprising result of this study was that a number of established, classical models could not be made to reproduce actual plant steady-state data over the range necessary for a training simulator. This evaluation process and its results are presented in this paper. Various historical, as well as contemporary, electrical models of rotating machines are discussed. Specific criteria for the selection of rotating machine models for training simulator use are presented.

  3. Harmonic wave model of a permanent magnet synchronous machine for modeling partial demagnetization under short circuit conditions

    NARCIS (Netherlands)

    Kral, C.; Haumer, A.; Bogomolov, M.D.; Lomonova, E.

    2012-01-01

    This paper proposes a multi domain physical model of permanent magnet synchronous machines, considering electrical, magnetic, thermal and mechanical effects. For each component of the model, the main wave as well as lower and higher harmonic wave components of the magnetic flux and the magnetic

  4. MATHEMATICAL MODEL FOR THE STUDY AND DESIGN OF A ROTARY-VANE GAS REFRIGERATION MACHINE

    Directory of Open Access Journals (Sweden)

    V. V. Trandafilov

    2016-08-01

    Full Text Available This paper presents a mathematical model for calculating the main parameters of the operating cycle of a rotary-vane gas refrigerating machine that affect installation, machine control and the working processes occurring in it under the specified criteria. A procedure and a graphical method for the rotary-vane gas refrigerating machine (RVGRM) are proposed. A parametric study of the influence of the main geometric and temperature variables on the thermal behavior of the system is presented. The model considers the polytropic index for compression and expansion in the chamber. Graphs of the pressure and temperature in the chamber as functions of the angle of rotation of the output shaft are obtained. The possibility of including a regenerative heat exchanger in the cycle is assessed. The change in the coefficient of performance of the machine after introducing the regenerative heat exchanger into the cycle is analyzed. It is shown that installing a regenerator in the RVGRM cycle increases the COP by more than 30%. The simulation results show that the proposed model can be used to design and optimize Stirling gas refrigerators.

  5. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    Directory of Open Access Journals (Sweden)

    Wm M. Wood

    2018-02-01

    Full Text Available A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo N-Particle eXtended (“MCNPX”) code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.

  6. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in the management of urban areas from a sound standpoint is the evaluation of the soundscape in such an area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. Accordingly, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria, intended as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). © 2013 Elsevier B.V. All rights reserved.
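
    For orientation, a minimal sketch of such a classifier is shown below, assuming pre-computed acoustical/perceptual features. The feature stand-ins and category labels are synthetic, and scikit-learn's SVC (whose libsvm backend uses an SMO-type solver) stands in for the SMO-trained variant described above.

```python
# Minimal SVM soundscape-classifier sketch on synthetic stand-in features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # stand-ins for acoustical/perceptual features
y = rng.integers(0, 3, size=200)   # three hypothetical soundscape categories

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```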

  7. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    Science.gov (United States)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo N-Particle eXtended ("MCNPX") code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.

  8. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    Science.gov (United States)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied for the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.

  9. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    International Nuclear Information System (INIS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, Jose C.; Shivpuri, Rajiv

    2007-01-01

    In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied for the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes

  10. Issues of Application of Machine Learning Models for Virtual and Real-Life Buildings

    Directory of Open Access Journals (Sweden)

    Young Min Kim

    2016-06-01

    Full Text Available The current Building Energy Performance Simulation (BEPS) tools are based on first principles. For the correct use of BEPS tools, simulationists should have an in-depth understanding of building physics, numerical methods, control logics of building systems, etc. However, it takes significant time and effort to develop a first principles-based simulation model for existing buildings—mainly due to the laborious process of data gathering, uncertain inputs, model calibration, etc. Rather than resorting to an expert’s effort, a data-driven approach (the so-called “inverse” approach) has received growing attention for the simulation of existing buildings. This paper reports a cross-comparison of three popular machine learning models (Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Process (GP)) for predicting a chiller’s energy consumption in a virtual and a real-life building. The predictions based on the three models are sufficiently accurate compared to the virtual and real measurements. This paper addresses the following issues for the successful development of machine learning models: reproducibility, selection of inputs, training period, outlying data obtained from the building energy management system (BEMS), and validation of the models. From the results of this comparative study, it was found that SVM has a disadvantage in computation time compared to ANN and GP. GP is the most sensitive to the training period among the three models.
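
    A minimal sketch of this kind of three-model cross-comparison is given below, using synthetic stand-ins for the chiller data and scikit-learn implementations of the three model families; none of the inputs or settings are from the paper.

```python
# Hedged sketch: compare ANN, SVM, and GP regressors on synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))    # stand-ins for, e.g., load, OAT, flow rate
y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVR(kernel="rbf", C=10.0),
    "GP":  GaussianProcessRegressor(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, f"R^2 = {m.score(X_te, y_te):.3f}")
```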

  11. Component simulation in problems of calculated model formation of automatic machine mechanisms

    Directory of Open Access Journals (Sweden)

    Telegin Igor

    2017-01-01

    Full Text Available The paper deals with the application of the component simulation method to the automated formation of mechanical system models, with the further possibility of their CAD realization. The purpose of the investigations is to automate the formation of CAD models of high-speed mechanisms in automatic machines and to analyze the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gaps in kinematic pairs, friction forces, and design and technological loads. As an example, the paper considers the formalization of the stages in forming a computer model of the cutting mechanism of the AV1818 cold-stamping automatic machine and methods for computing its parameters on the basis of its solid-state model.

  12. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    Science.gov (United States)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has recently been proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.

  13. Component simulation in problems of calculated model formation of automatic machine mechanisms

    OpenAIRE

    Telegin Igor; Kozlov Alexander; Zhirkov Alexander

    2017-01-01

    The paper deals with the application of the component simulation method to the automated formation of mechanical system models, with the further possibility of their CAD realization. The purpose of the investigations is to automate the formation of CAD models of high-speed mechanisms in automatic machines and to analyze the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gap...

  14. Research on Modeling and Control of Regenerative Braking for Brushless DC Machines Driven Electric Vehicles

    OpenAIRE

    Jian-ping Wen; Chuan-wei Zhang

    2015-01-01

    In order to improve the energy utilization rate of a battery-powered electric vehicle (EV) using a brushless DC machine (BLDCM), the model of the braking current generated by regenerative braking and its control method are discussed. On the basis of the equivalent circuit of the BLDCM during the regenerative braking period, the mathematical model of the braking current is established. By using an extended state observer (ESO) to observe the actual braking current and the unknown disturbances of the regenerative braking system, ...

  15. Direct Drive Synchronous Machine Models for Stability Assessment of Wind Farms

    Energy Technology Data Exchange (ETDEWEB)

    Poeller, Markus; Achilles, Sebastian [DIgSILENT GmbH, Gomaringen (Germany)

    2003-11-01

    The increasing size of wind farms requires power system stability analysis including dynamic wind generator models. For turbines above 1 MW, doubly-fed induction machines are the most widely used concept. However, especially in Germany, direct-drive wind generators based on converter-driven synchronous generator concepts have reached considerable market penetration. This paper presents converter-driven synchronous generator models of various orders that can be used for simulating transients and dynamics over a very wide time range.

  16. Model for Investigation of Operational Wind Power Plant Regimes with Doubly–Fed Asynchronous Machine in Power System

    Directory of Open Access Journals (Sweden)

    R. I. Mustafayev

    2012-01-01

    Full Text Available The paper presents a methodology for the mathematical modeling of a power system (or a part of it) when jointly operated with wind power plants (stations) that contain doubly-fed asynchronous machines used as generators. The essence and advantage of the methodology is that it efficiently mates the equations of doubly-fed asynchronous machines, written in axes rotating at the machine rotor speed, with the equations of the external electric power system, written in synchronously rotating axes.

  17. Characteristics determination of Tanka X-ray Diagnostic Machine Model RTO-125

    International Nuclear Information System (INIS)

    Trijoko, Susetyo; Nasukha; Suyati; Nugroho, Agung.

    1993-01-01

    Characteristics determination of Tanka X-ray diagnostic machine model RTO-125. The characteristics of an X-ray machine used for examining patients should be known. The characteristics studied in this paper include: X-ray beam profile, coincidence of the light field with the radiation field, peak voltage, radiation quality, stability of exposures, and linearity of exposures against time. The beam profile and radiation-field alignment were determined using X-ray film. A Wisconsin kVp test cassette was used to measure peak voltage. The quality of the radiation, represented by the half-value layer (HVL), was measured using an aluminium step-wedge. Stability and linearity of exposures were measured using an ionization chamber detector with an air volume of 40 cc. The results of this study were documented for the TANKA X-ray machine model RTO-125 of PSPKR BATAN, and the method of this study could be applied to X-ray diagnostic machines in general. (authors). 6 refs., 2 tabs., 6 figs

  18. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
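
    The flux-tube idea can be illustrated with a toy series-parallel reluctance network. All dimensions, material data, and the network topology below are illustrative assumptions, not the TFM model of the paper (which is two-dimensional and includes saturation; this sketch is linear).

```python
# Minimal magnetic-equivalent-circuit sketch: a PM drives flux through a
# series path (air gap + iron) with one parallel leakage branch.
import numpy as np

MU0 = 4e-7 * np.pi

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a uniform flux tube, R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

R_gap  = reluctance(1e-3, 4e-4)             # air gap (illustrative)
R_core = reluctance(5e-2, 4e-4, mu_r=2000)  # stator/rotor iron, linear here
R_leak = reluctance(2e-2, 1e-4)             # leakage path

# Permanent magnet as an MMF source F = H_c * l_m in series with R_pm.
l_m, A_m, H_c = 4e-3, 4e-4, 9e5
F_pm = H_c * l_m
R_pm = reluctance(l_m, A_m, mu_r=1.05)

# Series-parallel combination: (gap + core) in parallel with leakage.
R_main = R_gap + R_core
R_total = R_pm + (R_main * R_leak) / (R_main + R_leak)
flux_total = F_pm / R_total
flux_gap = flux_total * R_leak / (R_main + R_leak)  # flux-divider rule
print(f"air-gap flux ≈ {flux_gap:.2e} Wb, B ≈ {flux_gap / 4e-4:.2f} T")
```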

  19. Using cognitive modeling to improve the man-machine interface

    International Nuclear Information System (INIS)

    Newton, R.A.; Zyduck, R.C.; Johnson, D.R.

    1982-01-01

    A group of utilities from the Westinghouse Owners Group was formed in early 1980 to examine the interface requirements and to determine how they could be implemented. The products available from the major vendors were examined early in 1980 and judged not to be completely applicable. The utility group then decided to develop its own specifications for a Safety Assessment System (SAS) and, later in 1980, contracted with a company to develop the system, prepare the software and demonstrate the system on a simulator. The resulting SAS is a state-of-the-art system targeted for implementation on pressurized water reactor nuclear units. It has been designed to provide control room operators with centralized and easily understandable information from a computer-based data and display system. This paper gives an overview of the SAS plus a detailed description of one of its functional areas - called AIDS. The AIDS portion of SAS is an advanced concept which uses cognitive modeling of the operator as the basis for its design

  20. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  1. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
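
    The perturb-and-difference idea reads directly as code. The sketch below uses a synthetic dataset and a generic neural-network regressor as stand-ins for the MWA-ANN models and their forcing series.

```python
# Finite-difference sensitivity of a trained model to one forcing input.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 2))   # stand-ins for rainfall MWA, pumping MWA
y = 3 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=1000)

ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X, y)

def sensitivity(model, X, j, eps=1e-3):
    """Change in response per unit perturbation of input column j."""
    X_plus = X.copy()
    X_plus[:, j] += eps
    return (model.predict(X_plus) - model.predict(X)) / eps

for j, name in enumerate(["rainfall MWA", "groundwater-use MWA"]):
    s = sensitivity(ann, X, j)
    print(f"{name}: mean sensitivity {s.mean():+.2f}")
```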

  2. Near-term implications of a ban on new coal-fired power plants in the United States.

    Science.gov (United States)

    Newcomer, Adam; Apt, Jay

    2009-06-01

    Large numbers of proposed new coal power generators in the United States have been canceled, and some states have prohibited new coal power generators. We examine the effects on the U.S. electric power system of banning the construction of coal-fired electricity generators, which has been proposed as a means to reduce U.S. CO2 emissions. The model simulates load growth, resource planning, and economic dispatch of the Midwest Independent Transmission System Operator (ISO), Inc., Electric Reliability Council of Texas (ERCOT), and PJM under a ban on new coal generation and uses an economic dispatch model to calculate the resulting changes in dispatch order, CO2 emissions, and fuel use under three near-term (until 2030) future electric power sector scenarios. A national ban on new coal-fired power plants does not lead to CO2 reductions of the scale required under proposed federal legislation such as Lieberman-Warner but would greatly increase the fraction of time when natural gas sets the price of electricity, even with aggressive wind and demand response policies.

  3. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (eg, random forests, and LASSO) to map a large set of inexpensively computed “error indicators” (ie, features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (eg, time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
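
    A minimal sketch of the framework's regression step is shown below, using a random forest to map error-indicator features to the surrogate error; the indicator features and error values are synthetic stand-ins, not outputs of the subsurface-flow models.

```python
# Hedged sketch: regress surrogate-model error in a QoI on cheap
# "error indicator" features, then use it as a correction (use 1).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
indicators = rng.normal(size=(400, 10))   # e.g. residual norms (stand-ins)
true_error = indicators[:, 0] ** 2 + 0.3 * indicators[:, 1] \
             + 0.05 * rng.normal(size=400)

F_tr, F_te, e_tr, e_te = train_test_split(indicators, true_error,
                                          random_state=0)
error_model = RandomForestRegressor(n_estimators=200,
                                    random_state=0).fit(F_tr, e_tr)

# Correct the surrogate QoI prediction with the predicted error.
qoi_surrogate = rng.normal(size=len(F_te))   # placeholder surrogate outputs
qoi_corrected = qoi_surrogate + error_model.predict(F_te)
print(f"error-model R^2: {error_model.score(F_te, e_te):.3f}")
```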

  4. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    Science.gov (United States)

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

    Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking the traditional approach based on n-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing to more strongly influence the translation quality. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
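
    The memoization idea can be sketched for a toy output layer as follows: the O(V·H) pass over the vocabulary that yields the softmax normalization constant is paid once per context and cached, so each subsequent word query costs a single dot product. The paper additionally smooths these constants, which this sketch omits; all sizes and names are invented.

```python
# Toy NNLM output layer with a memoized softmax denominator per context.
import numpy as np

rng = np.random.default_rng(4)
V, H = 5000, 64                       # toy vocabulary and hidden sizes
W_out = rng.normal(scale=0.1, size=(H, V))

_Z_cache = {}

def log_prob(word_id, context_key, hidden):
    """log P(word | context), caching the log-sum-exp constant."""
    if context_key not in _Z_cache:
        # Full O(V*H) pass over the vocabulary, paid once per context.
        _Z_cache[context_key] = np.logaddexp.reduce(hidden @ W_out)
    # Scoring one word afterwards is a single O(H) dot product.
    return hidden @ W_out[:, word_id] - _Z_cache[context_key]

h = np.tanh(rng.normal(size=H))       # stand-in hidden state for a context
print(log_prob(42, ("the", "cat"), h))
print(log_prob(7,  ("the", "cat"), h))   # cache hit: no vocabulary pass
```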

  5. A one-dimensional Q-machine model taking into account charge-exchange collisions

    International Nuclear Information System (INIS)

    Maier, H.; Kuhn, S.

    1992-01-01

    The Q-machine is a nontrivial bounded plasma system which is excellently suited not only for fundamental plasma physics investigations but also for the development and testing of new theoretical methods for modeling such systems. However, although Q-machines have now been around for over thirty years, it appears that there exist no comprehensive theoretical models taking into account their considerable geometrical and physical complexity with a reasonable degree of self-consistency. In the present context we are concerned with the low-density, single-emitter Q-machine, for which the most widely used model is probably the (one-dimensional) ''collisionless plane-diode model'', which has originally been developed for thermionic diodes. Although the validity of this model is restricted to certain ''axial'' phenomena, we consider it a suitable starting point for extensions of various kinds. While a generalization to two-dimensional geometry (with still collisionless plasma) is being reported elsewhere, the present work represents a first extension to collisional plasma (with still one-dimensional geometry). (author) 12 refs., 2 figs

  6. Law machines: scale models, forensic materiality and the making of modern patent law.

    Science.gov (United States)

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  7. Contribution to the modelling of induction machines by fractional order; Contribution a la modelisation dynamique d'ordre non entier de la machine asynchrone a cage

    Energy Technology Data Exchange (ETDEWEB)

    Canat, S.

    2005-07-15

    The induction machine is the most widespread machine in industry. Its traditional modeling does not take into account the eddy currents in the rotor bars, which nevertheless induce strong variations of both the resistance and the inductance of the rotor. This diffusive phenomenon, called 'skin effect', can be modeled by a compact transfer function using a fractional (non-integer order) derivative. This report theoretically analyzes the electromagnetic phenomenon in a single rotor bar before approaching the rotor as a whole. The analysis is confirmed by finite-element calculations of the magnetic field, which are exploited to identify a fractional-order model of the induction machine (Levenberg-Marquardt identification method). The model is then confronted with an identification from experimental results. Finally, an automatic method is developed to approximate the dynamic model by an integer-order transfer function over a frequency band. (author)
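
    The appeal of a half-order term for skin effect can be illustrated numerically: the impedance of a conductor dominated by skin effect grows roughly as the square root of frequency, which a low-order integer model cannot reproduce. The sketch below compares the two frequency responses with arbitrary illustrative values, not parameters identified in the report.

```python
# Compare integer-order (slope +1) and half-order (slope +0.5) growth of |Z|.
import numpy as np

w = np.logspace(0, 5, 6)     # rad/s
s = 1j * w
R0, tau = 1.0, 1e-3          # illustrative values only

Z_integer    = R0 * (1 + tau * s)            # classical R-L term
Z_fractional = R0 * (1 + np.sqrt(tau * s))   # skin-effect-like s**0.5 term

for wi, zi, zf in zip(w, np.abs(Z_integer), np.abs(Z_fractional)):
    print(f"w={wi:8.0f}  |Z_int|={zi:10.2f}  |Z_frac|={zf:7.2f}")
```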

  8. Antibiotic prophylaxis for term or near-term premature rupture of membranes: metaanalysis of randomized trials.

    Science.gov (United States)

    Saccone, Gabriele; Berghella, Vincenzo

    2015-05-01

    The objective of the study was to evaluate the efficacy of antibiotic prophylaxis in women with term or near-term premature rupture of membranes. Searches were performed in MEDLINE, OVID, Scopus, ClinicalTrials.gov, the PROSPERO International Prospective Register of Systematic Reviews, EMBASE, ScienceDirect.com, MEDSCAPE, and the Cochrane Central Register of Controlled Trials with the use of a combination of key words and text words related to antibiotics, premature rupture of membranes, term, and trials from inception of each database to September 2014. We included all randomized trials of singleton gestations with premature rupture of membranes at 36 weeks or more, who were randomized to antibiotic prophylaxis or control (either placebo or no treatment). The primary outcomes included maternal chorioamnionitis and neonatal sepsis. A subgroup analysis on studies with latency more than 12 hours was planned. Before data extraction, the review was registered with the PROSPERO International Prospective Register of Systematic Reviews (registration number CRD42014013928). The metaanalysis was performed following the Preferred Reporting Item for Systematic Reviews and Meta-analyses statement. Women who received antibiotics had the same rate of chorioamnionitis (2.7% vs 3.7%; relative risk [RR], 0.73, 95% confidence interval [CI], 0.48-1.12), endometritis (0.4% vs 0.9%; RR, 0.44, 95% CI, 0.18-1.10), maternal infection (3.1% vs 4.6%; RR, 0.48, 95% CI, 0.19-1.21), and neonatal sepsis (1.0% vs 1.4%; RR, 0.69, 95% CI, 0.34-1.39). In the planned subgroup analysis, women with latency longer than 12 hours, who received antibiotics, had a lower rate of chorioamnionitis (2.9% vs 6.1%; RR, 0.49, 95% CI, 0.27-0.91) and endometritis (0% vs 2.2%; RR, 0.12, 95% CI, 0.02-0.62) compared with the control group. Antibiotic prophylaxis for term or near-term premature rupture of membranes is not associated with any benefits in either maternal or neonatal outcomes. In women with latency longer

  9. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the multi-scale decomposition of wind speed. • Extreme Learning Machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed for accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition, and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate the percentage improvements brought to the Extreme Learning Machines by these mainstream signal decomposing algorithms in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition performs best in the three-step forecasting results, while the Wavelet Packet Decomposition performs best in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition outperform the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, in all the step predictions; and (4) the proposed algorithms are effective in accurate wind speed prediction
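
    A minimal sketch of the decompose-then-forecast idea follows, with a simple moving-average split standing in for the wavelet/EMD decompositions of the paper and a bare-bones Extreme Learning Machine (random hidden layer plus least-squares output weights). The wind series is synthetic and the evaluation is in-sample, for illustration only.

```python
# Decompose a series, train one ELM per component, sum the forecasts.
import numpy as np

rng = np.random.default_rng(5)

def elm_fit(X, y, n_hidden=50):
    """ELM training: random input weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, beta

def elm_predict(model, X):
    W, beta = model
    return np.tanh(X @ W) @ beta

def embed(series, lag=6):
    """Lagged-window inputs and one-step-ahead targets."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

t = np.arange(2000)
wind = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(scale=0.8, size=t.size)

smooth = np.convolve(wind, np.ones(12) / 12, mode="same")  # low-frequency part
residual = wind - smooth                                   # high-frequency part

pred = np.zeros(len(wind) - 6)
for comp in (smooth, residual):
    X, y = embed(comp)
    pred += elm_predict(elm_fit(X, y), X)   # in-sample, illustration only

rmse = np.sqrt(np.mean((pred - wind[6:]) ** 2))
print(f"one-step in-sample RMSE: {rmse:.3f} m/s")
```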

  10. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    Science.gov (United States)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
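
    The emulation recipe itself is simple to sketch: sample the expensive scheme, fit a cheap regressor, and substitute it at run time. The "physics" function and regressor choice below are toy stand-ins, not Unified Model components.

```python
# Model emulation sketch: replace an expensive function with a trained
# regressor that approximates its input-output behavior.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expensive_physics(state):
    """Stand-in for a sub-grid scheme (e.g. a tendency calculation)."""
    return np.sin(3 * state[:, 0]) * np.exp(-state[:, 1] ** 2)

rng = np.random.default_rng(6)
states = rng.uniform(-1, 1, size=(5000, 2))
tendencies = expensive_physics(states)

emulator = GradientBoostingRegressor().fit(states, tendencies)

# At run time the emulator is called in place of the scheme.
test = rng.uniform(-1, 1, size=(1000, 2))
err = emulator.predict(test) - expensive_physics(test)
print(f"emulator RMSE: {np.sqrt(np.mean(err ** 2)):.4f}")
```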

  11. Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality

    Science.gov (United States)

    Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.

    2017-12-01

    Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of the main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.

  12. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  13. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included.
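
    One hedged reading of the model's central step, composing per-axis length errors vectorially along the measured displacement, is sketched below; the per-axis error coefficients and the linear error form are illustrative assumptions, not calibrated CMM values from the paper.

```python
# Vectorial composition of per-axis length errors over a point-to-point move.
import numpy as np

# Assumed per-axis linear error model: e_i = a_i + b_i * L_i (mm in, um out).
a = np.array([0.8, 1.0, 0.9])          # um, offset term per axis (x, y, z)
b = np.array([2.0, 2.5, 1.8]) * 1e-3   # um per mm of travel per axis

def length_error(p_start, p_end):
    """Compose axis errors along the direction cosines of the move."""
    d = np.abs(np.asarray(p_end, float) - np.asarray(p_start, float))  # mm
    e_axis = a + b * d                       # um per axis
    u = d / np.linalg.norm(d)                # direction cosines
    return np.linalg.norm(e_axis * u)        # composed length error, um

print(f"estimated length error: {length_error((0, 0, 0), (100, 50, 25)):.2f} um")
```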

  14. Response surface modelling of tool electrode wear rate and material removal rate in micro electrical discharge machining of Inconel 718

    DEFF Research Database (Denmark)

    Puthumana, Govindan

    2017-01-01

    conductivity and high strength, making it extremely difficult to machine. Micro-Electrical Discharge Machining (Micro-EDM) is a non-conventional method that has the potential to overcome these restrictions for the machining of Inconel 718. Response Surface Method (RSM) was used for modelling the tool Electrode Wear...

  15. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  16. Extended Park's transformation for 2×3-phase synchronous machine and converter phasor model with representation of AC harmonics

    DEFF Research Database (Denmark)

    Knudsen, Hans

    1995-01-01

    A model of the 2×3-phase synchronous machine is presented using a new transformation based on the eigenvectors of the stator inductance matrix. The transformation fully decouples the stator inductance matrix, and this leads to an equivalent diagram of the machine with no mutual couplings...

  17. Near term hybrid passenger vehicle development program. Phase I. Appendices C and D. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    The derivation of and actual preliminary design of the Near Term Hybrid Vehicle (NTHV) are presented. The NTHV uses a modified GM Citation body, a VW Rabbit turbocharged diesel engine, a 24 kW compound dc electric motor, a modified GM automatic transmission, and an on-board computer for transmission control. The following NTHV information is presented: the results of the trade-off studies are summarized; the overall vehicle design; the selection of the design concept and the base vehicle (the Chevrolet Citation), the battery pack configuration, structural modifications, occupant protection, vehicle dynamics, and aerodynamics; the powertrain design, including the transmission, coupling devices, engine, motor, accessory drive, and powertrain integration; the motor controller; the battery type, duty cycle, charger, and thermal requirements; the control system (electronics); the identification of requirements, software algorithm requirements, processor selection and system design, sensor and actuator characteristics, displays, diagnostics, and other topics; environmental system including heating, air conditioning, and compressor drive; the specifications, weight breakdown, and energy consumption measures; advanced technology components, and the data sources and assumptions used. (LCL)

  18. Alternative routes to improved fuel utilization: Analysis of near-term economic incentives

    International Nuclear Information System (INIS)

    Salo, J.P.; Vieno, T.; Vira, J.

    1984-01-01

    The potential for savings in nuclear fuel cycle costs is discussed from the point of view of a single utility. The analysis concentrates on the existing and near-term economic incentives for improved fuel utilization, in the context of a small country without domestic fuel cycle services. In the uranium fuel cycle, extended burnup produces savings in the uranium feed as well as in the fuel fabrication and waste management requirements. The front-end fuel cycle cost impact is evaluated for BWRs. In the back end, the situation depends more on the concrete back-end solution. Estimates for savings in the cost of direct disposal of spent fuel are presented for a Finnish case. The economics of recycling is reviewed from a recent study on the use of MOX fuel in the Finnish BWRs. The results from a comparison with the once-through alternative show that spent fuel reprocessing with consequent recycling of uranium and plutonium would be economically justified only with very high uranium prices. (author)

  19. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Under contract to the Jet Propulsion Laboratory of the California Institute of Technology, Minicars conducted Phase I of the Near-Term Hybrid Passenger Vehicle (NTHV) Development Program. This program led to the preliminary design of a hybrid (electric and internal combustion engine powered) vehicle and fulfilled the objectives set by JPL. JPL requested that the report address certain specific topics. A brief summary of all Phase I activities is given initially; the hybrid vehicle preliminary design is described in Sections 4, 5, and 6. Table 2 of the Summary lists performance projections for the overall vehicle and some of its subsystems. Section 4.5 gives references to the more-detailed design information found in the Preliminary Design Data Package (Appendix C). Alternative hybrid-vehicle design options are discussed in Sections 3 through 6. A listing of the tradeoff study alternatives is included in Section 3. Computer simulations are discussed in Section 9. Section 8 describes the supporting economic analyses. Reliability and safety considerations are discussed specifically in Section 7 and are mentioned in Sections 4, 5, and 6. Section 10 lists conclusions and recommendations arrived at during the performance of Phase I. A complete bibliography follows the list of references.

  20. California Power-to-Gas and Power-to-Hydrogen Near-Term Business Case Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Eichman, Josh [National Renewable Energy Lab. (NREL), Golden, CO (United States); Flores-Espino, Francisco [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    Flexible operation of electrolysis systems represents an opportunity to reduce the cost of hydrogen for a variety of end-uses while also supporting grid operations and thereby enabling greater renewable penetration. California is an ideal location to realize that value on account of growing renewable capacity and markets for hydrogen as a fuel cell electric vehicle (FCEV) fuel, for refineries, and for other end-uses. Shifting the production of hydrogen to avoid high-cost electricity, participating in utility and system operator markets, and installing renewable generation to avoid utility charges and increase revenue from the Low Carbon Fuel Standard (LCFS) program can result in around a $2.5/kg (21%) reduction in the production and delivery cost of hydrogen from electrolysis. This reduction can be achieved without impacting the consumers of hydrogen. Additionally, future strategies for reducing hydrogen cost were explored, including a lower cost of capital, participation in the Renewable Fuel Standard program, capital cost reduction, and increased LCFS value. Each must be achieved independently, and each could contribute to further reductions. Under the assumptions in this study, a 29% reduction in cost was found if all future strategies are realized. Flexible hydrogen production can simultaneously improve the performance of, and decarbonize, multiple energy sectors. The lessons learned from this study should be used to understand near-term cost drivers and to support longer-term research activities to further improve the cost effectiveness of grid-integrated electrolysis systems.

  1. Evaluation of the Terminal Precision Scheduling and Spacing System for Near-Term NAS Application

    Science.gov (United States)

    Thipphavong, Jane; Martin, Lynne Hazel; Swenson, Harry N.; Lin, Paul; Nguyen, Jimmy

    2012-01-01

    NASA has developed a capability for terminal area precision scheduling and spacing (TAPSS) to provide higher capacity and more efficiently manage arrivals during peak demand periods. This advanced technology is NASA's vision for the NextGen terminal metering capability. A set of human-in-the-loop experiments was conducted to evaluate the performance of the TAPSS system for near-term implementation. The experiments evaluated the TAPSS system under the current terminal routing infrastructure to validate operational feasibility. A second goal of the study was to measure the benefit of the Center and TRACON advisory tools to help prioritize the requirements for controller radar display enhancements. Simulation results indicate that using the TAPSS system provides benefits under current operations, supporting a 10% increase in airport throughput. Enhancements to Center decision support tools had limited impact on improving the efficiency of terminal operations, but did provide more fuel-efficient advisories to achieve scheduling conformance within 20 seconds. The TRACON controller decision support tools were found to provide the most benefit, by improving the precision in schedule conformance to within 20 seconds, reducing the number of arrivals having lateral path deviations by 50% and lowering subjective controller workload. Overall, the TAPSS system was found to successfully develop an achievable terminal arrival metering plan that was sustainable under heavy traffic demand levels and reduce the complexity of terminal operations when coupled with the use of the terminal controller advisory tools.

  2. Ceramic composites for near term reactor application - HTR2008-58050

    International Nuclear Information System (INIS)

    Snead, L. L.; Katoh, Y.; Windes, W. E.; Shinavski, R. J.; Burchell, T. D.

    2008-01-01

    Currently, two composite types are being developed for in-core application: carbon fiber carbon composite (CFC), and silicon carbide fiber composite (SiC/SiC). Irradiation effects studies have been carried out over the past few decades, yielding radiation-tolerant CFCs and a SiC/SiC composite with no apparent degradation in mechanical properties to very high neutron exposure. While CFCs can be engineered with significantly higher thermal conductivity than SiC/SiC, and offer a slight advantage in manufacturability, they do have a neutron irradiation-limited lifetime. The SiC composite, while possessing lower thermal conductivity (especially following irradiation), appears to have mechanical properties insensitive to irradiation. Both materials are currently being produced in sizes much larger than those considered for nuclear application. In addition to materials aspects, results of programs focusing on practical aspects of deploying composites for near-term reactors will be discussed. In particular, significant progress has been made in the fabrication, testing, and qualification of composite gas-cooled reactor control rod sheaths and the ASTM standardization required for eventual qualification. (authors)

  3. Three near term commercial markets in space and their potential role in space exploration

    Science.gov (United States)

    Gavert, Raymond B.

    2001-02-01

    Independent market studies related to Low Earth Orbit (LEO) commercialization have identified three near term markets that have return-on-investment potential. These markets are: (1) Entertainment (2) Education (3) Advertising/sponsorship. Commercial activity is presently underway focusing on these areas. A private company is working with the Russians on a commercial module attached to the ISS that will involve entertainment and probably the other two activities as well. A separate corporation has been established to commercialize the Russian Mir Space Station with entertainment and promotional advertising as important revenue sources. A new startup company has signed an agreement with NASA for commercial media activity on the International Space Station (ISS). Profit-making education programs are being developed by a private firm to allow students to play the role of an astronaut and work closely with space scientists and astronauts. It is expected that the success of these efforts on the ISS program will extend to exploration missions beyond LEO. The objective of this paper is to extrapolate some of the LEO commercialization experiences to see what might be expected in space exploration missions to Mars, the Moon and beyond.

  4. Round and round: Little consensus exists on the near-term future of natural gas

    International Nuclear Information System (INIS)

    Lunan, D.

    2004-01-01

    The various combinations of factors influencing natural gas supply and demand and the future price of natural gas is discussed. Expert opinion is that prices will continue to track higher, demand will grow with the surging American economy, and supplies will remain constrained providing more fuel for another cycle of ever-higher prices. There is also considerable concern about the continuing rise in demand and tight supply situation in the near term, and the uncertainty about when, or even whether, major new sources will become available. The prediction is that the overriding impact of declining domestic supplies will put a premium on natural gas at any given time. Overall, it appears certain that higher prices are here to stay: as a result, industrial gas users will see their competitiveness eroded, and individual consumers will see their heating bills rise. Governments, too, will be affected as the increasing cost of natural gas will slow down the pace of conversion of coal-fired power generating plants to natural gas, reducing anticipated emissions benefits and in the process compromising environmental goals. Current best estimates put prices for the 2004/2005 heating season at about US$5.40 per MMBtu, whereas the longer term price range is estimated to lie in the range of US$4.75 to US$5.25 per MMBtu. 2 figs

  5. A Collaboration Model for Community-Based Software Development with Social Machines

    Directory of Open Access Journals (Sweden)

    Dave Murray-Rust

    2016-02-01

    Full Text Available Crowdsourcing is generally used for tasks with minimal coordination, providing limited support for dynamic reconfiguration. Modern systems, exemplified by social machines, are subject to continual flux in both the client and development communities and their needs. To support crowdsourcing of open-ended development, systems must dynamically integrate human creativity with machine support. While workflows can be used to handle structured, predictable processes, they are less suitable for social machine development and its attendant uncertainty. We present models and techniques for coordination of human workers in crowdsourced software development environments. We combine the Social Compute Unit—a model of ad-hoc human worker teams—with versatile coordination protocols expressed in the Lightweight Social Calculus. This allows us to combine coordination and quality constraints with dynamic assessments of end-user desires, dynamically discovering and applying development protocols.

  6. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine-learning-based, data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian networks and hidden Markov models are introduced as examples of widely used data-driven classification/modeling strategies.
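
    As a toy illustration of the probabilistic sequence models named above, the sketch below runs the forward algorithm of a small discrete hidden Markov model over a trace of security events; every state, observation symbol, and probability in it is an invented assumption, not a parameter from the chapter.

```python
import numpy as np

# Minimal sketch of a discrete hidden Markov model forward pass for scoring
# a sequence of observed security events. All numbers are illustrative.
pi = np.array([0.9, 0.1])                  # initial: [benign, compromised]
A = np.array([[0.95, 0.05],                # state transition matrix
              [0.10, 0.90]])
B = np.array([[0.7, 0.2, 0.1],             # P(observation | state)
              [0.1, 0.3, 0.6]])            # observations: 0=normal, 1=scan, 2=exfil

def forward_likelihood(obs):
    """Return P(observation sequence) under the HMM via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate and weight by emission
    return alpha.sum()

print(forward_likelihood([0, 0, 1, 2]))    # likelihood of an escalating trace
```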

  7. Fault Tolerance Automotive Air-Ratio Control Using Extreme Learning Machine Model Predictive Controller

    OpenAIRE

    Pak Kin Wong; Hang Cheong Wong; Chi Man Vong; Tong Meng Iong; Ka In Wong; Xianghui Gao

    2015-01-01

    Effective air-ratio control is desirable to maintain the best engine performance. However, traditional air-ratio control assumes the lambda sensor located at the tail pipe works properly and relies strongly on the air-ratio feedback signal measured by the lambda sensor. When the sensor is warming up during cold start or under failure, the traditional air-ratio control no longer works. To address this issue, this paper utilizes an advanced modelling technique, kernel extreme learning machine (...
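
    For readers unfamiliar with the modelling technique named in the title, the following minimal sketch shows the core of an extreme learning machine: a random, fixed hidden layer with output weights solved in closed form by least squares. The data and network size are synthetic assumptions, not the kernel-based variant or engine data used in the paper.

```python
import numpy as np

# Minimal extreme learning machine (ELM) regressor sketch: hidden-layer
# weights are random and never trained; only output weights are fitted.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # stand-in engine inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]  # stand-in measured air-ratio

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (fixed)
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)                  # random nonlinear feature map

beta, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)  # closed-form output weights

y_pred = hidden(X) @ beta
print("training RMSE:", np.sqrt(np.mean((y - y_pred) ** 2)))
```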

  8. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun) which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics that were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
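
    The sketch below illustrates the general idea of machine-learnt blending between forecast systems, using a ridge regression over synthetic component forecasts; the component models, error levels, and learned weights are assumptions for illustration, not part of the Watt-sun system.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Learn blending weights over several imperfect forecasts of the same target.
rng = np.random.default_rng(1)
actual = rng.uniform(0, 1000, size=500)               # observed irradiance, W/m^2
forecasts = np.column_stack([
    actual + rng.normal(0, 120, 500),                 # model A: unbiased, noisy
    actual + rng.normal(50, 80, 500),                 # model B: biased, tighter
    actual + rng.normal(0, 200, 500),                 # persistence baseline
])

blender = Ridge(alpha=1.0).fit(forecasts, actual)     # machine-learnt blending
blended = blender.predict(forecasts)

rmse = lambda e: np.sqrt(np.mean(e ** 2))
print("best single model RMSE:", min(rmse(f - actual) for f in forecasts.T))
print("blended RMSE:", rmse(blended - actual))
```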

  9. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new, unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction, and validation. Concerns and issues specific to each step in this kind of data-driven modeling are discussed. © 2011 Bentham Science Publishers
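
    A minimal sketch of the proteochemometric setup described above: each training example concatenates a protein descriptor block with a ligand descriptor block, so a single model spans many protein-ligand combinations. The descriptors, interaction values, and the choice of a random forest here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Proteochemometrics sketch: rows pair protein and ligand descriptor blocks.
rng = np.random.default_rng(0)
n_pairs = 300
protein_desc = rng.normal(size=(n_pairs, 20))    # e.g. sequence/structure descriptors
ligand_desc = rng.normal(size=(n_pairs, 30))     # e.g. topological descriptors
X = np.hstack([protein_desc, ligand_desc])       # joint protein-ligand representation
y = protein_desc[:, 0] * ligand_desc[:, 0] + rng.normal(0, 0.3, n_pairs)  # affinity

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
```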

  10. Advancing Control for Shield Tunneling Machine by Backstepping Design with LuGre Friction Model

    Directory of Open Access Journals (Sweden)

    Haibo Xie

    2014-01-01

    Full Text Available Shield tunneling machines are widely applied in underground tunnel construction. The shield machine is a complex machine with large momentum and an ultralow advancing speed. The working conditions underground are complicated and unpredictable, which makes controlling the advancing speed difficult. This paper focuses on advancing motion control along the desired tunnel axis. A three-state dynamic model was established, considering the unknown front-face earth pressure force and the unknown friction force. The LuGre friction model was introduced to describe the friction force. A backstepping design was then proposed to make the tracking error converge to zero. For comparison, a controller without the LuGre model was also designed. Tracking simulations of speed regulation, and simulations in which the front-face earth pressure changes, were carried out to show the transient performance of the proposed controller. The results indicate that the controller has good tracking performance even under changing geological conditions. Speed-regulation experiments were carried out to validate the controllers.
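
    For reference, the sketch below integrates the standard LuGre friction law (internal bristle state z with a Stribeck curve g(v)) under a constant ultralow advancing speed; all parameter values are illustrative assumptions rather than those identified for the shield machine.

```python
import numpy as np

# LuGre friction model sketch, integrated with forward Euler.
sigma0, sigma1, sigma2 = 1e5, 300.0, 40.0   # bristle stiffness/damping, viscous term
Fc, Fs, vs = 50.0, 80.0, 0.01               # Coulomb/static friction, Stribeck velocity

def g(v):
    # Stribeck curve: friction level decays from Fs toward Fc with speed
    return Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)

def lugre_step(z, v, dt):
    # bristle state dynamics: dz/dt = v - sigma0*|v|/g(v) * z
    zdot = v - sigma0 * abs(v) / g(v) * z
    z_new = z + zdot * dt
    F = sigma0 * z_new + sigma1 * zdot + sigma2 * v   # total friction force
    return z_new, F

z, dt = 0.0, 1e-4
for _ in range(2000):                       # constant ultralow advancing speed
    z, F = lugre_step(z, v=0.005, dt=dt)
print("steady-state friction force:", F)
```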

  11. Study on intelligent processing system of man-machine interactive garment frame model

    Science.gov (United States)

    Chen, Shuwang; Yin, Xiaowei; Chang, Ruijiang; Pan, Peiyun; Wang, Xuedi; Shi, Shuze; Wei, Zhongqian

    2018-05-01

    A man-machine interactive garment frame model intelligent processing system is studied in this paper. The system consists of several sensor devices, a voice processing module, mechanical moving parts, and a centralized data acquisition device. The sensor devices collect information on environmental changes induced by a person approaching the garment frame model; the data acquisition device collects the information registered by the sensor devices; the voice processing module performs speaker-independent speech recognition to achieve human-machine interaction; and the mechanical moving parts make the corresponding mechanical responses to the information processed by the data acquisition device. There is a one-way connection between the sensor devices and the data acquisition device, a two-way connection between the data acquisition device and the voice processing module, and a one-way connection from the data acquisition device to the mechanical moving parts. The intelligent processing system can judge whether it needs to interact with the customer, realizing man-machine interaction in place of the current rigid frame model.

  12. Improving virtual screening predictive accuracy of Human kallikrein 5 inhibitors using machine learning models.

    Science.gov (United States)

    Fang, Xingang; Bagui, Sikha; Bagui, Subhash

    2017-08-01

    The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need an understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset from the Human kallikrein 5 (hK5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK5) target suggested a feasible descriptor/model selection strategy for similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
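
    The following sketch mirrors the general shape of the reported pipeline, a logistic regression classifier scored by cross-validated precision and sensitivity, but on random stand-in data rather than Signature descriptors from the PubChem assay.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score

# Virtual screening sketch: descriptor matrix X is a random stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))            # descriptor matrix (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000)) > 1.5   # sparse actives

clf = LogisticRegression(max_iter=1000)
pred = cross_val_predict(clf, X, y, cv=5)  # cross-validation test, as in the paper

print("precision:", precision_score(y, pred))
print("sensitivity (recall):", recall_score(y, pred))
```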

  13. Characterization and modeling of 2D-glass micro-machining by spark-assisted chemical engraving (SACE) with constant velocity

    International Nuclear Information System (INIS)

    Didar, Tohid Fatanat; Dolatabadi, Ali; Wüthrich, Rolf

    2008-01-01

    Spark-assisted chemical engraving (SACE) is an unconventional micro-machining technology based on electrochemical discharge used for micro-machining nonconductive materials. SACE 2D micro-machining with constant speed was used to machine micro-channels in glass. Parameters affecting the quality and geometry of the micro-channels machined by SACE technology with constant velocity were presented and the effect of each of the parameters was assessed. The effect of chemical etching on the geometry of micro-channels under different machining conditions has been studied, and a model is proposed for characterization of the micro-channels as a function of machining voltage and applied speed

  14. Prediction of near-term increases in suicidal ideation in recently depressed patients with bipolar II disorder using intensive longitudinal data.

    Science.gov (United States)

    Depp, Colin A; Thompson, Wesley K; Frank, Ellen; Swartz, Holly A

    2017-01-15

    There are substantial gaps in understanding near-term precursors of suicidal ideation in bipolar II disorder. We evaluated whether repeated patient-reported mood and energy ratings predicted subsequent near-term increases in suicide ideation. Secondary data were used from 86 depressed adults with bipolar II disorder enrolled in one of 3 clinical trials evaluating Interpersonal and Social Rhythm Therapy and/or pharmacotherapy as treatments for depression. Twenty weeks of daily mood and energy ratings and weekly Hamilton Depression Rating Scale (HDRS) were obtained. Penalized regression was used to model trajectories of daily mood and energy ratings in the 3-week window prior to HDRS Suicide Item ratings. Participants completed an average of 68.6 (sd=52) days of mood and energy ratings. Aggregated across the sample, 22% of the 1675 HDRS Suicide Item ratings were non-zero, indicating the presence of at least some suicidal thoughts. A cross-validated model with longitudinal ratings of energy and depressed mood within the three weeks prior to HDRS ratings resulted in an AUC of 0.91 for HDRS Suicide Item >2, accounting for twice the variation when compared to baseline HDRS ratings. Energy, at both low and high levels, was an earlier predictor than mood. Data derived from a heterogeneous treated sample may not generalize to naturalistic samples. Identified suicidal behavior was absent from the sample, so it could not be predicted. Prediction models coupled with intensively gathered longitudinal data may shed light on the dynamic course of near-term risk factors for suicidal ideation in bipolar II disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Developing a dengue forecast model using machine learning: A case study in China.

    Science.gov (United States)

    Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-10-01

    In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
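
    A minimal sketch of the winning approach, support vector regression over weekly search-index and climate predictors scored by RMSE under time-aware cross-validation, is shown below; all series are synthetic and the feature scales are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# SVR dengue-forecast sketch on synthetic weekly predictors.
rng = np.random.default_rng(0)
weeks = 200
X = np.column_stack([
    rng.uniform(0, 100, weeks),    # search index (assumed scale)
    rng.uniform(10, 32, weeks),    # mean temperature, deg C
    rng.uniform(40, 95, weeks),    # relative humidity, %
    rng.uniform(0, 80, weeks),     # rainfall, mm
])
y = 0.4 * X[:, 0] + 2.0 * np.maximum(X[:, 1] - 25, 0) + rng.normal(0, 5, weeks)

model = SVR(kernel="rbf", C=10.0)
scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5),
                         scoring="neg_root_mean_squared_error")
print("cross-validated RMSE:", -scores.mean())
```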

  16. Are there intelligent Turing machines?

    OpenAIRE

    Bátfai, Norbert

    2015-01-01

    This paper introduces a new computing model, called orchestrated machines, based on cooperation among Turing machines. Like universal Turing machines, orchestrated machines are also designed to simulate Turing machines, but they can also modify the original operation of the included Turing machines to create a new layer of some kind of collective behavior. Using this new model we can define some interesting notions related to the cooperation ability of Turing machines, such as the intelligence quo...

  17. Multifrequency spiral vector model for the brushless doubly-fed induction machine

    DEFF Research Database (Denmark)

    Han, Peng; Cheng, Ming; Zhu, Xinkai

    2017-01-01

    This paper presents a multifrequency spiral vector model for both steady-state and dynamic performance analysis of the brushless doubly-fed induction machine (BDFIM) with a nested-loop rotor. Winding function theory is first employed to give a full picture of the inductance characteristics analytically, revealing the underlying relationship between harmonic components of stator-rotor mutual inductances and the airgap magnetic field distribution. Different from existing vector models, which only model the fundamental components of mutual inductances, the proposed vector model takes into consideration the low-order space harmonic coupling by incorporating nonsinusoidal inductances into the modeling process. A new model order reduction approach is then proposed to transform the nested-loop rotor into an equivalent single-loop one. The effectiveness of the proposed modelling method is verified by 2D

  18. Meeting the near-term demand for hydrogen using nuclear energy in competitive power markets

    International Nuclear Information System (INIS)

    Miller, Alistair I.; Duffey, Romney B.

    2004-01-01

    Hydrogen is becoming the reference fuel for future transportation and, in the USA in particular, a vision for its production from advanced nuclear reactors has been formulated. Fulfillment of this vision depends on its economics in 2020 or later. Prior to 2020, hydrogen needs to gain a substantial foothold without incurring excessive costs for the establishment of the distribution network for the new fuel. Water electrolysis and steam-methane reforming (SMR) are the existing hydrogen-production technologies, used for small-scale and large-scale production, respectively. Provided electricity is produced at costs expected for nuclear reactors of near-term design, electrolysis appears to offer superior economics when the SMR-related costs of distribution and sequestration (or an equivalent emission levy) are included. This is shown to hold at least until several percentage points of road transport have been converted to hydrogen. Electrolysis has large advantages over SMR in being almost scale-independent and in allowing local production. The key requirements for affordable electrolysis are low capital cost and relatively high utilization, although the paper shows that it should be advantageous to avoid the peaks of electricity demand and cost. The electricity source must enable high utilization as well as being itself low-cost and emissions-free. By using off-peak electricity, no extra costs for enhanced electricity distribution should be incurred. The longer-term supply of hydrogen may ultimately evolve away from low-temperature water electrolysis, but it appears to be an excellent technology for early deployment, capable of supplying hydrogen at prices not dissimilar from today's costs for gasoline and diesel, provided the vehicle's power unit is a fuel cell. (author)

  19. AP1000 will meet the challenges of near-term deployment

    International Nuclear Information System (INIS)

    Matzie, Regis A.

    2008-01-01

    The world demand for energy is growing rapidly, particularly in developing countries that are trying to raise the standard of living for billions of people, many of whom do not have access to electricity or clean water. Climate change and concern over increased emissions of greenhouse gases have called into question a future primary reliance on fossil fuels. With the projected worldwide increase in energy demand, concern for the environmental impact of carbon emissions, and the recent price volatility of fossil fuels, nuclear energy is undergoing a rapid resurgence. This 'nuclear renaissance' is broad-based, reaching across Asia, North America, and Europe, as well as selected countries in Africa and South America. Many countries have publicly expressed their intentions to pursue the construction of new nuclear energy plants. Some countries that have previously turned away from commercial nuclear energy are reconsidering the advisability of this decision. This renaissance is facilitated by the availability of more advanced reactor designs than are operating today, with improved safety, economy, and operations. One such design, the Westinghouse AP1000 advanced passive plant, has been a long time in the making! The development of this passive technology started over two decades ago from an embryonic belief that a new approach to design was needed to spawn a nuclear renaissance. The principal challenges were seen as ensuring reactor safety with less reliance on operator actions and overcoming the high capital cost of nuclear plants. The AP1000 design is based on the use of innovative passive technology and modular construction, which require significantly less equipment and commodities and facilitate a more rapid construction schedule. Because Westinghouse had the vision and the perseverance to continue the development of this passive technology, the AP1000 design is ready to meet today's challenge of near-term deployment.

  20. Near-term viability of solar heat applications for the federal sector

    Science.gov (United States)

    Williams, T. A.

    1991-12-01

    Solar thermal technologies are capable of providing heat across a wide range of temperatures, making them potentially attractive for meeting energy requirements for industrial process heat applications and institutional heating. The energy savings that could be realized by solar thermal heat are quite large, potentially several quads annually. Although technologies for delivering heat at temperatures above 100 C currently exist within industry, only a fairly small number of commercial systems have been installed to date. The objective of this paper is to investigate and discuss the prospects for near term solar heat sales to federal facilities as a mechanism for providing an early market niche to aid the widespread development and implementation of the technology. The specific technical focus is on mid-temperature (100 to 350 C) heat demands that could be met with parabolic trough systems. Federal facilities have several features that may make them more attractive for solar heat applications than other sectors. Key features are specific policy mandates for conserving energy; a long term planning horizon with well defined decision criteria; and prescribed economic return criteria for conservation and solar investments that are generally less stringent than the investment criteria used by private industry. Federal facilities also face specific difficulties in the sale of solar heat technologies that differ from those of other sectors, and strategies to mitigate these difficulties will be important. For the baseline scenario developed in this paper, the solar heat application was economically competitive with heat provided by natural gas: the system levelized energy cost was $5.9/MBtu for the solar heat case, compared to $6.8/MBtu for the life-cycle fuel cost of the natural gas case. Third-party ownership would also be attractive to federal users, since it would guarantee energy savings and would not require initial federal funds.

  1. Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions

    Science.gov (United States)

    Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.

    2017-12-01

    Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
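
    As a sketch of the lagged-correlation screening described above, the snippet below correlates a synthetic satellite-derived vegetation index with a downstream turbidity series at several lags; the variables, lag, and noise level are assumptions for illustration.

```python
import numpy as np

# Lagged-correlation screening sketch on synthetic weekly series.
rng = np.random.default_rng(0)
n = 156                                   # ~3 years of weekly observations
ndvi = rng.normal(size=n)                 # vegetation index (assumed predictor)
turbidity = np.roll(ndvi, 4) + rng.normal(0, 0.5, n)   # responds ~4 weeks later

for lag in range(0, 9):
    # correlate the index against water quality shifted 'lag' weeks forward
    r = np.corrcoef(ndvi[:n - lag], turbidity[lag:])[0, 1]
    print(f"lag {lag} weeks: r = {r:+.2f}")
```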

  2. Development of near-term batteries for electric vehicles. Summary report, October 1977-September 1979

    Energy Technology Data Exchange (ETDEWEB)

    Rajan, J.B. (comp.)

    1980-06-01

    The status and results through FY 1979 of the Near-Term Electric Vehicle Battery Project of the Argonne National Laboratory are summarized. This project conducts R and D on lead-acid, nickel/zinc and nickel/iron batteries with the objective of achieving commercialization in electric vehicles in the 1980's. Key results of the R and D indicate major technology advancements and achievement of most of the FY 1979 performance goals. In the lead-acid system, the specific energy was increased from less than 30 Wh/kg to over 40 Wh/kg at the C/3 rate; the peak power density improved from 70 W/kg to over 110 W/kg at 50% state of charge; and a deep-discharge life of over 200 cycles was demonstrated. In the nickel/iron system, a specific energy of 48 Wh/kg was achieved, a peak power of about 100 W/kg was demonstrated, and a life of 36 cycles was obtained. In the nickel/zinc system, specific energies of up to 64 Wh/kg were shown, peak powers of 133 W/kg were obtained, and a life of up to 120 cycles was measured. Future R and D will emphasize increased cycle life for nickel/zinc batteries and increased cycle life and specific energy for lead-acid and nickel/iron batteries. Testing of 145 cells was completed by the NBTL. Cell evaluation included a full set of performance tests plus the application of a simulated power profile equivalent to the power demands of an electric vehicle in stop-start urban driving. Simplified test profiles which approximate electric vehicle demands are also described.

  3. An Examination of Selected Datacom Options for the Near-Term Implementation of Trajectory Based Operations

    Science.gov (United States)

    Johnson, Walter W.; Lachter, Joel B.; Battiste, Vernol; Lim, Veranika; Brandt, Summer L.; Koteskey, Robert W.; Dao, Arik-Quang V.; Ligda, Sarah V.; Wu, Shu-Chieh

    2011-01-01

    A primary feature of the Next Generation Air Transportation System (NextGen) is trajectory based operations (TBO). Under TBO, aircraft flight plans are known to computer systems on the ground that aid in scheduling and separation. The Future Air Navigation System (FANS) was developed to support TBO, but relatively few aircraft in the US are FANS-equipped. Thus, any near-term implementation must provide TBO procedures for non-FANS aircraft. Previous research has explored controller clearances, but any implementation must also provide procedures for aircraft requests. The work presented here aims to surface issues surrounding TBO communication procedures for non-FANS aircraft and for aircraft requesting deviations around weather. Three types of communication were explored: Voice, FANS, and ACARS (Aircraft Communications Addressing and Reporting System). ACARS and FANS are datacom systems that differ in that FANS allows uplinked flight plans to be loaded into the Flight Management System (FMS), while ACARS delivers flight plans as text that must be entered manually via the Control Display Unit (CDU). Sixteen pilots (eight two-person flight decks) and four controllers participated in 32 20-minute scenarios that required the flight decks to navigate through convective weather as they approached their top of descents (TODs). Findings: The rate of non-conformance was higher than anticipated, with aircraft off path more than 20% of the time. Controllers did not differentiate between the ACARS and FANS datacom, and were mixed in their preference for Voice vs. datacom (ACARS and FANS). Pilots uniformly preferred Voice to datacom, particularly ACARS. Much of their dislike appears to result from the slow response times in the datacom conditions. As a result, participants frequently resorted to voice communication. These results imply that, before implementing TBO in environments where pilots make weather deviation requests, further research is needed to develop communication

  4. Evolution of near term PBMR steam and cogeneration applications - HTR2008-58219

    International Nuclear Information System (INIS)

    Kuhr, R. W.; Hannink, R.; Paul, K.; Kriel, W.; Greyvenstein, R.; Young, R.

    2008-01-01

    US and international applications for large onsite cogeneration (steam and power) systems are emerging as a near term market for the PBMR. The South African PBMR demonstration project applies a high temperature (900 deg. C) Brayton cycle for high efficiency power generation. In addition, a number of new applications are being investigated using an intermediate temperature range (700-750 deg. C) with a simplified heat supply system design. This intermediate helium delivery temperature supports conventional steam Rankine cycle designs at higher efficiencies than obtained from water type reactor systems. These designs can be adapted for cogeneration of steam, similar to the design of gas turbine cogeneration plants that supply steam and power at many industrial sites. This temperature range allows use of conventional or readily qualifiable materials and equipment, avoiding some cost premiums associated with more difficult operating conditions. As gas prices and CO2 values increase, the potential value of a small nuclear reactor with advanced safety characteristics increases dramatically. Because of its smaller scale, the 400-500 MWt PBMR offers the economic advantages of onsite thermal integration (steam, hot water and desalination co-production) and of providing onsite power at cost versus at retail industrial rates, avoiding transmission and distribution costs. Advanced safety characteristics of the PBMR support the location of plants adjacent to steam users, district energy systems, desalination plants, and other large commercial and industrial facilities. Additional benefits include price stability, long term security of energy supply and substantial CO2 reductions. Target markets include existing sites using gas-fired boilers and cogeneration units, new projects such as refinery and petrochemical expansions, and coal-to-liquids projects where steam and power represent major burdens on fuel use and CO2 emissions. Lead times associated with the nuclear licensing

  5. The solenoidal transport option: IFE drivers, near term research facilities, and beam dynamics

    International Nuclear Information System (INIS)

    Lee, E.P.; Briggs, R.J.

    1997-09-01

    Solenoidal magnets have been used as the beam transport system in all the high current electron induction accelerators that have been built in the past several decades. They have also been considered for the front end transport system for heavy ion accelerators for Inertial Fusion Energy (IFE) drivers, but this option has received very little attention in recent years. The analysis reported here was stimulated mainly by the recent effort to define an affordable "Integrated Research Experiment" (IRE) that can meet the near term needs of the IFE program. The 1996 FESAC IFE review panel agreed that an integrated experiment is needed to fully resolve IFE heavy ion driver science and technology issues; specifically, "the basic beam dynamics issues in the accelerator, the final focusing and transport issues in a reactor-relevant beam parameter regime, and the target heating phenomenology". The development of concepts that can meet these technical objectives and still stay within the severe cost constraints all new fusion proposals will encounter is a formidable challenge. Solenoidal transport has a very favorable scaling as the particle mass is decreased (the main reason why it is preferred for electrons in the region below 50 MeV). This was recognized in a recent conceptual study of high intensity induction linac-based proton accelerators for Accelerator Driven Transmutation Technologies, where solenoidal transport was chosen for the front end. Reducing the ion mass is an obvious scaling to exploit in an IRE design, since the output beam voltage will necessarily be much lower than that of a full scale driver, so solenoids should certainly be considered as one option for this experiment as well

  6. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    Science.gov (United States)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
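
    The sketch below reproduces the study's comparison pattern, an 80/20 derivation/validation split over several candidate regressors, on synthetic stand-in covariates; it is not the actual pharmacogenetic dataset or the full list of nine models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# Dose-prediction model comparison sketch on synthetic covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(1045, 8))                               # covariates (assumed)
y = 3 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 1045)   # stand-in stable dose

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("MLR", LinearRegression()),
                    ("RT", DecisionTreeRegressor(max_depth=4)),
                    ("RFR", RandomForestRegressor(n_estimators=200))]:
    model.fit(X_dev, y_dev)
    print(name, "validation R^2:", round(model.score(X_val, y_val), 3))
```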

  7. A general electromagnetic excitation model for electrical machines considering the magnetic saturation and rub impact

    Science.gov (United States)

    Xu, Xueping; Han, Qinkai; Chu, Fulei

    2018-03-01

    The electromagnetic vibration of electrical machines with an eccentric rotor has been extensively investigated. However, magnetic saturation has often been neglected. Moreover, rub impact between the rotor and stator is inevitable when the amplitude of the rotor vibration exceeds the air-gap. This paper aims to propose a general electromagnetic excitation model for electrical machines. First, a general model which takes magnetic saturation and rub impact into consideration is proposed and validated by the finite element method and a reference. The dynamic equations of a Jeffcott rotor system with electromagnetic excitation and mass imbalance are presented. Then, the effects of pole-pair number and rubbing parameters on vibration amplitude are studied, and approaches for restraining the amplitude are put forward. Finally, the influences of mass eccentricity, resultant magnetomotive force (MMF), stiffness coefficient, damping coefficient, contact stiffness, and friction coefficient on the stability of the rotor system are investigated through Floquet theory. The amplitude jumping phenomenon is observed in a synchronous generator for different pole-pair numbers. Changes in the design parameters can alter the stability states of the rotor system, and the range of parameter values forms the zone of stability, which provides helpful guidance for the design and application of electrical machines.

  8. Language Model Adaptation Using Machine-Translated Text for Resource-Deficient Languages

    Directory of Open Access Journals (Sweden)

    Sadaoki Furui

    2009-01-01

    Full Text Available Text corpus size is an important issue when building a language model (LM). This is a particularly important issue for languages where little data is available. This paper introduces an LM adaptation technique to improve an LM built using a small amount of task-dependent text, with the help of a machine-translated text corpus. Icelandic speech recognition experiments were performed using data machine translated (MT) from English to Icelandic on a word-by-word and sentence-by-sentence basis. LM interpolation using the baseline LM and an LM built from either word-by-word or sentence-by-sentence translated text significantly reduced the word error rate when the manually obtained utterances used for the baseline were very sparse.
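
    A toy version of the interpolation step is sketched below with unigram models: one model estimated from scarce task-dependent text and one from machine-translated text, mixed with a weight lambda. Real systems interpolate higher-order n-gram models and tune lambda on held-out data; the sentences here are invented.

```python
from collections import Counter

# Toy LM interpolation sketch: mix a task-dependent unigram model with one
# estimated on machine-translated text.
task_text = "open the door close the door".split()
mt_text = "please open a window and close a door".split()

def unigram(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return lambda w: counts[w] / total if total else 0.0

p_task, p_mt = unigram(task_text), unigram(mt_text)
lam = 0.7                                    # interpolation weight (assumed)

def p_interp(w):
    # linear interpolation: words unseen in the task text still get mass
    return lam * p_task(w) + (1 - lam) * p_mt(w)

print(p_task("window"), p_mt("window"), p_interp("window"))
```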

  9. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Science.gov (United States)

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  10. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  11. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    Science.gov (United States)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not negatively impacting, and potentially benefiting, ATC. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper discusses the approach to minimizing TASAR's implementation cost and accelerating its readiness for near-term implementation.

  12. Modeling and simulation of control system for electron beam machine (EBM) using programmable automation controller (PAC)

    International Nuclear Information System (INIS)

    Leo Kwee Wah; Lojius Lombigit; Abu Bakar Mhd Ghazali; Muhamad Zahidee Taat; Ayub Mohamed; Chong Foh Yoong

    2006-01-01

    An EBM electronic model is designed to simulate the control system of the Nissin EBM located at Block 43 of the MINT complex, Jalan Dengkil, with a maximum output of 3 MeV, 30 mA, using a programmable automation controller (PAC). This model operates like a real EBM system, where all the start-up, interlocking, and stopping procedures are fully followed. It also involves formulating mathematical models that relate certain outputs to the input parameters using data from actual operation of the EB machine. The simulation involves a PAC system consisting of digital and analogue input/output modules. The program code is written using LabVIEW software (real-time version) on a PC and then downloaded into the PAC's stand-alone memory. All 23 interlocking signals required by the EB machine are manually controlled by mechanical switches and represented by LEDs. The EB parameters are manually controlled by potentiometers and displayed on analogue and digital meters. All these signals are then interfaced to the PC via Wi-Fi wireless communication built into the PAC controller. The program is developed in accordance with the specifications and requirements of the original EB system and displays them on the panel of the model and on the PC monitor. All possible faults arising from human error and from hardware and software malfunctions, including worst-case conditions, will be tested, evaluated, and corrected. We hope that the performance of the model complies with the requirements for operating the EB machine. It is also hoped that this electronic model can replace the original PC interfacing currently utilized in the Nissin EBM in the near future. The system can also be used to study fault tolerance analysis and automatic reconfiguration for advanced control of the EB system. (Author)

  13. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models

    Directory of Open Access Journals (Sweden)

    Lucky eMehra

    2016-03-01

    Full Text Available Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms, namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early

  14. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    Science.gov (United States)

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of
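
    A minimal sketch of the best-performing approach, a random forest over pre-planting predictors queried for variable importance, is given below; the predictor encodings and disease labels are synthetic assumptions, though the case count mirrors the study's 431.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Random forest risk-prediction sketch with variable importance ranking.
rng = np.random.default_rng(0)
names = ["latitude", "longitude", "wheat_residue", "cultivar_resistance",
         "previous_crop", "seeding_rate", "seed_treatment", "tillage_type"]
X = rng.normal(size=(431, len(names)))         # 431 disease cases, as in the study
y = (X[:, 0] + X[:, 2] + rng.normal(0, 1, 431)) > 0.5   # high/low SNB severity

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print("OOB accuracy:", round(rf.oob_score_, 2))
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")             # most important predictors first
```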

  15. A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping

    Directory of Open Access Journals (Sweden)

    Wang Yan

    2014-01-01

    Full Text Available The main challenge that marine heterogeneous data integration faces is accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multi-factor quantitative judgment. Finally, a data mapping comparison experiment on East China Sea observing data confirms the effectiveness of the model and shows the multianalyzer's clear improvement in mapping error rate.

  16. A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping

    Science.gov (United States)

    Yan, Wang; Jiajin, Le; Yun, Zhang

    2014-01-01

    The main challenge that marine heterogeneous data integration faces is accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multi-factor quantitative judgment. Finally, a data mapping comparison experiment on East China Sea observing data confirms the effectiveness of the model and shows the multianalyzer's clear improvement in mapping error rate. PMID:25250372

  17. MODELS OF FATIGUE LIFE CURVES IN FATIGUE LIFE CALCULATIONS OF MACHINE ELEMENTS – EXAMPLES OF RESEARCH

    Directory of Open Access Journals (Sweden)

    Grzegorz SZALA

    2014-03-01

    Full Text Available This paper attempts to analyse models of fatigue life curves applicable to fatigue life calculations for machine elements. The analysis was limited to fatigue life curves in the stress approach, covering loading spectra in which cyclic stresses from the ranges of low-cycle fatigue (LCF), high-cycle fatigue (HCF), the fatigue limit (FL), and giga-cycle fatigue (GCF) appear at the same time. Selected models of the analysed fatigue life curves are illustrated with test results for steel and aluminium alloys.

  18. Research on Modeling and Control of Regenerative Braking for Brushless DC Machines Driven Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jian-ping Wen

    2015-01-01

    Full Text Available In order to improve the energy utilization rate of a battery-powered electric vehicle (EV) using a brushless DC machine (BLDCM), the model of the braking current generated by regenerative braking and its control method are discussed. On the basis of the equivalent circuit of the BLDCM during the regenerative braking period, the mathematical model of the braking current is established. By using an extended state observer (ESO) to observe the actual braking current and the unknown disturbances of the regenerative braking system, an auto-disturbance rejection controller (ADRC) for controlling the braking current is developed. Experimental results show that the proposed method gives better recovery efficiency and is robust to disturbances.
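
    The following sketch shows a linear extended state observer of the kind used in ADRC: it tracks the braking current while simultaneously estimating the lumped, unknown disturbance acting on it. The plant, gains, and disturbance signal are illustrative assumptions, not the identified BLDCM model.

```python
import numpy as np

# Linear extended state observer (ESO) sketch for di/dt = f(t) + b0*u.
b0 = 2.0                      # nominal input gain (assumed)
beta1, beta2 = 200.0, 10000.0 # observer gains: double pole at s = -100
dt = 1e-4

z1, z2 = 0.0, 0.0             # estimated current, estimated total disturbance
i = 0.0                       # true braking current (simulated plant)

for k in range(5000):
    t = k * dt
    u = 1.0                                  # constant control input
    disturbance = 3.0 * np.sin(5 * t)        # unknown disturbance to estimate
    i += (disturbance + b0 * u) * dt         # plant: di/dt = f + b0*u

    e = z1 - i                               # observer correction term
    z1 += (z2 + b0 * u - beta1 * e) * dt     # current estimate
    z2 += (-beta2 * e) * dt                  # lumped disturbance estimate

print("disturbance estimate vs truth:", z2, 3.0 * np.sin(5 * k * dt))
```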

  19. Prediction of cognitive and motor development in preterm children using exhaustive feature selection and cross-validation of near-term white matter microstructure.

    Science.gov (United States)

    Schadl, Kornél; Vassar, Rachel; Cahill-Rowley, Katelyn; Yeom, Kristin W; Stevenson, David K; Rose, Jessica

    2018-01-01

    Advanced neuroimaging and computational methods offer opportunities for more accurate prognosis. We hypothesized that near-term regional white matter (WM) microstructure, assessed on diffusion tensor imaging (DTI) using exhaustive feature selection with cross-validation, would predict neurodevelopment in preterm children. Near-term MRI and DTI obtained at 36.6 ± 1.8 weeks postmenstrual age in 66 very-low-birth-weight preterm neonates were assessed. 60/66 had follow-up neurodevelopmental evaluation with the Bayley Scales of Infant-Toddler Development, 3rd edition (BSID-III) at 18-22 months. Linear models with exhaustive feature selection and leave-one-out cross-validation computed from DTI identified sets of three brain regions most predictive of cognitive and motor function; logistic regression models were computed to classify high-risk infants scoring one standard deviation below the mean. Cognitive impairment was predicted (100% sensitivity, 100% specificity; AUC = 1) by near-term right middle-temporal gyrus MD, right cingulate-cingulum MD, and left caudate MD. Motor impairment was predicted (90% sensitivity, 86% specificity; AUC = 0.912) by left precuneus FA, right superior occipital gyrus MD, and right hippocampus FA. Cognitive score variance was explained (29.6%, cross-validated R^2 = 0.296) by left posterior-limb-of-internal-capsule MD, genu RD, and right fusiform gyrus AD. Motor score variance was explained (31.7%, cross-validated R^2 = 0.317) by left posterior-limb-of-internal-capsule MD, right parahippocampal gyrus AD, and right middle-temporal gyrus AD. Searching the large DTI feature space more accurately identified neonatal neuroimaging correlates of neurodevelopment.
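
    The sketch below shows the core search strategy, exhaustive evaluation of every three-feature subset under leave-one-out cross-validation, on synthetic stand-in data; the classifier choice and candidate feature count are assumptions, not the exact pipeline of the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

# Exhaustive three-feature subset search with leave-one-out cross-validation.
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 10))           # 66 neonates, 10 candidate WM regions
y = (X[:, 2] - X[:, 5] + rng.normal(0, 0.8, 66)) > 0   # impaired vs not

best_auc, best_subset = 0.0, None
for subset in combinations(range(X.shape[1]), 3):        # every 3-feature model
    pred = cross_val_predict(LogisticRegression(max_iter=500), X[:, subset], y,
                             cv=LeaveOneOut(), method="predict_proba")[:, 1]
    auc = roc_auc_score(y, pred)
    if auc > best_auc:
        best_auc, best_subset = auc, subset

print("best AUC:", round(best_auc, 3), "regions:", best_subset)
```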

  20. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    Science.gov (United States)

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.

  1. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated by applying machine learning methods to raw geo-data. More precisely, the metric for measuring complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary for machine learning algorithms to reproduce the actual 3D model, at a given precision, without error.

  2. A hybrid prognostic model for multistep ahead prediction of machine condition

    Science.gov (United States)

    Roulias, D.; Loutas, T. H.; Kostopoulos, V.

    2012-05-01

    Prognostics are the future trend in condition-based maintenance. In the current framework, a data-driven prognostic model is developed. The typical procedure for developing such a model comprises a) the selection of features which correlate well with the gradual degradation of the machine and b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from a healthy state until total breakdown, over several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. The time series is then split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows an ability to generalise for application to similar mechanical assets.
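    A compact sketch of one region model of this kind: lagged values of the condition indicator feed an FFANN, which is then recursed for multistep-ahead prediction. The synthetic series, window length and network size are assumptions:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        series = np.cumsum(rng.normal(0.01, 0.1, 1000))  # stand-in degradation indicator

        LAGS = 10
        X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
        y = series[LAGS:]
        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)

        window = list(series[-LAGS:])       # recursive multistep-ahead forecast
        for _ in range(5):
            window.append(float(net.predict([window[-LAGS:]])[0]))
        print(window[-5:])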

  3. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available Thermostability issues with protein point mutations are a common occurrence in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that contains experimentally measured thermostability data for thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information on the point mutations, and amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, and K nearest neighbor) and partial least squares regression were used to build the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to the accuracy of the prediction models.

  4. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    Science.gov (United States)

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes field tests on a driving simulator carried out to validate the algorithms and correlations of dynamic parameters, specifically driving task demand and driver distraction, that are able to predict drivers' intentions. These parameters belong to the driver's model developed by the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data were collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, a description of the task demand and distraction modelling, and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction in particular, promising results (low prediction errors) were obtained by adopting an artificial neural network.

  5. Predicting Freeway Work Zone Delays and Costs with a Hybrid Machine-Learning Model

    Directory of Open Access Journals (Sweden)

    Bo Du

    2017-01-01

    Full Text Available A hybrid machine-learning model, integrating an artificial neural network (ANN) and a support vector machine (SVM) model, is developed to predict spatiotemporal delays, subject to road geometry, number of lane closures, and work zone duration in different periods of the day and days of the week. The model is very user friendly, requiring minimal input from users. With it, the delays caused by a work zone at any location on a New Jersey freeway can be predicted. To this end, large amounts of data from different sources were collected to establish the relationship between the model inputs and outputs. A comparative analysis was conducted, and the results indicate that the proposed model outperforms others in terms of the lowest root mean square error (RMSE). The proposed hybrid model can be used to calculate contractor penalties in terms of cost overruns as well as incentive reward schedules in case of early work completion. Additionally, it can assist work zone planners in determining the best start and end times of a work zone for developing and evaluating traffic mitigation and management plans.
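    One plausible integration scheme, sketched below, averages the ANN and SVM predictions and scores the blend by RMSE; the paper's actual hybridization and inputs may differ, and the data here are synthetic stand-ins:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 4))   # stand-ins: geometry, closures, duration, period
        y = 30 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 2, 500)  # synthetic delay (min)

        ann = make_pipeline(StandardScaler(), MLPRegressor(max_iter=3000, random_state=0)).fit(X, y)
        svm = make_pipeline(StandardScaler(), SVR()).fit(X, y)
        hybrid = 0.5 * (ann.predict(X) + svm.predict(X))   # simple averaging scheme
        print(np.sqrt(np.mean((hybrid - y) ** 2)))         # RMSE, the comparison metric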

  6. Exploring the influence of constitutive models and associated parameters for the orthogonal machining of Ti6Al4V

    Science.gov (United States)

    Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.

    2018-04-01

    Ti6Al4V is known as a difficult-to-cut material due to inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nevertheless, Ti6Al4V is utilized in industrial sectors such as aeronautics, energy generation, petrochemicals and biomedicine. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance for Ti6Al4V, finite element based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates material constitutive models, namely the Power Law (PL) and Johnson-Cook (JC) material models, to mimic the mechanical behaviour of Ti6Al4V. The study investigates cutting temperatures, cutting forces, stresses, and plastic strains with respect to different PL and JC material models and their associated parameters. In addition, the numerical study integrates different cutting tool rake angles into the machining simulations. The simulated results will be beneficial for drawing conclusions to improve the overall machining performance of Ti6Al4V.
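    For reference, the Johnson-Cook model expresses flow stress as the product of a strain-hardening, a strain-rate and a thermal-softening term: sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m). The sketch below evaluates this standard form; the constants are illustrative Ti6Al4V-like values from the literature, not the parameter sets compared in the paper:

        import math

        def jc_flow_stress(strain, strain_rate, T,
                           A=862e6, B=331e6, n=0.34, C=0.012, m=0.8,
                           ref_rate=1.0, T_room=293.0, T_melt=1933.0):
            # Homologous temperature T* = (T - T_room) / (T_melt - T_room)
            T_star = (T - T_room) / (T_melt - T_room)
            return ((A + B * strain ** n)
                    * (1 + C * math.log(strain_rate / ref_rate))
                    * (1 - T_star ** m))

        print(jc_flow_stress(0.2, 1.0e3, 600.0) / 1e6, "MPa")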

  7. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. To do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models were successfully employed in a language understanding task, as shown in an additional series of experiments.

  8. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  9. Mechatronics in the mining industry. Modelling of underground machines; Mechatronik im Bergbau. Modellbildung von Untertage-Maschinen

    Energy Technology Data Exchange (ETDEWEB)

    Bruckmann, Tobias; Brandt, Thorsten [mercatronics GmbH, Duisburg (Germany)

    2009-12-17

    The development of new functions for machines operating underground often requires a prolonged and cost-intensive test phase. The development of complex functions in particular, such as those found in operator assistance systems, is highly iterative. If a corresponding prototype is required for each iteration step of the development, the development costs will, of course, increase rapidly. Virtual prototypes and simulators based on mathematical models of the machine offer an alternative in this case. The article describes the basic principles of modelling the kinematics of underground machines. (orig.)

  10. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2011-07-27

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach for achieving good accuracy in reasonable time. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  11. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in the CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.

  12. One- and two-dimensional Stirling machine simulation using experimentally generated reversing flow turbulence models

    International Nuclear Information System (INIS)

    Goldberg, L.F.

    1990-08-01

    The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR and, second, to adapt a two-dimensional fluid mechanics code, which had been developed for simulating high Rayleigh number buoyant cavity flows, to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge

  13. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ~60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
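    The algorithm comparison can be sketched as follows; a synthetic classification set stands in for the ~60 standardized AR features, and the true skill statistic is computed from the confusion matrix as hit rate minus false-alarm rate:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.ensemble import ExtraTreesClassifier

        def tss(y_true, y_pred):
            # True skill statistic: TP/(TP+FN) - FP/(FP+TN)
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            return tp / (tp + fn) - fp / (fp + tn)

        X, y = make_classification(n_samples=2000, n_features=60, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, shuffle=True, random_state=0)
        for clf in (SVC(), KNeighborsClassifier(), ExtraTreesClassifier(random_state=0)):
            print(type(clf).__name__, round(tss(yte, clf.fit(Xtr, ytr).predict(Xte)), 3))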

  14. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M. [Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo 184-8795 (Japan); Sugiura, K., E-mail: nishizuka.naoto@nict.go.jp [Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (Japan)

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  15. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    International Nuclear Information System (INIS)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.; Sugiura, K.

    2017-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  16. Near-term markets for PEM fuel cell power modules: industrial vehicles and hydrogen recovery

    International Nuclear Information System (INIS)

    Chintawar, P.S.; Block, G.

    2004-01-01

    'Full text:' Nuvera Fuel Cells, Inc. is a global leader in the development and advancement of multifuel processing and fuel cell technology. With offices located in Italy and the USA, Nuvera is committed to advancing the commercialization of hydrogen fuel cell power modules for industrial vehicles, equipment and stationary applications by 2006, natural gas fuel cell power systems for cogeneration applications by 2007, and on-board gasoline fuel processors and fuel cell stacks for automotive applications by 2010. Nuvera Fuel Cells Europe is ISO 9001:2000 certified for 'Research, Development, Design, Production and Servicing of Fuel Cell Stacks and Fuel Cell Systems.' In the chemical industry, one of the largest operating expenses today is the cost of electricity. For example, caustic soda and chlorine are produced today using industrial membrane electrolysis, which is an energy-intensive process. Production of 1 metric ton of caustic soda consumes 2.5 MWh of energy. However, about 20% of the electricity consumed can be recovered by converting the hydrogen byproduct of the caustic soda production process into electricity via PEM fuel cells. The accessible market is a function of the economic value of the hydrogen, whether it is flared, used as fuel, or used as a chemical. Responding to this market need, we are currently developing large hydrogen fuel cell power modules, 'Forza', that use excess hydrogen to produce electricity, representing a practical economic alternative for reducing the net electricity cost. Due for commercial launch in 2006, Forza is a low-pressure, steady-state, base-load power generation solution that will operate at high efficiency and 100% capacity over a 24-hour period. We believe this premise also holds for chemical and electrochemical plants and for companies that convert hydrogen to electricity using renewable sources like windmills or hydropower. The second near-term market that Nuvera is developing utilizes a 5.5 kW hydrogen-fueled power module 'H2e

  17. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time-domain and time-frequency-domain methods. However, the extracted original features are still high-dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and Cao's method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.
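    A minimal sketch of the SVR prediction stage on a synthetic degradation indicator; the embedding dimension d stands in for the output of Cao's method, and the Markov-model correction that follows in the paper is omitted:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        series = np.cumsum(np.abs(rng.normal(0.01, 0.05, 600)))  # stand-in fused feature

        d = 5   # embedding dimension (determined with Cao's method in the paper)
        X = np.array([series[i:i + d] for i in range(len(series) - d)])
        y = series[d:]
        model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X[:-50], y[:-50])
        print(model.predict(X[-50:])[:5])   # predicted continuation of the degradation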

  18. Hemodynamic modelling of BOLD fMRI - A machine learning approach

    DEFF Research Database (Denmark)

    Jacobsen, Danjal Jakup

    2007-01-01

    This Ph.D. thesis concerns the application of machine learning methods to hemodynamic models for BOLD fMRI data. Several such models have been proposed by different researchers, and they have in common a basis in physiological knowledge of the hemodynamic processes involved in the generation of the BOLD signal. The BOLD signal is modelled as a non-linear function of underlying, hidden (non-measurable) hemodynamic state variables. The focus of this thesis work has been to develop methods for learning the parameters of such models, both in their traditional formulation, and in a state space formulation. In the latter, noise enters at the level of the hidden states, as well as in the BOLD measurements themselves. A framework has been developed to allow approximate posterior distributions of model parameters to be learned from real fMRI data. This is accomplished with Markov chain Monte Carlo

  19. A Hybrid dasymetric and machine learning approach to high-resolution residential electricity consumption modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; McManamay, Ryan A [ORNL

    2017-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirements and statistical framework ensure that the model is applicable to a wide range of regions and considers errors in input data sources.

  20. A novel improved fuzzy support vector machine based stock price trend forecast model

    OpenAIRE

    Wang, Shuheng; Li, Guohao; Bao, Yifan

    2018-01-01

    Application of a fuzzy support vector machine to stock price forecasting. The support vector machine is a machine learning method proposed in the 1990s that handles classification and regression problems very successfully. Due to its excellent learning performance, the technology has become a hot research topic in the field of machine learning, and it has been successfully applied in many fields. However, as a new technology, there are many limitations to support...

  1. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    Science.gov (United States)

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models have been built using multiple learning algorithms, including support vector machine and random forest. The models were built on public Ames mutagenicity data, and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretations revealed close links with understood mechanisms of Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.

  2. Modeling and Design of a Nonlinear Temperature-Humidity Controller Used in a Mushroom-Drying Machine

    Science.gov (United States)

    Wu, Xiuhua; Luo, Haiyan; Shi, Minhui

    The drying process of many kinds of farm produce in a closed room, such as a mushroom-drying machine, is generally a complicated nonlinear and time-delay process, in which temperature and humidity are the main controlled variables. Accurate control of temperature and humidity is a long-standing problem, and building an accurate mathematical model of their variation is both difficult and important. This paper puts forward a mathematical model derived from analysis of the actual working conditions. From the model it can be seen that the changes of temperature and humidity in the drying machine are not simply linear but constitute an affine nonlinear process. Controlling the process exactly is the key factor influencing the quality of the dried mushrooms. In this paper, differential geometry theory and methods are used to analyze and solve the model of these small-environment variables, and finally a nonlinear controller satisfying the optimal quadratic performance index is designed, which proves more feasible and practical than conventional control.

  3. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2015-01-01

    Full Text Available In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model’s input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in the CATIA software and used to simulate and evaluate the operators’ operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.

  4. A Genetic Algorithm Based Support Vector Machine Model for Blood-Brain Barrier Penetration Prediction

    Directory of Open Access Journals (Sweden)

    Daqing Zhang

    2015-01-01

    Full Text Available The blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method widely used in QSAR studies. For a successful SVM model, the kernel parameters of the SVM and feature subset selection are the two most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize the kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both the SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among these properties, lipophilicity enhances BBB penetration while all the others are negatively correlated with it.
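    A compact, elitist GA of the kind described can be sketched as below: each individual encodes a feature mask plus log-scaled C and gamma for an SVR, with cross-validated R^2 as fitness. Population size, mutation rates and the synthetic data are assumptions, not the paper's settings:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_desc = 20
        X = rng.normal(size=(120, n_desc))                     # stand-in descriptors
        y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.3, 120)  # stand-in log BB

        def fitness(ind):
            mask = ind[:n_desc] > 0.5                # feature-subset genes
            if not mask.any():
                return -np.inf
            svr = SVR(C=10.0 ** ind[-2], gamma=10.0 ** ind[-1])
            return cross_val_score(svr, X[:, mask], y, cv=5, scoring="r2").mean()

        def mutate(p):
            q = p.copy()
            flip = rng.random(n_desc) < 0.05         # flip ~5% of feature bits
            q[:n_desc] = np.where(flip, 1.0 - q[:n_desc], q[:n_desc])
            q[-2:] += rng.normal(0, 0.2, 2)          # jitter log10(C), log10(gamma)
            return q

        pop = [np.r_[rng.integers(0, 2, n_desc).astype(float), rng.uniform(-2, 2, 2)]
               for _ in range(21)]
        for _ in range(15):                          # keep the elite, mutate copies
            pop.sort(key=fitness, reverse=True)
            pop = pop[:7] + [mutate(p) for p in pop[:7] for _ in range(2)]
        print(fitness(pop[0]))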

  5. Development of Predictive QSAR Models of 4-Thiazolidinones Antitrypanosomal Activity using Modern Machine Learning Algorithms.

    Science.gov (United States)

    Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman

    2017-11-14

    This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms (Random Forest regression, stochastic gradient boosting, multivariate adaptive regression splines and Gaussian process regression) has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian process regression are comparable and outperform the other studied methods. Preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian process regression algorithms have good predictive ability, as proved by external evaluation of the test set, with corresponding Q^2_ext = 0.812 and Q^2_ext = 0.830. The obtained models can be used further for in silico screening of virtual libraries in the same chemical domain in order to find new antitrypanosomal agents. Thorough analysis of the descriptors' influence in the QSAR models and interpretation of their chemical meaning allow a number of structure-activity relationships to be highlighted. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching but short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Improved equivalent magnetic network modeling for analyzing working points of PMs in interior permanent magnet machine

    Science.gov (United States)

    Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna

    2018-05-01

    As is well known, the armature current will lead the back electromotive force (back-EMF) under load conditions in the interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization in the PMs. To estimate the working points of the PMs more accurately and take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes a more precise model of the flux path in the rotor pole shoe possible. The working point of each PM under each magnetic pole can be calculated accurately by the established improved equivalent magnetic network model. The calculated results are compared with those obtained by FEM, and the effects of the d-axis and q-axis components of the armature current, the air-gap length and the flux barrier size on the working points of the PMs are analyzed with the improved equivalent magnetic network model.

  7. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    Science.gov (United States)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic M7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  8. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
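    The pipeline reduces to a few lines: each XY spin angle is encoded as a (cos, sin) pair before PCA (an encoding assumed here), and the leading components are then inspected as functions of temperature. The configurations below are random placeholders rather than actual Monte Carlo samples:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        thetas = rng.uniform(0, 2 * np.pi, size=(1000, 144))  # stand-in spin configs
        X = np.hstack([np.cos(thetas), np.sin(thetas)])       # spins as 2D unit vectors
        pca = PCA(n_components=4).fit(X)
        print(pca.explained_variance_ratio_)  # leading weights signal emerging order
        proj = pca.transform(X)               # plot vs. temperature to locate transitions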

  9. Application of heuristic and machine-learning approach to engine model calibration

    Science.gov (United States)

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.

  10. A Model-based Analysis of Impulsivity Using a Slot-Machine Gambling Paradigm

    Directory of Open Access Journals (Sweden)

    Saee ePaliwal

    2014-07-01

    Full Text Available Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling. Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases, machine switches, casino switches and double-ups. Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to impulsivity. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future assessments of pathological gambling.

  11. Research on Dynamic Modeling and Application of Kinetic Contact Interface in Machine Tool

    Directory of Open Access Journals (Sweden)

    Dan Xu

    2016-01-01

    Full Text Available A method combining theoretical analysis and experiment is presented to obtain the equivalent dynamic parameters of a linear guideway through four detailed steps. Covering statics analysis, vibration model analysis, dynamic experiment, and parameter identification, the dynamic modeling of the linear guideway is studied synthetically. Based on contact mechanics and elastic mechanics, the mathematical vibration model and the expressions for the basic mode frequency are deduced. Then, the equivalent stiffness and damping of the guideway are obtained using a single-degree-of-freedom mode fitting method. Moreover, the investigation is applied to a gantry-type machining center; by comparing the simulation model with experimental results, both its applicability and correctness are validated.

  12. Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process

    Science.gov (United States)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted, with eight sequential runs replicated three times. The TSK fuzzy predictive model was found to achieve a 99% accuracy rate, which suggests that the model is a suitable and practical method for the non-linear laser lathing process.

  13. Modeling and prediction of human word search behavior in interactive machine translation

    Science.gov (United States)

    Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na

    2017-12-01

    As a computer-aided translation method, Interactive Machine Translation technology reduces the repetitive and mechanical operations of manual translation through a variety of methods, improving translation efficiency, and plays an important role in practical translation work. In this paper, we take users' frequent word-searching behavior during the translation process as the research object and transform it into a translation selection problem under the current translation. The paper presents a prediction model for word-searching behavior that comprehensively utilizes an alignment model, a translation model and a language model. It achieves highly accurate prediction of word-searching behavior and reduces the switching between mouse and keyboard operations in the users' translation process.

  14. The Model of Information Support for Management of Investment Attractiveness of Machine-Building Enterprises

    Directory of Open Access Journals (Sweden)

    Chernetska Olga V.

    2016-11-01

    Full Text Available The article discloses the content of the definition of “information support” and identifies basic approaches to interpreting this economic category. The main purpose of information support for management of enterprise investment attractiveness is determined. The key components of information support for management of enterprise investment attractiveness are studied. The main types of automated information systems for management of the investment attractiveness of enterprises are identified and characterized. The basic computer programs for assessing the level of investment attractiveness of enterprises are considered. A model of information support for management of the investment attractiveness of machine-building enterprises is developed.

  15. Modeling of Residual Stress and Machining Distortion in Aerospace Components (PREPRINT)

    Science.gov (United States)

    2010-03-01

    [Abstract unavailable; only reference fragments survive in this record, including: John Gayda, “The Effect of Heat Treatment on Residual Stress and Machining Distortions in Advanced Nickel Base Disk Alloys,” NASA/TM-2001-210717; Wei-Tsu Wu, Guoji Li, Juipeng Tang, Shesh Srivatsa, Ravi Shankar, Ron Wallis, Padu Ramasundaram and John Gayda, “A process modeling system for heat…,” Materials Processing Technology 98 (2000) 189-195; and M.A. Rist, S. Tin, B.A. Roder, J.A. James, and M.R. Daymond, “Residual Stresses in a…”]

  16. Research on Error Modelling and Identification of 3 Axis NC Machine Tools Based on Cross Grid Encoder Measurement

    International Nuclear Information System (INIS)

    Du, Z C; Lv, C F; Hong, M S

    2006-01-01

    A new error modelling and identification method based on the cross grid encoder is proposed in this paper. In general, there are 21 error components in the geometric error of a 3-axis NC machine tool. However, according to our theoretical analysis, the squareness error among the different guideways affects not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is the comprehensive result of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution-singularity shortcoming of traditional error component identification methods, a new multi-step identification method for the error components using cross grid encoder measurement is proposed, based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by the least squares method (LSM) when the NC machine tool moves linearly in the three orthogonal planes: the XOY, XOZ and YOZ planes. Secondly, the circular error tracks are measured when the NC machine tool moves circularly in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, from which the 9 rotational errors can be identified by LSM. Finally, experimental validation of the above modelling theory and identification method is carried out on the 3-axis CNC vertical machining centre Cincinnati 750 Arrow. All 21 error components have been successfully measured by this method. The research shows that the multi-step modelling and identification method is very suitable for 'on machine measurement'

  17. Predictive Models for Different Roughness Parameters During Machining Process of Peek Composites Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Mata-Cabrera Francisco

    2013-10-01

    Full Text Available Polyetheretherketone (PEEK) composite belongs to a group of high-performance thermoplastic polymers and is widely used in structural components. To improve the mechanical and tribological properties, short fibers are added to the material as reinforcement. Due to its functional properties and potential applications, it is important to investigate the machinability of non-reinforced PEEK (PEEK), PEEK reinforced with 30% carbon fibers (PEEK CF30), and PEEK reinforced with 30% glass fibers (PEEK GF30) to determine the optimal conditions for the manufacture of parts. The present study establishes the relationship between the cutting conditions (cutting speed and feed rate) and the roughness parameters (Ra, Rt, Rq, Rp) by developing second-order mathematical models. The experiments were planned as per a full factorial design of experiments, and an analysis of variance was performed to check the adequacy of the models. The results state the adequacy of the derived models for predicting the roughness parameters within the ranges of parameters investigated during the experiments. The experimental results show that the most influential cutting parameter is the feed rate, and furthermore that glass fiber reinforcement produces worse machinability.
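    Such a second-order response-surface fit reduces to quadratic polynomial regression, as in the sketch below; the design points and the synthetic Ra response are illustrative, not the measured data:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        v = rng.uniform(100, 300, 24)    # cutting speed (m/min), stand-in design
        f = rng.uniform(0.05, 0.3, 24)   # feed rate (mm/rev)
        Ra = 0.5 + 8 * f + 2 * f ** 2 - 0.001 * v + rng.normal(0, 0.05, 24)  # synthetic

        X = np.column_stack([v, f])
        rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, Ra)
        print(rsm.predict([[200.0, 0.1]]))  # predicted Ra at v = 200, f = 0.1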

  18. 3D Magnetic field modeling of a new superconducting synchronous machine using reluctance network method

    Science.gov (United States)

    Kelouaz, Moussa; Ouazir, Youcef; Hadjout, Larbi; Mezani, Smail; Lubin, Thiery; Berger, Kévin; Lévêque, Jean

    2018-05-01

    In this paper a new superconducting inductor topology intended for a synchronous machine is presented. The studied machine has a standard 3-phase armature and a new kind of 2-pole inductor (claw-pole structure) excited by two coaxial superconducting coils. The air-gap spatial variation of the radial flux density is obtained by inserting a superconducting bulk, which deflects the magnetic field produced by the coils. The complex geometry of this inductor usually requires 3D finite elements (FEM) for its analysis. However, to avoid the long computational times inherent to 3D FEM, we propose in this work an alternative modeling approach, which uses a 3D meshed reluctance network. The results obtained with the developed model are compared to 3D FEM computations as well as to measurements carried out on a laboratory prototype. Finally, a 3D FEM study of the shielding properties of the superconducting screen demonstrates the suitability of using a diamagnetic-like model of the superconducting screen.

  19. Quantifying surgical complexity with machine learning: looking beyond patient factors to improve surgical models.

    Science.gov (United States)

    Van Esbroeck, Alexander; Rubinfeld, Ilan; Hall, Bruce; Syed, Zeeshan

    2014-11-01

    To investigate the use of machine learning to empirically determine the risk of individual surgical procedures and to improve surgical models with this information. American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) data from 2005 to 2009 were used to train support vector machine (SVM) classifiers to learn the relationship between textual constructs in current procedural terminology (CPT) descriptions and mortality, morbidity, Clavien 4 complications, and surgical-site infections (SSI) within 30 days of surgery. The procedural risk scores produced by the SVM classifiers were validated on data from 2010 in univariate and multivariate analyses. The procedural risk scores produced by the SVM classifiers achieved moderate-to-high levels of discrimination in univariate analyses (area under receiver operating characteristic curve: 0.871 for mortality, 0.789 for morbidity, 0.791 for SSI, 0.845 for Clavien 4 complications). Addition of these scores also substantially improved multivariate models comprising patient factors and previously proposed correlates of procedural risk (net reclassification improvement and integrated discrimination improvement: 0.54 and 0.001 for mortality, 0.46 and 0.011 for morbidity, 0.68 and 0.022 for SSI, 0.44 and 0.001 for Clavien 4 complications). Machine learning thus offers an empirical means of determining the risk of individual procedures. This information can be measured in an entirely data-driven manner and substantially improves multifactorial models to predict postoperative complications. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Reservoir Inflow Prediction under GCM Scenario Downscaled by Wavelet Transform and Support Vector Machine Hybrid Models

    Directory of Open Access Journals (Sweden)

    Gusfan Halik

    2015-01-01

    Full Text Available Climate change has significant impacts on changing precipitation patterns, causing variation in reservoir inflow. Nowadays, Indonesian hydrologists perform reservoir inflow prediction according to the technical guideline Pd-T-25-2004-A. This technical guideline does not consider climate variables directly, resulting in significant deviation from observed results. This research predicts reservoir inflow using statistical downscaling (SD) of General Circulation Model (GCM) outputs. The GCM outputs are obtained from the National Center for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP/NCAR Reanalysis). A newly proposed hybrid SD model named Wavelet Support Vector Machine (WSVM) was utilized. It is a combination of Multiscale Principal Components Analysis (MSPCA) and nonlinear Support Vector Machine regression. The model was validated at Sutami Reservoir, Indonesia. Training and testing were carried out using data from 1991–2008 and 2008–2012, respectively. The results showed that MSPCA produced better data extraction than PCA, and the WSVM generated better reservoir inflow predictions than the technical guideline. Moreover, this research also applied the WSVM to future reservoir inflow prediction based on GCM ECHAM5 and scenario SRES A1B.
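    A simplified stand-in for the WSVM idea, sketched below, builds multiscale features by zeroing all but one wavelet band and reconstructing, then regresses next-step inflow with SVR. The wavelet choice, decomposition level and synthetic series are assumptions, and the paper's MSPCA step is not reproduced:

        import numpy as np
        import pywt
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        precip = np.sin(np.linspace(0, 40, 480)) + rng.normal(0, 0.2, 480)  # stand-in predictor
        inflow = np.roll(precip, 1) * 50 + 100                              # stand-in inflow

        def wavelet_band(series, keep, wavelet="db4", level=3):
            # Reconstruct the series from a single decomposition band.
            coeffs = pywt.wavedec(series, wavelet, level=level)
            coeffs = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
            return pywt.waverec(coeffs, wavelet)[:len(series)]

        bands = np.column_stack([wavelet_band(precip, k) for k in range(4)])
        X, y = bands[:-1], inflow[1:]        # multiscale features -> next-step inflow
        model = SVR().fit(X[:400], y[:400])  # train on the earlier period
        print(model.predict(X[400:405]))     # predict for the later period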

  1. Modeling PM2.5 Urban Pollution Using Machine Learning and Selected Meteorological Parameters

    Directory of Open Access Journals (Sweden)

    Jan Kleine Deters

    2017-01-01

    Full Text Available Outdoor air pollution causes millions of premature deaths annually, mostly due to anthropogenic fine particulate matter (PM2.5). Quito, the capital city of Ecuador, is no exception in exceeding healthy levels of pollution. In addition to the impact of urbanization, motorization, and rapid population growth, particulate pollution is modulated by meteorological factors and geophysical characteristics, which complicate the use of the most advanced weather-forecast models. Thus, this paper proposes a machine learning approach, based on six years of meteorological and pollution data analyses, to predict the concentrations of PM2.5 from wind (speed and direction) and precipitation levels. The results of the classification model show a high reliability in separating high (>25 µg/m3) and low (<10 µg/m3) from moderate (10–25 µg/m3) concentrations of PM2.5. A regression analysis suggests a better prediction of PM2.5 when the climatic conditions become more extreme (strong winds or high levels of precipitation). The high correlation between estimated and real data for a time series analysis during the wet season confirms this finding. The study demonstrates that the use of statistical models based on machine learning is relevant to predict PM2.5 concentrations from meteorological data.
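
    The band classification described above can be sketched as follows. A random forest stands in for the paper's learners, and the weather and PM2.5 values are synthetic stand-ins for the Quito record; only the 10 and 25 µg/m3 cut points come from the abstract.

```python
# Sketch: classify PM2.5 into low/moderate/high bands from wind and rain.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wind_speed = rng.uniform(0, 10, 2000)
wind_dir = rng.uniform(0, 360, 2000)
precip = rng.exponential(2.0, 2000)
pm25 = 30 - 2.2 * wind_speed - 1.5 * precip + rng.normal(0, 3, 2000)

bands = np.digitize(pm25, [10, 25])      # 0: <10, 1: 10-25, 2: >25 ug/m3
X = np.column_stack([wind_speed, wind_dir, precip])
Xtr, Xte, ytr, yte = train_test_split(X, bands, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```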

  2. Modelling Water Stress in a Shiraz Vineyard Using Hyperspectral Imaging and Machine Learning

    Directory of Open Access Journals (Sweden)

    Kyle Loggenberg

    2018-01-01

    Full Text Available The detection of water stress in vineyards plays an integral role in the sustainability of high-quality grapes and the prevention of devastating crop losses. Hyperspectral remote sensing technologies combined with machine learning provide a practical means for modelling vineyard water stress. In this study, we applied two ensemble learners, i.e., random forest (RF) and extreme gradient boosting (XGBoost), for discriminating stressed and non-stressed Shiraz vines using terrestrial hyperspectral imaging. Additionally, we evaluated the utility of a spectral subset of wavebands, derived using RF mean decrease accuracy (MDA) and XGBoost gain. Our results show that both ensemble learners can effectively analyse the hyperspectral data. When using all wavebands (p = 176), RF produced a test accuracy of 83.3% (KHAT (kappa analysis) = 0.67), and XGBoost a test accuracy of 80.0% (KHAT = 0.60). Using the subset of wavebands (p = 18) produced slight increases in accuracy, ranging from 1.7% to 5.5%, for both RF and XGBoost. We further investigated the effect of smoothing the spectral data using the Savitzky-Golay filter. The results indicated that the Savitzky-Golay filter reduced model accuracies (by 0.7% to 3.3%). The results demonstrate the feasibility of terrestrial hyperspectral imagery and machine learning to create a semi-automated framework for vineyard water stress modelling.
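
    A minimal sketch of the waveband-subset idea appears below. Synthetic spectra stand in for the 176-band terrestrial imagery, and scikit-learn's Gini importance is used as a rough proxy for the MDA ranking named in the abstract.

```python
# Sketch: rank hyperspectral bands by RF importance, retrain on the top 18.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 176))            # 300 vines x 176 wavebands
y = (X[:, 40] + X[:, 120] + 0.5 * rng.standard_normal(300) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top18 = np.argsort(rf.feature_importances_)[-18:]     # proxy for MDA ranking
rf_sub = RandomForestClassifier(n_estimators=500, random_state=0)
rf_sub.fit(X[:, top18], y)                             # reduced-band model
print("selected wavebands:", sorted(top18.tolist()))
```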

  3. Retention of knowledge and experience from experts in near-term operating plants

    International Nuclear Information System (INIS)

    Jiang, H.

    2007-01-01

    Full text: Tianwan Nuclear Power Station (TNPS) will be put into commercial operation in May 2007. Right-sizing is under way to adapt the organization to this new stage, and TNPS faces the dilution of expertise that right-sizing brings. This condition is aggravated by an incipient training system and intense competition for technical experts in the nuclear field, driven by the very ambitious nuclear plant projects now thriving in China. This can compromise the capability to operate TNPS safely and economically. Personnel training undoubtedly plays a crucial role in knowledge management, especially in countries such as China whose professional education systems are still weak. Key knowledge and skills for operating nuclear power plants safely and reliably can be identified effectively by a personnel training system developed in a systematic way and properly implemented, and only sound and sufficient training can produce an adequate number of replacements. A well-developed IT platform can support information management in the era of information and the internet. Information should be collected in a systematic way instead of stacked on an ad hoc basis. The project database must be established in a well-organized way, and the information kept alive, so that usable data are not lost and remain readily accessible on the intranet and available to users; otherwise engineers take great pains to search for data, like looking for a needle in a haystack, while useful data gather dust somewhere deep in the databank. Compared with the well-developed industrial countries, there is considerable room for improvement in the fundamental aspects that are cardinal requisites for effective knowledge management. The factors contributing to knowledge management in near-term operating plants include not simply training and information management but also almost all other technical and management aspects related to the

  4. TASKA-M - a low cost, near term tandem mirror device for fusion technology testing

    International Nuclear Information System (INIS)

    Badger, B.; Corradini, M.L.; El-Guebaly, L.; Emmert, G.A.; Kulcinski, G.L.; Larsen, E.M.; Maynard, C.W.; Perkins, L.J.; Peterson, R.R.; Plute, K.E.; Santarius, J.F.; Sawan, M.E.; Scharer, J.E.; Sviatoslavsky, I.N.; Sze, D.K.; Vogelsang, W.F.; Wittenberg, L.J.; Leppelmeier, G.W.; Grover, J.M.; Opperman, E.K.; Vogel, M.A.; Borie, E.; Taczanowski, S.; Arendt, F.; Dittrich, H.G.; Fett, T.; Haferkamp, B.; Heinz, W.; Hoelzchen, E.; Kleefeldt, K.; Klingelhoefer, R.; Komarek, P.; Kuntze, M.; Leiste, H.G.; Link, W.; Malang, S.; Manes, B.M.; Maurer, W.; Michael, I.; Mueller, R.A.; Neffe, G.; Schramm, K.; Suppan, A.; Weinberg, D.

    1984-04-01

    TASKA-M (Modifizierte Tandem Spiegelmaschine Karlsruhe) is a study of a dedicated fusion technology device based on the mirror principle, in continuation of the 1981/82 TASKA study. The main objective is to minimize cost while retaining the key requirements of neutron flux and fluence for blanket and material development and for component testing in a nuclear environment. Direct costs are reduced to about 400 M$ by dropping reactor-relevant aspects not essential to technology testing: no thermal barrier and electrostatic plugging of the plasma; a fusion power of 7 MW at an injected power of 44 MW; tritium supply from external sources. All technologies for operating the machine are expected to be available by 1990; the plasma physics relies on microstabilization in a sloshing ion population. (orig.) [de]

  5. Use of models and mockups in verifying man-machine interfaces

    International Nuclear Information System (INIS)

    Seminara, J.L.

    1985-01-01

    The objective of Human Factors Engineering is to tailor the design of facilities and equipment systems to match the capabilities and limitations of the personnel who will operate and maintain the system. This optimization of the man-machine interface is undertaken to enhance the prospects for safe, reliable, timely, and error-free human performance in meeting system objectives. To ensure the eventual success of a complex man-machine system it is important to systematically and progressively test and verify the adequacy of man-machine interfaces from initial design concepts to system operation. Human factors specialists employ a variety of methods to evaluate the quality of the human-system interface. These methods include: (1) reviews of two-dimensional drawings using appropriately scaled transparent overlays of personnel spanning the anthropometric range, considering clothing and protective gear encumbrances; (2) use of articulated, scaled plastic templates or manikins that are overlaid on equipment or facility drawings; (3) development of computerized manikins in computer-aided design approaches; (4) use of three-dimensional scale models to better conceptualize work stations, control rooms, or maintenance facilities; (5) full- or half-scale mockups of system components to evaluate operator/maintainer interfaces; (6) part- or full-task dynamic simulation of operator or maintainer tasks and interactive system responses; (7) laboratory and field research to establish human performance capabilities with alternative system design concepts or configurations. Of the design verification methods listed above, this paper considers only the use of models and mockups in the design process.

  6. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    Science.gov (United States)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    Aiming at the difficulty of quality prediction for sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using an extreme learning machine (ELM). First, mechanism models of the drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanisms and conservation of matter in the sintering process. Because the process is simplified in these mechanism models, they cannot describe the strong nonlinearity and errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
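
    The hybrid structure can be sketched as a simplified "mechanism" model plus an ELM trained on its residuals. The ELM below is the standard random-hidden-layer, least-squares-output construction; the sintering physics and process variables are mocked, and the time weighting is omitted.

```python
# Sketch: hybrid prediction = mechanism model + ELM residual compensation.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, (500, 4))                       # process variables
true_quality = 2 * x[:, 0] + np.sin(3 * x[:, 1]) + 0.5 * x[:, 2] * x[:, 3]

def mechanism_model(x):
    # Mock linearized physics that misses the nonlinear terms.
    return 2 * x[:, 0] + 0.5 * x[:, 2]

residual = true_quality - mechanism_model(x)

# ELM: random input weights, output weights solved by pseudoinverse.
W = rng.standard_normal((4, 50))
b = rng.standard_normal(50)
H = np.tanh(x @ W + b)
beta = np.linalg.pinv(H) @ residual

hybrid_pred = mechanism_model(x) + np.tanh(x @ W + b) @ beta
print("hybrid RMSE:", np.sqrt(np.mean((hybrid_pred - true_quality) ** 2)))
```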

  7. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Full Text Available Background: The virtual screening of large compound databases is an important application of structure–activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results: We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening

  8. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Science.gov (United States)

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structure–activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations
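
    One simple applicability-domain score in this spirit is the mean kernel similarity of a query compound to its k nearest training compounds, sketched below. An RBF kernel on fingerprint-like vectors stands in for the paper's structured graph kernels; descriptors and thresholds are assumptions.

```python
# Sketch: applicability-domain score = mean similarity to k nearest trainers.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(4)
X_train = rng.random((200, 64))          # stand-in molecular descriptors
X_query = rng.random((10, 64))

K = rbf_kernel(X_query, X_train, gamma=0.05)
k = 5
ad_score = np.sort(K, axis=1)[:, -k:].mean(axis=1)   # high = inside domain

# Screening hits with the lowest scores would be discarded, mirroring the
# finding that dropping the least-applicable half improves performance.
print(np.round(ad_score, 3))
```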

  9. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: A case study

    OpenAIRE

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-01-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts, whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failures, machine breakdowns, poor product quality, inability to meet customer demand, poor inventory control, and so on. The work entails the use of temperature and vibration sensors as monitor...

  10. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.

    Science.gov (United States)

    Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E

    2016-10-26

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behavior is still the subject of research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.

  11. Specific modes of vibratory technological machines: mathematical models, peculiarities of interaction of system elements

    Science.gov (United States)

    Eliseev, A. V.; Sitov, I. S.; Eliseev, S. V.

    2018-03-01

    The methodological basis for constructing mathematical models of vibratory technological machines is developed in this article. An approach is proposed that makes it possible to operate a vibration table in a specific mode providing conditions for the dynamic damping of oscillations in the zone where the vibration exciter is placed, while maintaining the specified vibration parameters in the working zone of the table. The aim of the work is to develop methods of mathematical modeling oriented to technological processes with long cycles. Technologies of structural mathematical modeling are used, with structural schemes, transfer functions, and amplitude-frequency characteristics. The concept of the work is to test the possibility of reducing the loads on the working components of a vibration exciter while simultaneously maintaining sufficiently wide limits for varying the parameters of the vibrational field.

  12. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Felix F. Gonzalez-Navarro

    2016-10-01

    Full Text Available Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behavior is still the subject of research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.

  13. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    Science.gov (United States)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
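
    The closure idea can be sketched as an ELM regression from resolved modal coefficients to an eddy-viscosity-like term. The ocean-model data, mode truncation, and target definition below are mocked; only the one-shot least-squares training of the ELM reflects the method named above.

```python
# Sketch: ELM mapping resolved POD coefficients to a closure term.
import numpy as np

rng = np.random.default_rng(5)
a = rng.standard_normal((1000, 10))          # resolved modal coefficients
closure = 0.1 * (a ** 2).sum(axis=1) + 0.01 * rng.standard_normal(1000)

W = rng.standard_normal((10, 80))
b = rng.standard_normal(80)
H = np.tanh(a @ W + b)
beta = np.linalg.lstsq(H, closure, rcond=None)[0]   # one-shot training

# At run time the closure costs a single matrix product per step -- cheap
# enough for long time integrations of the reduced model.
print("training RMSE:", np.sqrt(np.mean((H @ beta - closure) ** 2)))
```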

  14. Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries such as rolling mills, textiles, and railway traction. These systems span multiple domains and time scales and present very strong internal and external couplings, with nonlinearity, and are characterized by a high model order. The classical study with analytic models is difficult to manipulate and is limited to some performances. In this study, a "systemic approach" is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out by Bond Graph and the results are discussed to show the performance of this methodology.

  15. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    Science.gov (United States)

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204
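
    A deliberately stripped-down MIP in this spirit is sketched below with PuLP (an assumed tool, not the paper's): continuous variables split each job across unrelated machines and the makespan is minimized. Setup-time sequencing, subjob counting, and the genetic algorithm are all omitted.

```python
# Sketch: toy job-splitting MIP on unrelated parallel machines.
import pulp

jobs, machines = range(3), range(2)
p = [[4, 6], [3, 5], [8, 4]]          # processing time of job j on machine m

prob = pulp.LpProblem("job_splitting", pulp.LpMinimize)
x = [[pulp.LpVariable(f"x_{j}_{m}", 0, 1) for m in machines] for j in jobs]
cmax = pulp.LpVariable("makespan", 0)

prob += cmax                           # objective: minimize the makespan
for j in jobs:                         # every job fully processed (splits ok)
    prob += pulp.lpSum(x[j][m] for m in machines) == 1
for m in machines:                     # each machine's load bounds makespan
    prob += pulp.lpSum(p[j][m] * x[j][m] for j in jobs) <= cmax

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(cmax))
```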

  16. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare these three techniques and to find the appropriate data representation. The ML techniques are applied to the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.

  17. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers; this is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.

  18. Modeling a ground-coupled heat pump system by a support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

    2008-08-15

    This paper reports on a modeling study of ground coupled heat pump (GCHP) system performance (COP) using a support vector machine (SVM) method. A GCHP system is a multi-variable system that is hard to model by conventional methods. The SVM, by contrast, has a superior capability for generalization, and this capability is independent of the dimensionality of the input data. In this study, an SVM-based method was adopted to model the GCHP system efficiently. The Lin-kernel SVM method was quite efficient for modeling purposes and did not require prior knowledge about the system. The performance of the proposed methodology was evaluated by using several statistical validation parameters. It is found that the root-mean squared (RMS) value is 0.002722, the coefficient of multiple determination (R2) value is 0.999999, the coefficient of variation (cov) value is 0.077295, and the mean error function (MEF) value is 0.507437 for the proposed Lin-kernel SVM method. The optimum parameters of the SVM method were determined by using a greedy search algorithm, which was effective for obtaining the optimum parameters. The simulation results show that the SVM is a good method for prediction of the COP of the GCHP system. The computation of the SVM model is faster compared with other machine learning techniques (artificial neural networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS)), because there are fewer free parameters and only the support vectors (only a fraction of all data) are used in the generalization process. (author)
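
    A hedged sketch of the workflow follows: a linear-kernel SVR predicting COP from a few measured inputs, with a coarse grid search standing in for the paper's greedy parameter search. The input variables and data are synthetic assumptions, not the experimental record.

```python
# Sketch: linear-kernel SVR for GCHP COP with a simple hyperparameter search.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(6)
X = rng.uniform([0, 5, 10], [10, 15, 30], (200, 3))   # e.g. temps, flow rate
cop = 3.0 + 0.08 * X[:, 0] - 0.05 * X[:, 1] + 0.01 * X[:, 2]
cop += 0.05 * rng.standard_normal(200)

search = GridSearchCV(SVR(kernel="linear"),
                      {"C": [0.1, 1, 10], "epsilon": [0.01, 0.1]}, cv=5)
search.fit(X, cop)
print("best params:", search.best_params_,
      "CV R^2:", round(search.best_score_, 4))
```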

  19. Limits, modeling and design of high-speed permanent magnet machines

    NARCIS (Netherlands)

    Borisavljevic, A.

    2011-01-01

    There is a growing number of applications that require fast-rotating machines; the motivation for this thesis comes from a project in which downsized spindles for micro-machining were researched (TU Delft Microfactory project). The thesis focuses on the analysis and design of high-speed PM machines and

  20. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
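
    The linear-versus-kernel trade-off reported above can be illustrated as below, using scikit-learn's LIBLINEAR-backed LinearSVR against an RBF-kernel SVR; the synthetic descriptors and sizes are assumptions, chosen only to make the scaling behavior visible.

```python
# Sketch: training-time contrast between a linear SVM and an RBF-kernel SVM.
import time
import numpy as np
from sklearn.svm import LinearSVR, SVR

rng = np.random.default_rng(7)
X = rng.random((20000, 100))
y = X @ rng.standard_normal(100) + 0.1 * rng.standard_normal(20000)

t0 = time.time()
LinearSVR(max_iter=5000).fit(X, y)         # liblinear-style primal solver
t_linear = time.time() - t0

t0 = time.time()
SVR(kernel="rbf").fit(X[:2000], y[:2000])  # kernel solver on a subset only
t_rbf = time.time() - t0
print(f"linear on 20k rows: {t_linear:.1f}s, RBF on 2k rows: {t_rbf:.1f}s")
```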

  1. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model with good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation controlling both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  2. Research on Dynamic Models and Performances of Shield Tunnel Boring Machine Cutterhead Driving System

    Directory of Open Access Journals (Sweden)

    Xianhong Li

    2013-01-01

    Full Text Available A general nonlinear time-varying (NLTV) dynamic model and a linear time-varying (LTV) dynamic model are presented for the shield tunnel boring machine (TBM) cutterhead driving system. Different gear backlashes, mesh damping, and transmission errors are considered in the NLTV dynamic model. The corresponding multiple-input multiple-output (MIMO) state-space models are also presented. Through analysis of the linear dynamic model, the optimal reducer ratio (ORR) and optimal transmission ratio (OTR) are obtained for the shield TBM cutterhead driving system. The NLTV and LTV dynamic models are numerically simulated, and the effects of physical parameters under various conditions of the NLTV dynamic model are analyzed. Physical parameters such as the load torque, gear backlash and transmission error, gear mesh stiffness and damping, pinion inertia and damping, large gear inertia and damping, and motor rotor inertia and damping are investigated in detail to analyze their effects on the dynamic response and performance of the shield TBM cutterhead driving system. Some preliminary approaches are proposed to improve the dynamic performance of the cutterhead driving system, and the dynamic models will provide a foundation for the shield TBM cutterhead driving system's fault diagnosis, motion control, and torque synchronous control.

  3. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Science.gov (United States)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
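
    The regionalization step above can be sketched as a multi-output Extra-Trees regression from site attributes to calibrated parameters, so that parameters can be predicted at unmonitored cells. Sites, attributes, and parameter values below are mocked; only the three parameter names come from the abstract.

```python
# Sketch: Extra-Trees mapping site attributes to calibrated Noah parameters.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(8)
site_attrs = rng.random((85, 6))     # e.g. climate, vegetation, soil indices
# Mock calibrated (rs_min, Czil, fxexp) per FLUXNET-like site:
params = np.column_stack([
    40 + 100 * site_attrs[:, 0],
    0.05 + 0.5 * site_attrs[:, 1],
    1 + 3 * site_attrs[:, 2],
])

et = ExtraTreesRegressor(n_estimators=500, random_state=0)
et.fit(site_attrs, params)           # multi-output regression
new_site = rng.random((1, 6))        # attributes of an ungauged 5 km cell
print("predicted (rs_min, Czil, fxexp):", np.round(et.predict(new_site), 3))
```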

  4. Fishery landing forecasting using EMD-based least square support vector machine models

    Science.gov (United States)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. The hybrid is formulated specifically for modeling fishery landings, whose time series are highly nonlinear, non-stationary, and seasonal, and can hardly be modelled properly and forecasted accurately by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM, and EMD-ARIMA models on several criteria.
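
    The decompose-forecast-aggregate scheme can be sketched as below. PyEMD's EMD is assumed available for the decomposition, scikit-learn's SVR stands in for the least squares SVM, and the Johor landing record is mocked with a seasonal toy series.

```python
# Sketch: EMD decomposition, one SVR per sub-series, summed forecast.
import numpy as np
from PyEMD import EMD          # assumed dependency (PyEMD package)
from sklearn.svm import SVR

rng = np.random.default_rng(9)
t = np.arange(240)
landings = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 240)

imfs = EMD().emd(landings)     # intrinsic mode functions (plus residue)

def lagged(series, lags=6):
    # Turn a series into (lagged inputs, next value) pairs.
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

forecast = 0.0
for imf in imfs:
    X, y = lagged(imf)
    model = SVR(kernel="rbf", C=10.0).fit(X[:-1], y[:-1])
    forecast += model.predict(X[-1:])[0]   # one-step-ahead, per sub-series
print("one-step-ahead forecast:", round(forecast, 2))
```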

  5. Design concept of K-DEMO for near-term implementation

    Science.gov (United States)

    Kim, K.; Im, K.; Kim, H. C.; Oh, S.; Park, J. S.; Kwon, S.; Lee, Y. S.; Yeom, J. H.; Lee, C.; Lee, G.-S.; Neilson, G.; Kessel, C.; Brown, T.; Titus, P.; Mikkelsen, D.; Zhai, Y.

    2015-05-01

    A Korean fusion energy development promotion law (FEDPL) was enacted in 2007. As a following step, a conceptual design study for a steady-state Korean fusion demonstration reactor (K-DEMO) was initiated in 2012. After a thorough 0-D system analysis, the parameters of the main machine, characterized by major and minor radii of 6.8 m and 2.1 m, respectively, were chosen for further study. Analyses of heating and current drive were performed for the development of the plasma operation scenarios; preliminary results on lower hybrid and neutral beam current drive are included herein. A high performance Nb3Sn-based superconducting conductor is adopted, providing a peak magnetic field approaching 16 T with the magnetic field at the plasma centre above 7 T. Pressurized water is the prominent choice for the main coolant of K-DEMO when the balance-of-plant development details are considered. The blanket system adopts a ceramic pebble type breeder. Considering plasma performance, a double-null divertor is the reference configuration choice of K-DEMO. For high-availability operation, K-DEMO incorporates a design with vertical maintenance. A design concept for K-DEMO is presented together with the preliminary design parameters.

  6. A Melodic Contour Repeatedly Experienced by Human Near-Term Fetuses Elicits a Profound Cardiac Reaction One Month after Birth

    OpenAIRE

    Granier-Deferre, Carolyn; Bassereau, Sophie; Ribeiro, Aurélie; Jacquet, Anne-Yvonne; DeCasper, Anthony J.

    2011-01-01

    Background Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals up to several days after birth. Methodology/Principal Findings Here we show that auditory memories can last at least six weeks. Experimental fe...

  7. Management of hyperbilirubinemia in near-term newborns according to American Academy of Pediatrics Guidelines: Report of three cases

    OpenAIRE

    Naomi Esthemita Dewanto; Rinawati Rohsiswatmo

    2009-01-01

    All neonates have a transient rise in bilirubin levels, and about 30-50% of infants become visibly jaundiced.1,2 Most jaundice is benign; however, because of the potential brain toxicity of bilirubin, newborn infants must be monitored to identify those who might develop severe hyperbilirubinemia and, in rare cases, acute bilirubin encephalopathy or kernicterus. Ten percent of term infants and 25% of near-term infants have significant hyperbilirubinemia and requir...

  8. Neurodevelopmental outcomes of near-term small-for-gestational-age infants with and without signs of placental underperfusion.

    Science.gov (United States)

    Parra-Saavedra, Miguel; Crovetto, Francesca; Triunfo, Stefania; Savchev, Stefan; Peguero, Anna; Nadal, Alfons; Parra, Guido; Gratacos, Eduard; Figueras, Francesc

    2014-04-01

    To evaluate 2-year neurodevelopmental outcomes of near-term, small-for-gestational-age (SGA) newborns segregated by presence or absence of histopathology reflecting placental underperfusion (PUP). A cohort of consecutive near-term (≥ 34.0 weeks) SGA newborns with normal prenatal umbilical artery Doppler studies was selected. All placentas were inspected for evidence of underperfusion and classified in accordance with established histologic criteria. Neurodevelopmental outcomes at 24 months (age-corrected) were then evaluated, applying the Bayley Scale for Infant and Toddler Development, Third Edition (Bayley-III) to assess cognitive, language, and motor competencies. The impact of PUP on each domain was measured via analysis of covariance, logistic and ordinal regression, with adjustment for smoking, socioeconomic status, gestational age at birth, gender, and breastfeeding. A total of 83 near-term SGA deliveries were studied, 46 (55.4%) of which showed signs of PUP. At 2 years, adjusted neurodevelopmental outcomes were significantly poorer in births involving PUP (relative to SGA infants without PUP) for all three domains of the Bayley scale: cognitive (105.5 vs 96.3, adjusted-p = 0.03), language (98.6 vs 87.8, adjusted-p<0.001), and motor (102.7 vs 94.5, adjusted-p = 0.007). Similarly, the adjusted likelihood of abnormal cognitive, language, and motor competencies in instances of underperfusion was 9.3-, 17.5-, and 1.44-fold higher, respectively, differing significantly for the former two domains. In a substantial fraction of near-term SGA babies without Doppler evidence of placental insufficiency, histologic changes compatible with PUP are still identifiable. These infants are at greater risk of abnormal neurodevelopmental outcomes at 2 years. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Modeling x-ray data for the Saturn z-pinch machine

    International Nuclear Information System (INIS)

    Matuska, W.; Peterson, D.; Deeney, C.; Derzon, M.

    1997-01-01

    A wealth of XRD and time-dependent x-ray imaging data exists for the Saturn z-pinch machine, where the load is either a tungsten wire array or a tungsten wire array which implodes onto an SiO2 foam. These pinches have also been modeled with a 2-D RMHD Eulerian computer code. In this paper the authors start from the 2-D Eulerian results to calculate time- and spatially-dependent spectra using both LTE and NLTE models. Then, using response functions, these spectra are converted to XRD currents and camera images, which are quantitatively compared with the data. Through these comparisons, areas of good and of poorer agreement are determined, and areas are identified where the 2-D Eulerian code should be improved

  10. Using fuzzy models in machining control system and assessment of sustainability

    Science.gov (United States)

    Grinek, A. V.; Boychuk, I. P.; Dantsevich, I. M.

    2018-03-01

    A fuzzy model is proposed for describing the complex relationship between the optimum machining velocity and the temperature-strength state in the cutting zone. The fuzzy-logical conclusion allows the processing speed to be determined that ensures an effective temperature in the cutting zone, from the point of view of surface-layer quality, together with the maximum allowable cutting force. A scheme for stabilizing the temperature-strength state in the cutting zone using a nonlinear fuzzy PD controller is proposed. The stability of the nonlinear system is estimated with the help of a grapho-analytical realization of the harmonic balance method and by modeling in MatLab.

  11. Quick Estimation Model for the Concentration of Indoor Airborne Culturable Bacteria: An Application of Machine Learning

    Directory of Open Access Journals (Sweden)

    Zhijian Liu

    2017-07-01

    Full Text Available Indoor airborne culturable bacteria are sometimes harmful to human health; therefore, a quick estimation of their concentration is particularly necessary. However, measuring the indoor microorganism concentration (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to provide a quick estimate of the concentration of indoor airborne culturable bacteria from several measurable indoor environmental indicators, including particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO2 concentration. Our results show that a general regression neural network (GRNN) model can provide a quick and decent estimation based on model training and testing using an experimental database with 249 data groups.
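
    A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression, which can be sketched in a few lines. The indicator ranges, the bacteria response, and the smoothing parameter below are mocked assumptions; only the five inputs and the 249-sample size come from the abstract.

```python
# Sketch: GRNN (kernel-weighted average) estimating bacteria concentration.
import numpy as np

rng = np.random.default_rng(10)
X = rng.uniform([10, 20, 15, 30, 400], [150, 120, 30, 70, 1200], (249, 5))
# columns: PM2.5, PM10, temperature, relative humidity, CO2
cfu = 200 + 2.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(0, 20, 249)

def grnn_predict(X_train, y_train, x, sigma=1.0):
    # Gaussian-weighted average of training targets (inputs standardized).
    mu, sd = X_train.mean(0), X_train.std(0)
    d2 = (((X_train - mu) / sd - (x - mu) / sd) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum()

print("estimate:", round(grnn_predict(X[:-1], cfu[:-1], X[-1]), 1))
```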

  12. Mathematically modelling the power requirement for a vertical shaft mowing machine

    Directory of Open Access Journals (Sweden)

    Jorge Simón Pérez de Corcho Fuentes

    2008-09-01

    Full Text Available This work describes a mathematical model for determining the power demand of a vertical shaft mowing machine, particularly taking into account the influence of speed on cutting power, which differs from that of other mower models. The influence of the apparatus' rotation and translation speeds on power demand was simulated. The results showed that no changes in cutting power were produced by varying the knives' angular speed (if translation speed was constant), while cutting power increased if translation speed was increased. Variations in angular speed, however, influenced other parameters determining total power demand. Determining this vertical shaft mower's cutting pattern led to good crop stubble quality at the mower's lower rotation speed, hence reducing total energy requirements.

  13. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
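
    A stripped-down version of the weighted multiple-kernel KELM is sketched below: a convex combination of Gaussian and polynomial kernels feeds the standard kernel-ELM solve. Fixed weights and kernel parameters replace the QPSO search, and the e-nose data are mocked.

```python
# Sketch: weighted two-kernel KELM with fixed combination coefficients.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.default_rng(11)
X = rng.random((120, 8))
y = np.sign(X[:, 0] - X[:, 1])        # toy +/-1 "gas class" labels

def composite(A, B, w=0.6):
    # Convex combination of a Gaussian and a polynomial base kernel.
    return w * rbf_kernel(A, B, gamma=1.0) + \
           (1 - w) * polynomial_kernel(A, B, degree=2)

C = 10.0
K = composite(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, y)   # KELM output weights

X_new = rng.random((5, 8))
pred = np.sign(composite(X_new, X) @ beta)
print("predicted classes:", pred)
```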

  14. A mathematical model for surface roughness of fluidic channels produced by grinding aided electrochemical discharge machining (G-ECDM)

    Directory of Open Access Journals (Sweden)

    Ladeesh V. G.

    2017-01-01

    Full Text Available Grinding aided electrochemical discharge machining is a hybrid technique which combines the grinding action of an abrasive tool and the thermal effects of electrochemical discharges to remove material from the workpiece and produce complex contours. The present study focuses on developing fluidic channels on borosilicate glass using G-ECDM and attempts to develop a mathematical model for the surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of machining parameters on surface roughness. Voltage, duty factor, frequency, and tool feed rate are identified as the significant factors controlling the surface roughness of channels produced by G-ECDM. A mathematical model was developed for surface roughness by considering the grinding action and the thermal effects of electrochemical discharges in material removal. Experiments were conducted to validate the model, and the results obtained are in good agreement with those predicted by the model.

  15. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Full Text Available Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where the test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic since, at infinite testing time, the test effort should also be infinite. Hence, in this paper we propose an infinite test effort function (TEF) in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) to train the proposed model with software failure data. It is possible to obtain many sets of weights that describe the past failure data equally well, so we use a machine learning approach to select the set of weights that describes both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.

  16. Unsteady aerodynamic modeling at high angles of attack using support vector machines

    Directory of Open Access Journals (Sweden)

    Wang Qing

    2015-06-01

    Full Text Available Accurate aerodynamic models are the basis of flight simulation and control law design. Mathematically modeling unsteady aerodynamics at high angles of attack presents great difficulties in model structure determination and parameter estimation, due to limited understanding of the flow mechanism. Support vector machines (SVMs), based on statistical learning theory, provide a novel tool for nonlinear system modeling. The work presented here examines the feasibility of applying SVMs to high angle-of-attack unsteady aerodynamic modeling. After a review of SVMs, several issues associated with unsteady aerodynamic modeling using SVMs are discussed in detail, such as the selection of input variables, the selection of output variables, and the determination of SVM parameters. Least squares SVM (LS-SVM) models are set up from dynamic wind tunnel test data of a delta wing and an aircraft configuration, and then used to predict the aerodynamic responses in other tests. The predictions are in good agreement with the test data, indicating the satisfactory learning and generalization performance of LS-SVMs.

  17. Support vector machine-based open crop model (SBOCM): Case of rice production in China

    Directory of Open Access Journals (Sweden)

    Ying-xue Su

    2017-03-01

    Full Text Available Existing crop models produce unsatisfactory simulation results and are operationally complicated. The present study, however, demonstrates the unique advantages of statistical crop models for large-scale simulation. Using rice as the research crop, a support vector machine-based open crop model (SBOCM) was developed by integrating developmental stage and yield prediction models. Basic geographical information obtained from surface weather observation stations in China and the 1:1000000 soil database published by the Chinese Academy of Sciences were used. Based on the principle of scale compatibility of modeling data, an open reading frame was designed for the dynamic daily input of meteorological data and the output of rice development and yield records. This was used to generate the rice developmental stage and yield prediction models, which were integrated into the SBOCM system. The parameters, methods, error sources, and other factors were analyzed. Although not a crop physiology simulation model, the proposed SBOCM can be used for perennial simulation and one-year rice predictions within certain scale ranges. It is convenient for data acquisition, regionally applicable, parametrically simple, and effective for multi-scale factor integration. It has the potential for future integration with extensive social and economic factors to improve prediction accuracy and practicability.

  18. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression

    Directory of Open Access Journals (Sweden)

    Rachid Darnag

    2017-02-01

    Full Text Available Support vector machines (SVMs) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and nonlinear models for comparison with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; moreover, the results reveal the superiority of the SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationship was evaluated.

  19. Model design and simulation of automatic sorting machine using proximity sensor

    Directory of Open Access Journals (Sweden)

    Bankole I. Oladapo

    2016-09-01

    Full Text Available Automatic sorting has been reported to be a complex, global problem, because of the inability of sorting machines to incorporate flexibility in their design concept. This research therefore designed and developed an automated system for sorting objects on a conveyor belt. The developed automated sorting machine incorporates flexibility and separates species of non-ferrous metal objects, moving objects automatically to the basket as defined by the regulation of a Programmable Logic Controller (PLC) with a capacitive proximity sensor that detects a value range of objects. The results obtained show that plastic, wood, and steel were sorted into their respective and correct positions with average sorting times of 9.903 s, 14.072 s, and 18.648 s, respectively. The proposed model could be adopted by any institution or industry whose practices are based on mechatronic engineering systems, to guide the industrial sector in the sorting of objects and to serve as a teaching aid, producing lists of classified materials according to the enabled sorting program commands.

  20. A 3D finite element model for the vibration analysis of asymmetric rotating machines

    Energy Technology Data Exchange (ETDEWEB)

    Prabel, B.; Combescure, D. [CEA Saclay, DEN, DM2S, SEMT, DYN, F-91191 Gif Sur Yvette (France); Lazarus, A. [Ecole Polytech, Mecan Solides Lab, F-91128 Palaiseau (France)

    2010-07-01

    This paper suggests a 3D finite element method based on the modal theory in order to analyse linear periodically time-varying systems. Presentation of the method is given through the particular case of asymmetric rotating machines. First, Hill governing equations of asymmetric rotating oscillators with two degrees of freedom are investigated. These differential equations with periodic coefficients are solved with classic Floquet theory leading to parametric quasi-modes. These mathematical entities are found to have the same fundamental properties as classic Eigenmodes, but contain several harmonics possibly responsible for parametric instabilities. Extension to the vibration analysis (stability, frequency spectrum) of asymmetric rotating machines with multiple degrees of freedom is achieved with a fully 3D finite element model including stator and rotor coupling. Due to Hill expansion, the usual degrees of freedom are duplicated and associated with the relevant harmonic of the Floquet solutions in the frequency domain. Parametric quasi-modes as well as steady-state response of the whole system are ingeniously computed with a component-mode synthesis method. Finally, experimental investigations are performed on a test rig composed of an asymmetric rotor running on non-isotropic supports. Numerical and experimental results are compared to highlight the potential of the numerical method. (authors)

  1. A machine learning approach for automated assessment of retinal vasculature in the oxygen induced retinopathy model.

    Science.gov (United States)

    Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago

    2018-03-02

    Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, thus a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, devoid of user intervention within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline using the consensus of several manually curated segmentations using commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License ( https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva ).

  2. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

Turing showed the existence of a model universal for the set of Turing machines in the sense that, given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  3. Validation of a Numerical Model for the Prediction of the Annoyance Condition at the Operator Station of Construction Machines

    Directory of Open Access Journals (Sweden)

    Eleonora Carletti

    2016-11-01

Full Text Available It is well-known that the reduction of noise levels is not strictly linked to the reduction of noise annoyance. Even earthmoving machine manufacturers are increasingly facing customer complaints concerning the noise quality of their machines. Unfortunately, all studies aimed at understanding the relationship between the multidimensional characteristics of noise signals and the auditory perception of annoyance require repeated sessions of jury listening tests, which are time-consuming. In this respect, an annoyance prediction model was developed for compact loaders to assess the annoyance sensation perceived by operators at their workplaces without repeating the full sound quality assessment, using objective parameters only. This paper aims at verifying the feasibility of the developed annoyance prediction model when applied to other kinds of earthmoving machines. For this purpose, an experimental investigation was performed on five earthmoving machines, differing in type, dimension, and engine mechanical power, and the annoyance predicted by the numerical model was compared to the annoyance given by subjective listening tests. The results were evaluated by means of the squared correlation coefficient, R2, and they confirm the possible applicability of the model to other kinds of machines.

  4. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual-views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the predicted PSNR by DDT model are 0.8750 and 0.6137 on average, and by the SVM prediction model are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than the DDT model and the state-of-the-art methods.
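    As a hedged sketch of the SVM stage described above (not the authors' code), the snippet below trains an RBF-kernel support vector regressor to map depth-distortion features to virtual-view PSNR and reports the two statistics the record quotes; the feature matrix and targets are synthetic placeholders:

```python
# Minimal sketch: an SVM regressor mapping depth-distortion features to
# virtual-view PSNR. X stands in for per-view features (e.g. depth MSE,
# DDT-range violations); y stands in for measured PSNR values in dB.
import numpy as np
from scipy.stats import spearmanr
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                              # placeholder features
y = 40 - 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)   # placeholder PSNR (dB)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])
rho, _ = spearmanr(y[150:], pred)       # the record quotes Spearman's rank correlation
rmse = np.sqrt(np.mean((y[150:] - pred) ** 2))
print(f"Spearman rho = {rho:.3f}, RMSE = {rmse:.3f} dB")
```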

  5. Ecophysiological modeling of grapevine water stress in Burgundy terroirs by a machine-learning approach

    Directory of Open Access Journals (Sweden)

    Luca eBrillante

    2016-06-01

Full Text Available In a climate change scenario, successful modeling of the relationships between plant, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for three years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by grapevine was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ13C) of grape sugars at harvest and by the use of a test-set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ13C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions at a local scale, to investigate ecological relationships in the vineyard, and to adapt cultural practices to future conditions.

  6. Ecophysiological Modeling of Grapevine Water Stress in Burgundy Terroirs by a Machine-Learning Approach.

    Science.gov (United States)

    Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin

    2016-01-01

In a climate change scenario, successful modeling of the relationships between plant, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by grapevine was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ(13)C) of grape sugars at harvest and by the use of a test-set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ(13)C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models was described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions at a local scale, to investigate ecological relationships in the vineyard, and to adapt cultural practices to future conditions.
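    A minimal sketch of the gradient-boosted regression both copies of this record describe; the predictor columns follow the abstract's list, but all values and coefficients are synthetic placeholders rather than the Burgundy measurements:

```python
# Illustrative sketch only: a gradient boosting machine regressing midday
# stem water potential on weather and soil predictors named in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(10, 25, n),   # minimum temperature (degC)
    rng.uniform(20, 35, n),   # maximum temperature (degC)
    rng.uniform(0, 30, n),    # rainfall (mm)
    rng.uniform(10, 40, n),   # clay content (%), stands in for soil texture
    rng.uniform(0, 50, n),    # gravel content (%)
    rng.uniform(0, 15, n),    # slope (%)
])
psi_stem = -0.3 - 0.02 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, psi_stem, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.02, max_depth=3)
gbm.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((gbm.predict(X_te) - y_te) ** 2))
print(f"test RMSE = {rmse:.3f} MPa")      # the study reports RMSE < 0.08 MPa
```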

  7. Modeling and Dynamic Analysis of Cutterhead Driving System in Tunnel Boring Machine

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2017-01-01

Full Text Available Failure of the cutterhead driving system (CDS) of a tunnel boring machine (TBM) often occurs under shock and vibration conditions. To investigate the dynamic characteristics and further reduce system vibration, an electromechanical coupling model of the CDS is established which includes a model of the direct torque control (DTC) system for the three-phase asynchronous motor and a purely torsional dynamic model of the multistage gear transmission system. The proposed DTC model can provide driving torque just as the practical inverter motor operates, so that the influence of motor operating behavior is not erroneously estimated. Moreover, nonlinear gear meshing factors, such as time-variant mesh stiffness and transmission error, are included in the dynamic model. Based on the established nonlinear model of the CDS, vibration modes can be classified into three types: rigid motion mode, rotational vibration mode, and planet vibration mode. Moreover, dynamic responses under the actual driving torque and an idealized equivalent torque are compared, which reveals that the ripple of the actual driving torque aggravates vibration of the gear transmission system. An influence index of torque ripple is proposed to show that system vibration increases with torque ripple. This study provides a useful guideline for anti-vibration design and motor control of the CDS in TBMs.

  8. Hidden Markov models and other machine learning approaches in computational molecular biology

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, P. [California Inst. of Tech., Pasadena, CA (United States)

    1995-12-31

This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial, four classes of models are reviewed: Hidden Markov models, artificial Neural Networks, Belief Networks, and Stochastic Grammars. When dealing with DNA and protein primary sequences, Hidden Markov models are among the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov Models and how to apply them to problems in molecular biology.

  9. Machine Learning-based discovery of closures for reduced models of dynamical systems

    Science.gov (United States)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present a ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.

  10. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    Science.gov (United States)

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Numerical Simulations of Two-Phase Flow in a Self-Aerated Flotation Machine and Kinetics Modeling

    KAUST Repository

    Fayed, Hassan E.; Ragab, Saad

    2015-01-01

    A new boundary condition treatment has been devised for two-phase flow numerical simulations in a self-aerated minerals flotation machine and applied to a Wemco 0.8 m3 pilot cell. Airflow rate is not specified a priori but is predicted by the simulations as well as power consumption. Time-dependent simulations of two-phase flow in flotation machines are essential to understanding flow behavior and physics in self-aerated machines such as the Wemco machines. In this paper, simulations have been conducted for three different uniform bubble sizes (db = 0.5, 0.7 and 1.0 mm) to study the effects of bubble size on air holdup and hydrodynamics in Wemco pilot cells. Moreover, a computational fluid dynamics (CFD)-based flotation model has been developed to predict the pulp recovery rate of minerals from a flotation cell for different bubble sizes, different particle sizes and particle size distribution. The model uses a first-order rate equation, where models for probabilities of collision, adhesion and stabilization and collisions frequency estimated by Zaitchik-2010 model are used for the calculation of rate constant. Spatial distributions of dissipation rate and air volume fraction (also called void fraction) determined by the two-phase simulations are the input for the flotation kinetics model. The average pulp recovery rate has been calculated locally for different uniform bubble and particle diameters. The CFD-based flotation kinetics model is also used to predict pulp recovery rate in the presence of particle size distribution. Particle number density pdf and the data generated for single particle size are used to compute the recovery rate for a specific mean particle diameter. Our computational model gives a figure of merit for the recovery rate of a flotation machine, and as such can be used to assess incremental design improvements as well as design of new machines.

  12. Numerical Simulations of Two-Phase Flow in a Self-Aerated Flotation Machine and Kinetics Modeling

    Directory of Open Access Journals (Sweden)

    Hassan Fayed

    2015-03-01

Full Text Available A new boundary condition treatment has been devised for two-phase flow numerical simulations in a self-aerated minerals flotation machine and applied to a Wemco 0.8 m3 pilot cell. Airflow rate is not specified a priori but is predicted by the simulations as well as power consumption. Time-dependent simulations of two-phase flow in flotation machines are essential to understanding flow behavior and physics in self-aerated machines such as the Wemco machines. In this paper, simulations have been conducted for three different uniform bubble sizes (db = 0.5, 0.7 and 1.0 mm) to study the effects of bubble size on air holdup and hydrodynamics in Wemco pilot cells. Moreover, a computational fluid dynamics (CFD)-based flotation model has been developed to predict the pulp recovery rate of minerals from a flotation cell for different bubble sizes, different particle sizes and particle size distribution. The model uses a first-order rate equation, where models for probabilities of collision, adhesion and stabilization and collisions frequency estimated by the Zaitchik-2010 model are used for the calculation of the rate constant. Spatial distributions of dissipation rate and air volume fraction (also called void fraction) determined by the two-phase simulations are the input for the flotation kinetics model. The average pulp recovery rate has been calculated locally for different uniform bubble and particle diameters. The CFD-based flotation kinetics model is also used to predict pulp recovery rate in the presence of particle size distribution. Particle number density pdf and the data generated for single particle size are used to compute the recovery rate for a specific mean particle diameter. Our computational model gives a figure of merit for the recovery rate of a flotation machine, and as such can be used to assess incremental design improvements as well as design of new machines.

  13. Numerical Simulations of Two-Phase Flow in a Self-Aerated Flotation Machine and Kinetics Modeling

    KAUST Repository

    Fayed, Hassan E.

    2015-03-30

    A new boundary condition treatment has been devised for two-phase flow numerical simulations in a self-aerated minerals flotation machine and applied to a Wemco 0.8 m3 pilot cell. Airflow rate is not specified a priori but is predicted by the simulations as well as power consumption. Time-dependent simulations of two-phase flow in flotation machines are essential to understanding flow behavior and physics in self-aerated machines such as the Wemco machines. In this paper, simulations have been conducted for three different uniform bubble sizes (db = 0.5, 0.7 and 1.0 mm) to study the effects of bubble size on air holdup and hydrodynamics in Wemco pilot cells. Moreover, a computational fluid dynamics (CFD)-based flotation model has been developed to predict the pulp recovery rate of minerals from a flotation cell for different bubble sizes, different particle sizes and particle size distribution. The model uses a first-order rate equation, where models for probabilities of collision, adhesion and stabilization and collisions frequency estimated by Zaitchik-2010 model are used for the calculation of rate constant. Spatial distributions of dissipation rate and air volume fraction (also called void fraction) determined by the two-phase simulations are the input for the flotation kinetics model. The average pulp recovery rate has been calculated locally for different uniform bubble and particle diameters. The CFD-based flotation kinetics model is also used to predict pulp recovery rate in the presence of particle size distribution. Particle number density pdf and the data generated for single particle size are used to compute the recovery rate for a specific mean particle diameter. Our computational model gives a figure of merit for the recovery rate of a flotation machine, and as such can be used to assess incremental design improvements as well as design of new machines.
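    The first-order kinetics these three records rely on can be illustrated compactly. The sketch below assumes the common decomposition of the flotation rate constant into a collision frequency and probabilities of collision, adhesion, and stabilization; every number is a made-up placeholder:

```python
# Sketch of first-order flotation kinetics: recovery follows
# R(t) = R_inf * (1 - exp(-k t)), with the rate constant k assembled from
# a collision frequency Z and attachment probabilities, as in the abstract.
import numpy as np

def rate_constant(Z, P_c, P_a, P_s):
    """k = Z * Pc * Pa * Ps, the usual first-order flotation rate form."""
    return Z * P_c * P_a * P_s

def recovery(t, k, R_inf=1.0):
    """Fraction of mineral recovered after flotation time t (seconds)."""
    return R_inf * (1.0 - np.exp(-k * t))

k = rate_constant(Z=50.0, P_c=0.08, P_a=0.6, P_s=0.9)   # placeholder values
for t in (10.0, 60.0, 300.0):
    print(f"t = {t:5.0f} s  ->  R = {recovery(t, k):.3f}")
```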

  14. Modeling of the integrity of machining surfaces: application to the case of 15-5 PH stainless steel finish turning

    International Nuclear Information System (INIS)

    Mondelin, A.

    2012-01-01

During machining, extreme conditions of pressure, temperature and strain appear in the cutting zone. In this thermo-mechanical context, the link between the cutting conditions (cutting speed, lubrication, feed rate, wear, tool coating...) and the integrity of the machined surface represents a major scientific target. This PhD study is part of a global project called MIFSU (Modeling of the Integrity and Fatigue resistance of Machining Surfaces) and it focuses on the finish turning of 15-5PH (a martensitic stainless steel used for parts of helicopter rotors). First, material behavior was studied in order to provide data for machining simulations. Stress-free dilatometry tests were conducted to obtain the austenitization kinetics of 15-5PH steel at high heating rates (up to 11,000 degrees C/s). Then, the parameters of the Leblond metallurgical model were calibrated. In addition, dynamic compression tests (dε/dt ranging from 0.01 to 80/s and ε ≥ 1) were performed to calibrate a strain-rate-dependent elasto-plasticity model (for high strains). These tests also helped to highlight dynamic recrystallization phenomena and their influence on the flow stress of the material; thus, a recrystallization model has also been implemented. In parallel, a numerical model for the prediction of machined surface integrity has been constructed. This model is based on a methodology called 'hybrid' (developed during the PhD thesis of Frederic Valiorgue for AISI 304L steel). The method consists in replacing tool and chip modeling by equivalent loadings (obtained experimentally). A calibration step for these loadings has been carried out using orthogonal cutting and friction tests (with sensitivity studies of machining forces, friction and heat partition coefficients to cutting parameter variations). Finally, numerical predictions of microstructural changes (austenitization and dynamic recrystallization) and residual stresses have been successfully compared with

  15. hERG classification model based on a combination of support vector machine method and GRIND descriptors

    DEFF Research Database (Denmark)

    Li, Qiyuan; Jorgensen, Flemming Steen; Oprea, Tudor

    2008-01-01

and diverse library of 495 compounds. The models combine pharmacophore-based GRIND descriptors with a support vector machine (SVM) classifier in order to discriminate between hERG blockers and nonblockers. Our models were applied at different thresholds from 1 to 40 μM and achieved an overall accuracy up...

  16. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    Directory of Open Access Journals (Sweden)

    Gabere MN

    2016-06-01

Full Text Available Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy-maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved prediction accuracies of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
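    A rough sketch of such an mRMR-plus-SVM pipeline, under stated simplifications: the greedy selector below uses a mutual-information difference criterion as a stand-in for full mRMR, and the expression matrix, labels, and gene count are random placeholders rather than the study's microarray data:

```python
# Simplified mRMR (mutual-information difference criterion) followed by an
# RBF-kernel SVM with 10-fold cross-validation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mrmr(X, y, n_select=30):
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    red_sum = np.zeros(X.shape[1])
    while len(selected) < n_select:
        # add the newly selected gene's redundancy to every candidate's total
        red_sum += mutual_info_regression(X, X[:, selected[-1]], random_state=0)
        score = relevance - red_sum / len(selected)   # relevance minus redundancy
        score[selected] = -np.inf                     # never re-pick a gene
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(90, 500))        # placeholder expression matrix (samples x genes)
y = rng.integers(0, 2, size=90)       # placeholder cancer vs normal labels
genes = mrmr(X, y, n_select=30)
acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, genes], y, cv=10).mean()
print(f"10-fold CV accuracy on {len(genes)} genes: {acc:.2f}")
```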

  17. A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time

    Science.gov (United States)

    Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong

    2013-06-01

In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. We assume that a failed service machine after repair will not be 'as good as new', and that the spare service machine for replacement is only available by order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under these assumptions, we apply the matrix-analytic method to develop the steady-state probabilities of the system, and then we obtain some system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and a direct search method is implemented to determine the optimal value of N that minimises the average cost rate.

  18. Modeling the Financial Distress of Microenterprise StartUps Using Support Vector Machines: A Case Study

    Directory of Open Access Journals (Sweden)

    Antonio Blanco-Oliver

    2014-10-01

Full Text Available Despite the leading role that micro-entrepreneurship plays in economic development, and the high failure rate of microenterprise start-ups in their early years, very few studies have designed financial distress models to detect the financial problems of micro-entrepreneurs. Moreover, due to a lack of research, nothing is known about whether non-financial information and nonparametric statistical techniques improve the predictive capacity of these models. Therefore, this paper provides an innovative financial distress model specifically designed for microenterprise start-ups via support vector machines (SVMs) that employs financial, non-financial, and macroeconomic variables. Based on a sample of almost 5,500 micro-entrepreneurs from a Peruvian Microfinance Institution (MFI), our findings show that the introduction of non-financial information related to the zone in which the entrepreneurs live and situate their business, the duration of the MFI-entrepreneur relationship, the number of loans granted by the MFI in the last year, the loan destination, and the opinion of experts on the probability that microenterprise start-ups may experience financial problems, significantly increases the accuracy of our financial distress model. Furthermore, the results reveal that the models that use SVMs outperform those which employ traditional logistic regression (LR) analysis.

  19. Assessing biomass of diverse coastal marsh ecosystems using statistical and machine learning models

    Science.gov (United States)

    Mo, Yu; Kearney, Michael S.; Riter, J. C. Alexis; Zhao, Feng; Tilley, David R.

    2018-06-01

The importance and vulnerability of coastal marshes necessitate effective ways to closely monitor them. Optical remote sensing is a powerful tool for this task, yet its application to diverse coastal marsh ecosystems consisting of different marsh types is limited. This study samples spectral and biophysical data from freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops statistical and machine learning models to assess the marshes' biomass with combined ground, airborne, and spaceborne remote sensing data. It is found that linear models derived from NDVI and EVI are most favorable for assessing Leaf Area Index (LAI) using multispectral data (R2 = 0.7 and 0.67, respectively), and the random forest models are most useful in retrieving LAI and Aboveground Green Biomass (AGB) using hyperspectral data (R2 = 0.91 and 0.84, respectively). It is also found that marsh type and plant species significantly impact the linear model development (P < 0.05). This study assesses the biomass of Louisiana's coastal marshes using various optical remote sensing techniques, and highlights the impacts of the marshes' species composition on model development and of the sensors' spatial resolution on biomass mapping, thereby providing useful tools for monitoring the biomass of coastal marshes in Louisiana and of diverse coastal marsh ecosystems elsewhere.
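    As a toy illustration of the NDVI-based linear modeling the study favors for multispectral data, the sketch below computes NDVI from red and near-infrared reflectances and fits LAI against it; reflectance ranges, the slope, and the noise level are synthetic assumptions:

```python
# Minimal sketch: linear LAI model driven by NDVI, with placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
red = rng.uniform(0.02, 0.10, 100)            # red-band reflectance
nir = rng.uniform(0.20, 0.50, 100)            # near-infrared reflectance
ndvi = (nir - red) / (nir + red)              # standard NDVI definition
lai = 4.0 * ndvi + rng.normal(0, 0.3, 100)    # synthetic LAI response

model = LinearRegression().fit(ndvi.reshape(-1, 1), lai)
print(f"R^2 = {model.score(ndvi.reshape(-1, 1), lai):.2f}")  # study reports ~0.7
```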

  20. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    Science.gov (United States)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
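    A hedged sketch of the t-SNE step: embed raw spin configurations in two dimensions and look for temperature-dependent clustering. The toy sampler below merely biases spins above and below the critical temperature and is no substitute for the paper's Monte Carlo data:

```python
# Reduce raw Ising-like spin configurations with t-SNE; clusters in the
# embedding that track temperature play the role of the paper's indicator.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)
L, n_per_T = 8, 50
temps = [1.5, 3.5]                              # below / above T_c ~ 2.27 (toy)
configs, labels = [], []
for T in temps:
    bias = 0.95 if T < 2.27 else 0.5            # ordered vs disordered proxy
    for _ in range(n_per_T):
        spins = np.where(rng.random(L * L) < bias, 1, -1)
        configs.append(spins)
        labels.append(T)

emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(
    np.array(configs, dtype=float))
print(emb.shape)                                # (100, 2) embedded points
```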

  1. Establishment of tunnel-boring machine disk cutter rock-breaking model from energy perspective

    Directory of Open Access Journals (Sweden)

    Liwei Song

    2015-12-01

Full Text Available V-type disk cutters are the most important cutting tools in the tunnel-boring machine construction process, and their rock-breaking mechanism has been researched by many scholars all over the world. Adopting the finite element method, this article focuses on the interaction between V-type disk cutters and intact rock, carrying out the analysis in terms of microscopic parameters: first, the stress model of rock breaking was established through analysis of the V-type disk cutter's motion trajectory; second, based on the incremental theorem of elastic–plastic theory, the strain model of the relative changes of rock displacement during the breaking process was created. Then, according to the principle of admissible work in the energy method of elastic–plastic theory, used to analyze the rules of energy transfer in the process of breaking rock, the rock-breaking force of the V-type disk cutter can be regarded as the external force acting on the rock system. Finally, taking the rock system as the reference object, the total potential energy equivalent model of the rock system was derived to obtain the forces in the three directions acting on the V-type disk cutter during the rock-breaking process. This derived model, which has been proved effective and scientific through comparisons with earlier force models and through comparative analysis against experimental data, also opens a new research direction: studying the rock-breaking mechanism from the viewpoint of micro elastic–plastic theory.

  2. FACT. Streamed data analysis and online application of machine learning models

    Energy Technology Data Exchange (ETDEWEB)

    Bruegge, Kai Arno; Buss, Jens [Technische Universitaet Dortmund (Germany). Astroteilchenphysik; Collaboration: FACT-Collaboration

    2016-07-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) like FACT produce a continuous flow of data during measurements. Analyzing the data in near real time is essential for monitoring sources. One major task of a monitoring system is to detect changes in the gamma-ray flux of a source, and to alert other experiments if some predefined limit is reached. In order to calculate the flux of an observed source, it is necessary to run an entire data analysis process including calibration, image cleaning, parameterization, signal-background separation and flux estimation. Software built on top of a data streaming framework has been implemented for FACT and generalized to work with the data acquisition framework of the Cherenkov Telescope Array (CTA). We present how the streams-framework is used to apply supervised machine learning models to an online data stream from the telescope.

  3. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.

  4. Reliability enumeration model for the gear in a multi-functional machine

    Science.gov (United States)

    Nasution, M. K. M.; Ambarita, H.

    2018-02-01

The angle and direction of motion play an important role in the ability of a multifunctional machine to perform its assigned task. The movement can be a rotational action performing a revolution, in which the rotation is achieved by connecting the generator by hand with the help of a hinge formed from two rounded surfaces. The rotation of the entire arm can be carried out through the interconnection between two surfaces having a toothed ring. This linkage changes according to the angle of motion, and each tooth of the serration contributes to the success of the process; therefore, a robust measurement model for the arm is established based on canonical provisions.

  5. Machine learning based cloud mask algorithm driven by radiative transfer modeling

    Science.gov (United States)

    Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.

    2017-12-01

Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold-based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow- and ice-covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow-covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
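    A minimal sketch of the approach, assuming a training set of simulated channel radiances with cloud/clear labels (here replaced by synthetic stand-ins) and a small feed-forward classifier:

```python
# Train a small neural-network cloud/clear classifier on radiances that, in
# the real algorithm, would come from radiative transfer simulations.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 2000
radiances = rng.normal(size=(n, 6))                  # 6 placeholder sensor channels
cloudy = (radiances[:, 0] + 0.5 * radiances[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(radiances, cloudy, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```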

  6. Data on Support Vector Machines (SVM model to forecast photovoltaic power

    Directory of Open Access Journals (Sweden)

    M. Malvoni

    2016-12-01

Full Text Available The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.
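    Since scikit-learn ships no LS-SVM, the sketch below substitutes an RBF-kernel SVR after a PCA reduction to convey the hybrid arrangement; the lagged inputs, component count, and target series are illustrative assumptions only:

```python
# PCA-reduced inputs feeding a kernel regressor for day-ahead PV forecasting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(6)
n, n_lags = 500, 24
X = rng.normal(size=(n, n_lags))                  # 24 hourly lagged inputs
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.2, n)  # next-day PV power (arbitrary units)

model = make_pipeline(StandardScaler(), PCA(n_components=8),
                      SVR(kernel="rbf", C=10.0))  # SVR stands in for LS-SVM
model.fit(X[:400], y[:400])
print(f"R^2 on held-out hours: {model.score(X[400:], y[400:]):.2f}")
```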

  7. Early Colorectal Cancer Detected by Machine Learning Model Using Gender, Age, and Complete Blood Count Data.

    Science.gov (United States)

    Hornbrook, Mark C; Goshen, Ran; Choman, Eran; O'Keeffe-Rosetti, Maureen; Kinar, Yaron; Liles, Elizabeth G; Rust, Kristal C

    2017-10-01

Machine learning tools identify patients with blood counts indicating greater likelihood of colorectal cancer and warranting colonoscopy referral. The aim was to validate a machine learning colorectal cancer detection model on a US community-based insured adult population. Eligible colorectal cancer cases (439 females, 461 males) with complete blood counts before diagnosis were identified from Kaiser Permanente Northwest Region's Tumor Registry. Control patients (n = 9108) were randomly selected from KPNW's population who had no cancers, received ≥1 blood count, had continuous enrollment from 180 days prior to the blood count through 24 months after the count, and were aged 40-89. For each control, one blood count was randomly selected as the pseudo-colorectal cancer diagnosis date for matching to cases, and assigned a "calendar year" based on the count date. For each calendar year, 18 controls were randomly selected to match the general enrollment's 10-year age groups and lengths of continuous enrollment. Prediction performance was evaluated by area under the curve, specificity, and odds ratios. The area under the receiver operating characteristic curve for detecting colorectal cancer was 0.80 ± 0.01. At 99% specificity, the odds ratio for association of a high-risk detection score with colorectal cancer was 34.7 (95% CI 28.9-40.4). The detection model had the highest accuracy in identifying right-sided colorectal cancers. ColonFlag® identifies individuals with tenfold higher risk of undiagnosed colorectal cancer at curable stages (0/I/II), flags colorectal tumors 180-360 days prior to usual clinical diagnosis, and is more accurate at identifying right-sided (compared to left-sided) colorectal cancers.

  8. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  9. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors, and the mean absolute relative error have been employed to compare the performances of the models. It has been concluded that the errors decrease after size reduction, with coefficients of determination increasing from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
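    A sketch of the hybrid arrangement under stated assumptions: partial least squares compresses the inputs and an SVM regressor predicts CO from the PLS scores; the predictors and CO series are synthetic, not the Tehran monitoring data:

```python
# PLS for dimensionality reduction, then an SVM regressor on the PLS scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 800
X = rng.normal(size=(n, 12))                      # placeholder predictors
co = 2.0 + X[:, 0] - 0.5 * X[:, 4] + rng.normal(0, 0.3, n)

pls = PLSRegression(n_components=4).fit(X[:600], co[:600])
scores_tr = pls.transform(X[:600])                # reduced-size inputs for the SVM
scores_te = pls.transform(X[600:])

svm = SVR(kernel="rbf", C=10.0).fit(scores_tr, co[:600])
pred = svm.predict(scores_te)
rmse = np.sqrt(np.mean((pred - co[600:]) ** 2))
print(f"hybrid PLS-SVM RMSE: {rmse:.3f}")
```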

  10. Machine learning modeling of plant phenology based on coupling satellite and gridded meteorological dataset

    Science.gov (United States)

    Czernecki, Bartosz; Nowosad, Jakub; Jabłońska, Katarzyna

    2018-04-01

Changes in the timing of plant phenological phases are important proxies in contemporary climate research. However, most of the commonly used traditional phenological observations do not give any coherent spatial information. While consistent spatial data can be obtained from airborne sensors and preprocessed gridded meteorological data, not many studies robustly benefit from these data sources. Therefore, the main aim of this study is to create and evaluate different statistical models for reconstructing, predicting, and improving the quality of phenological-phase monitoring with the use of satellite and meteorological products. A quality-controlled dataset of 13 BBCH plant phenophases in Poland was collected for the period 2007-2014. For each phenophase, statistical models were built using the most commonly applied regression-based machine learning techniques, such as multiple linear regression, the lasso, principal component regression, generalized boosted models, and random forest. The quality of the models was estimated using k-fold cross-validation. The obtained results showed varying potential for coupling meteorologically derived indices with remote sensing products in terms of phenological modeling; application of both data sources improves model accuracy by 0.6 to 4.6 days in terms of RMSE. It is shown that a robust prediction of early phenological phases is mostly related to meteorological indices, whereas for autumn phenophases there is a stronger information signal provided by satellite-derived vegetation metrics. Choosing a specific set of predictors and applying robust preprocessing procedures is more important for the final results than the selection of a particular statistical model. The average RMSE for the best models across all phenophases is 6.3 days, while individual RMSEs vary seasonally from 3.5 to 10 days. Models give a reliable proxy for ground observations with RMSE below 5 days for early spring and late spring phenophases. For
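    The model-comparison loop implied by the record can be sketched as below, with a handful of regression learners scored by k-fold RMSE in days; predictors and onset dates are synthetic placeholders:

```python
# Compare several regression learners on a phenophase-onset target with
# k-fold cross-validated RMSE, in the spirit of the record above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(250, 10))                    # e.g. degree-day sums, NDVI metrics
onset_day = 100 + 8 * X[:, 0] + rng.normal(0, 4, 250)  # day of year (placeholder)

models = {
    "linear": LinearRegression(),
    "lasso": Lasso(alpha=0.1),
    "gbm": GradientBoostingRegressor(random_state=0),
    "rf": RandomForestRegressor(random_state=0),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, est in models.items():
    rmse = -cross_val_score(est, X, onset_day, cv=cv,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:7s} RMSE = {rmse:.1f} days")
```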

  11. Numerical modelling of micro-machining of f.c.c. single crystal: Influence of strain gradients

    KAUST Repository

    Demiral, Murat

    2014-11-01

    A micro-machining process becomes increasingly important with the continuous miniaturization of components used in various fields from military to civilian applications. To characterise underlying micromechanics, a 3D finite-element model of orthogonal micro-machining of f.c.c. single crystal copper was developed. The model was implemented in a commercial software ABAQUS/Explicit employing a user-defined subroutine VUMAT. Strain-gradient crystal-plasticity and conventional crystal-plasticity theories were used to demonstrate the influence of pre-existing and evolved strain gradients on the cutting process for different combinations of crystal orientations and cutting directions. Crown Copyright © 2014.

  12. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement as to which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs), and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3×5 factorial ANOVA. Overall, both learning techniques and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework for comparing among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are

  13. Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models

    Science.gov (United States)

    Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus

    2017-04-01

Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during the warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, significant uncertainties in input data and in simulations of dust activation and transport limit the performance of numerical models in dust prediction. The presented study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep Blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry plains of Iraq, known as the main origin of the recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: a Linear regression Model (LM), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multivariate Adaptive Regression Splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation, and the 9-month SPEI drought index were selected for dust (AOD) modeling by the MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in the MLA predictions. The data set was divided

  14. Neonatal physiological correlates of near-term brain development on MRI and DTI in very-low-birth-weight preterm infants

    Directory of Open Access Journals (Sweden)

    Jessica Rose, PhD

    2014-01-01

    Results suggest that at near-term age, thalamus WM microstructure may be particularly vulnerable to certain neonatal risk factors. Interactions between albumin, bilirubin, phototherapy, and brain development warrant further investigation. Identification of physiological risk factors associated with selective vulnerability of certain brain regions at near-term age may clarify the etiology of neurodevelopmental impairment and inform neuroprotective treatment for VLBW preterm infants.

  15. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  16. Modeling and control of PEMFC based on least squares support vector machines

    International Nuclear Information System (INIS)

    Li Xi; Cao Guangyi; Zhu Xinjian

    2006-01-01

The proton exchange membrane fuel cell (PEMFC) is one of the most important power supplies. The operating temperature of the stack is an important controlled variable, which impacts the performance of the PEMFC. In order to improve the generating performance of the PEMFC, prolong its life, and guarantee the safety, reliability, and low cost of the PEMFC system, it must be controlled efficiently. A nonlinear predictive control algorithm based on a least squares support vector machine (LS-SVM) model is presented in this paper for a family of complex systems with severe nonlinearity, such as the PEMFC. The nonlinear offline model of the PEMFC is built by an LS-SVM model with a radial basis function (RBF) kernel so as to implement nonlinear predictive control of the plant. During PEMFC operation, the offline model is linearized at each sampling instant, and the generalized predictive control (GPC) algorithm is applied to the predictive control of the plant. Experimental results demonstrate the effectiveness and advantages of this approach.

  17. Model of Peatland Vegetation Species using HyMap Image and Machine Learning

    Science.gov (United States)

    Dayuf Jusuf, Muhammad; Danoedoro, Projo; Muljo Sukojo, Bangun; Hartono

    2017-12-01

The species Tumih/Parepat (Combretocarpus rotundatus (Miq.) Danser, family Anisophylleaceae) and Meranti (Shorea belangerang, Shorea teysmanniana Dyer ex Brandis, family Dipterocarpaceae) form a group for vegetation species distribution modeling. These pioneer species are predicted to be indicators of succession in the restoration of tropical peatland, an ecosystem with extremely fragile (unique) characteristics in the endemic hotspot of Sundaland. Climate change projections and conservation planning are hot topics of current discussion, as are the analysis of alternative approaches and the development of combinations of species projection modelling algorithms within geospatial information systems technology. The modeling approach addresses the research problem at the vegetation level using a hybrid machine learning method combining wavelets and artificial neural networks. Field data are used as a reference collection of natural resource field samples and for biodiversity assessment. The ANN testing and training data set, iterated 28 times, achieves an MSE of 0.0867, smaller than that of the ANN training data (above 50%), with a spectral accuracy of 82.1%. Identification of the sample point positions of the Tumih/Parepat vegetation species using the HyMap image is good enough that, at least for the modelling, the design of the species distribution can reach the target of this study. A computed validation rate above 90% shows that the calculation can be considered reliable.

  18. Evaluation of different machine learning models for predicting and mapping the susceptibility of gully erosion

    Science.gov (United States)

    Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar

    2017-12-01

Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) in modeling the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curve (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that the performance of RF and RBF-SVM in modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.
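    A hedged sketch of the evaluation the record describes, using one of the listed learners (a random forest) on placeholder geo-environmental predictors and scoring by AUC on a held-out split:

```python
# Random forest susceptibility model scored by area under the ROC curve.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.normal(size=(600, 12))                    # 12 geo-environmental factors
gully = (X[:, 0] + 0.8 * X[:, 5] + rng.normal(0, 1, 600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, gully, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"validation AUC = {auc:.2f}")              # the study reports AUC > 0.9
```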

  19. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show...

  20. A model for Intelligent Random Access Memory architecture (IRAM) cellular automata algorithms on the Associative String Processing machine (ASTRA)

    CERN Document Server

    Rohrbach, F; Vesztergombi, G

    1997-01-01

    In the near future, computer performance will be completely determined by how long it takes to access memory. There are bottlenecks in memory latency and memory-to-processor interface bandwidth. The IRAM initiative could be the answer, by putting the Processor-In-Memory (PIM). Starting from the massively parallel processing concept, one reaches a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K-processor ASTRA machine (Associative String Test bench for Research & Applications) developed at CERN can be regarded as forerunners of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64×64 bit-matrix memory chip, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a relevant result for physics, the damage spreading of this model has been investigated.
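
    The damage-spreading experiment can be illustrated without the ASTRA hardware: evolve two replicas of a probabilistic one-dimensional cellular automaton with shared random numbers, flip one site in the second replica, and track the Hamming distance between them. A hedged sketch, assuming a Domany-Kinzel-style update rule (my choice for illustration; the record does not specify the exact rule used):

```python
# Hedged sketch: damage spreading in a 1-D stochastic cellular automaton.
# Two replicas share the same random numbers; only the initial "damage" differs.
import numpy as np

def dk_step(state, rnd, p):
    """Domany-Kinzel-style update: a site turns on with probability p
    if at least one neighbor was on; shared `rnd` keeps replicas coupled."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    active = (left | right).astype(bool)
    return (active & (rnd < p)).astype(np.uint8)

n, steps, p = 64, 200, 0.7
rng = np.random.default_rng(42)
a = rng.integers(0, 2, n, dtype=np.uint8)
b = a.copy()
b[n // 2] ^= 1                      # damage a single site

for _ in range(steps):
    rnd = rng.random(n)             # same noise drives both replicas
    a, b = dk_step(a, rnd, p), dk_step(b, rnd, p)

print("surviving damage (Hamming distance):", int(np.sum(a != b)))
```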

  1. Toward a Progress Indicator for Machine Learning Model Building and Data Mining Algorithm Execution: A Position Paper

    Science.gov (United States)

    Luo, Gang

    2017-01-01

    For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a non-trivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two advanced potential uses of them, with the goal of inspiring future research on this topic. PMID:29177022
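
    To make the notion concrete, below is a toy sketch of the simplest such estimator, which extrapolates the remaining time linearly from the average speed so far; the paper's point is precisely that model building needs something less naive than this:

```python
# Naive progress indicator: estimates the finished fraction and remaining time
# by extrapolating from the average processing speed observed so far.
import time

class ProgressIndicator:
    def __init__(self, total_units):
        self.total = total_units
        self.done = 0
        self.start = time.monotonic()

    def update(self, units_finished):
        self.done += units_finished
        elapsed = time.monotonic() - self.start
        rate = self.done / elapsed if elapsed > 0 else 0.0
        remaining = (self.total - self.done) / rate if rate > 0 else float("inf")
        return self.done / self.total, remaining   # fraction done, est. seconds left

pi = ProgressIndicator(total_units=1000)
for _ in range(10):
    time.sleep(0.01)                               # stand-in for a unit of training work
    frac, eta = pi.update(100)
print(f"{frac:.0%} done, ~{eta:.2f}s left")
```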

  2. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: a case study

    Science.gov (United States)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-03-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts, whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failures and breakdowns, poor product quality, inability to meet customer demand, and poor inventory control, to mention just a few consequences. The work entails the use of temperature and vibration sensors as monitoring probes programmed in a microcontroller using the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type-K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units: one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive the machine information transmitted from the workstation), working cooperatively. The resulting hardware was calibrated, tested through model verification, and validated by least-squares and regression analysis of data read from the gearboxes of the extruding and cutting machines used for polyethylene bag production. The results confirmed the correlation existing between time, vibration and temperature, reflecting an effective formulation of the developed concept.
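
    The least-squares validation step can be pictured as fitting a linear trend of a sensor reading against time and checking the fit quality. A hedged numpy sketch with made-up readings (the actual calibration data come from the gearboxes mentioned above):

```python
# Hedged sketch: least-squares trend of a temperature reading vs. time,
# the kind of regression check used to validate the monitoring hardware.
import numpy as np

t = np.arange(0.0, 10.0, 0.5)                        # hours (illustrative)
temp = 40.0 + 1.8 * t + np.random.default_rng(3).normal(scale=0.7, size=t.size)

A = np.vstack([t, np.ones_like(t)]).T                # design matrix for temp = a*t + b
(a, b), *_ = np.linalg.lstsq(A, temp, rcond=None)
r = np.corrcoef(t, temp)[0, 1]                       # correlation between time and temperature
print(f"slope {a:.2f} degC/h, intercept {b:.2f} degC, r = {r:.3f}")
```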

  3. Modeling and Forecast Biological Oxygen Demand (BOD using Combination Support Vector Machine with Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Abazar Solgi

    2017-06-01

    Full Text Available Introduction: Chemical pollution of surface water is one of the serious issues that threaten water quality, all the more so when surface waters are used for human drinking supply. One of the key parameters used to measure water pollution is BOD. Because many variables affect the water quality parameters and a complex nonlinear relationship exists between them, conventional methods cannot solve the problem of water-resource quality management. For years, artificial intelligence methods have been used for the prediction of nonlinear time series, and good performance has been reported. Recently the wavelet transform, a signal-processing method, has shown good performance in hydrological modeling and is widely used. Extensive research has been carried out globally on the use of Artificial Neural Network and Adaptive Neuro-Fuzzy Inference System models to forecast BOD, but the support vector machine has not yet been studied extensively. For this purpose, this study evaluated the ability of the support vector machine to predict the monthly BOD parameter from the available data: temperature, river flow, DO and BOD. Materials and Methods: SVM was introduced in 1992 by Vapnik, a Russian mathematician, and is built on statistical learning theory. In recent years the use of SVM has received considerable attention. SVM has been used in applications such as handwriting recognition and face recognition, with good results. The linear SVM, the simplest type, consists of a hyperplane that separates the positive and negative data sets with maximum distance: the suitable separator has the maximum distance from each of the two data sets. For this machine, whose output group labels are here -1 and +1, the aim is to obtain the maximum distance between the categories, which is interpreted as having a maximum margin. The wavelet transform is a method in mathematical science whose main idea was
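
    A hedged sketch of the wavelet-plus-SVM combination evaluated in the study, using PyWavelets for a simple wavelet-denoising step and scikit-learn's SVR as the regressor (the db4 wavelet, the soft threshold, and all data are my illustrative choices, not the paper's):

```python
# Hedged sketch: wavelet-denoise monthly series, then regress BOD on
# temperature, flow and DO with a support vector machine.
import numpy as np
import pywt
from sklearn.svm import SVR

def wavelet_denoise(x, wavelet="db4", level=2):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale from finest details
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(0)                               # synthetic stand-in data
n = 120
temp, flow, do = rng.normal(20, 5, n), rng.normal(50, 10, n), rng.normal(8, 1, n)
bod = 2 + 0.1 * temp - 0.05 * do + rng.normal(0, 0.3, n)

X = np.column_stack([wavelet_denoise(v) for v in (temp, flow, do)])
model = SVR(kernel="rbf", C=10.0).fit(X[:96], bod[:96])      # train on the first 8 years
print("test predictions:", model.predict(X[96:101]))
```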

  4. Near-term technology policies for long-term climate targets--economy wide versus technology specific approaches

    International Nuclear Information System (INIS)

    Sanden, B.A.; Azar, Christian

    2005-01-01

    The aim of this paper is to offer suggestions for near-term technology policies serving long-term climate targets, based on some insights into the nature of technical change. We make a distinction between economy-wide and technology-specific policy instruments and put forward two key hypotheses: (i) near-term carbon targets such as the Kyoto protocol can be met by economy-wide price instruments (carbon taxes, or a cap-and-trade system) that change the technologies we pick from the shelf (higher energy efficiency in cars, buildings and industry, wind, biomass for heat and electricity, natural gas instead of coal, solar thermal, etc.); (ii) technology-specific policies are needed to bring new technologies to the shelf. Without these new technologies, stricter emission reduction targets may be considered impossible to meet by government, industry and the general public, and therefore not adopted. The policies required to bring these more advanced technologies to the shelf are more complex and include increased public research and development, demonstration, niche market creation, support for networks within the new industries, standard setting and infrastructure policies (e.g., for hydrogen distribution). There is a risk that society, in its quest for cost-efficiency in meeting near-term emission targets, becomes blind to the more difficult, but equally important, issue of bringing more advanced technologies to the shelf. The paper presents the mechanisms that cause technology lock-in, shows how these very mechanisms can be used to escape the current 'carbon lock-in', and discusses the risk of premature lock-in to new technologies that do not deliver what they currently promise. We then review certain climate policy proposals with regard to their expected technology impact, and finally we present a let-a-hundred-flowers-bloom strategy for the next couple of decades

  5. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    Science.gov (United States)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
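
    The structural half of the attribute set can be illustrated with scipy's Voronoi tessellation: for a set of atomic sites, count each site's Voronoi neighbors (cells sharing a face) and use such statistics alongside composition features in a tree model. A minimal sketch under those assumptions, far short of the authors' full attribute set:

```python
# Hedged sketch: derive a simple Voronoi attribute (number of neighboring cells
# per site) and feed it, with a composition feature, to a decision-tree model.
import numpy as np
from scipy.spatial import Voronoi
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
sites = rng.random((30, 3))                    # toy "atomic" positions in a box
vor = Voronoi(sites)

# Count Voronoi neighbors: each ridge connects a pair of sites sharing a face.
neighbors = np.zeros(len(sites))
for i, j in vor.ridge_points:
    neighbors[i] += 1
    neighbors[j] += 1

composition = rng.random(len(sites))           # stand-in elemental-property feature
X = np.column_stack([neighbors, composition])
y = -1.0 * composition + 0.05 * neighbors      # synthetic "formation energy" target
model = DecisionTreeRegressor(max_depth=4).fit(X, y)
print("predicted energy for first site:", model.predict(X[:1]))
```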

  6. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case-scenario approach to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling-error distribution into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
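
    For reference, the traditional LS-SVM regression baseline that the paper modifies reduces to a single linear solve in its dual form. A minimal numpy sketch of that baseline (the robust variant changes the objective as described above and is not implemented here):

```python
# Hedged sketch: classic LS-SVM regression via its dual linear system
#   [ 0   1^T     ] [b]     [0]
#   [ 1   K + I/g ] [alpha] [y]
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, g=10.0, gamma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / g
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, support values alpha

def lssvm_predict(X_new, X, b, alpha, gamma=1.0):
    return rbf_kernel(X_new, X, gamma) @ alpha + b

X = np.linspace(0, 6, 40)[:, None]
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(np.array([[1.5]]), X, b, alpha))  # ~ sin(1.5)
```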

  7. Modeling workflow to design machine translation applications for public health practice.

    Science.gov (United States)

    Turner, Anne M; Brownstein, Megumu K; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2015-02-01

    The objective was to provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited-English-proficiency individuals, in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model the translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to the translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT into PH translation practice, and a PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on the themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF) network, the back propagation (BP) neural network and the support vector machine (SVM) can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
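
    The Gaussian-MLA coupling can be pictured as follows: compute the classic Gaussian plume concentration, then let a learning model correct it against measurements, with the plume output serving as an input feature. A hedged sketch with a textbook plume formula and an SVR corrector (dispersion coefficients and all numbers are illustrative):

```python
# Hedged sketch: classic Gaussian plume prediction plus an SVR correction,
# the flavor of Gaussian-MLA coupling described in the abstract.
import numpy as np
from sklearn.svm import SVR

def gaussian_plume(q, u, y, z, h, sy, sz):
    """Textbook point-source plume with ground reflection (illustrative)."""
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - h)**2 / (2 * sz**2)) + np.exp(-(z + h)**2 / (2 * sz**2))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

rng = np.random.default_rng(0)
x = rng.uniform(50, 500, 200); y = rng.uniform(-50, 50, 200)   # downwind / crosswind, m
sy, sz = 0.08 * x, 0.06 * x                    # crude stability-class coefficients
c_model = gaussian_plume(1.0, 3.0, y, 0.0, 10.0, sy, sz)
c_obs = c_model * rng.lognormal(0.0, 0.2, 200)  # stand-in "measurements"

features = np.column_stack([x, y, c_model])    # plume output becomes an input feature
corrector = SVR(kernel="rbf", C=100.0, epsilon=1e-4).fit(features[:150], c_obs[:150])
print("corrected concentrations:", corrector.predict(features[150:155]))
```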

  9. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF) network, the back propagation (BP) neural network and the support vector machine (SVM) can be used for gas dispersion prediction. However, the prediction results from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  10. LHC 2010: Summary of the Odyssey So Far and Near-Term Prospects (3/3)

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    In 2010, the LHC delivered proton-proton collisions at an energy of 7 TeV, significantly higher than what was previously attained. This has allowed the experiments to complete the commissioning of the detectors and to perform early measurements of key standard model processes. The inclusive production of particles, jets and photons, the observation of onia and heavy-flavored meson decays, the measurement of the W and Z cross sections, and the observation of top-quark production and decay constitute a full set of measurements which form the base from which searches for physics beyond the standard model can be launched. The results from a number of searches for supersymmetry and some exotic signatures are now appearing. The lectures will review this impressive list of physics achievements from 2010 and consider briefly what 2011 may bring.

  11. LHC 2010: Summary of the Odyssey So Far and Near-Term Prospects (2/3)

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    In 2010, the LHC delivered proton-proton collisions at an energy of 7 TeV, significantly higher than what was previously attained. This has allowed the experiments to complete the commissioning of the detectors and to perform early measurements of key standard model processes. The inclusive production of particles, jets and photons, the observation of onia and heavy-flavored meson decays, the measurement of the W and Z cross sections, and the observation of top-quark production and decay constitute a full set of measurements which form the base from which searches for physics beyond the standard model can be launched. The results from a number of searches for supersymmetry and some exotic signatures are now appearing. The lectures will review this impressive list of physics achievements from 2010 and consider briefly what 2011 may bring.

  12. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    Science.gov (United States)

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules that are mostly unknown, and is therefore in need of an intelligent algorithm that can determine the expression level of a recombinant protein without resource-intensive, lab-based trial-and-error experiments. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, the expression vector, and the expression host. The expression host was confined to Escherichia coli, the most popular bacterial host for overexpressing recombinant proteins. To provide a handle on the problem, the overexpression level was categorized as low, medium or high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. Then, a representative subset of the generated features was determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
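
    A hedged sketch of the modeling pipeline (feature selection followed by a random forest on an imbalanced three-class problem), with synthetic features standing in for the real sequence, vector and host attributes:

```python
# Hedged sketch: feature selection + random forest for a small, imbalanced
# three-class problem (low / medium / high overexpression), on synthetic data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 40))                      # e.g. gene length, GC content, vector flags...
y = rng.choice([0, 1, 2], size=90, p=[0.6, 0.25, 0.15])  # imbalanced low/medium/high labels

clf = make_pipeline(
    SelectKBest(f_classif, k=10),                  # keep a representative feature subset
    RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0),
)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```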

  13. Optics Studies for the CERN Proton Synchrotron Machine Linear and Nonlinear Modelling using Beam Based Measurements

    CERN Document Server

    Cappi, R; Martini, M; Métral, Elias; Métral, G; Steerenberg, R; Müller, A S

    2003-01-01

    The CERN Proton Synchrotron machine is built using combined function magnets. The control of the linear tune as well as the chromaticity in both planes is achieved by means of special coils added to the main magnets, namely two pole-face-windings and one figure-of-eight loop. As a result, the overall magnetic field configuration is rather complex not to mention the saturation effects induced at top-energy. For these reasons a linear model of the PS main magnet does not provide sufficient precision to model particle dynamics. On the other hand, a sophisticated optical model is the key element for the foreseen intensity upgrade and, in particular, for the novel extraction mode based on adiabatic capture of beam particles inside stable islands in transverse phase space. A solution was found by performing accurate measurement of the nonlinear tune as a function of both amplitude and momentum offset so to extract both linear and nonlinear properties of the lattice. In this paper the measurement results are present...

  14. Mathematical model of the crystallizing blank's thermal state at the horizontal continuous casting machine

    Directory of Open Access Journals (Sweden)

    Kryukov Igor Yu.

    2017-01-01

    Full Text Available The present article is devoted to the development of a mathematical model describing the thermal state and crystallization process of a rectangular cross-section blank during continuous extraction from a horizontal continuous casting machine (HCCM). The developed model takes into account the heat-transfer properties of non-ferrous metal teeming; its temperature on entry to the casting mold; and the cooling conditions of the blank in the carbon molds in the presence of a copper water cooler. Besides, the asymmetry of heat exchange at the top and bottom of the blank in the mold, arising from fluid contraction and the features of the horizontal casting mold, has been considered. The developed mathematical model allows the following characteristics of the crystallizing blank to be determined as functions of time: the temperature pattern of the crystallizing blank under different working regimes of the HCCM; the boundaries of the solid two-phase and liquid two-phase fields; and the variation of the blank's thickness under shrinkage of the ingot material
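
    The kind of calculation such a model performs can be hinted at with a one-dimensional explicit finite-difference slice across the blank's thickness. A heavily simplified sketch (constant properties, no latent heat of crystallization, illustrative numbers only):

```python
# Hedged sketch: 1-D explicit finite-difference cooling of a slice of the blank.
# Real HCCM models add latent heat of crystallization, 2-D/3-D geometry and
# asymmetric mold boundary conditions; none of that is attempted here.
import numpy as np

alpha = 1.1e-4        # thermal diffusivity, m^2/s (illustrative, copper-ish)
L, nx = 0.05, 51      # 50 mm thick blank, 51 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                 # satisfies the explicit stability limit

T = np.full(nx, 1100.0)                  # melt entering the mold, degC (illustrative)
T[0], T[-1] = 200.0, 300.0               # asymmetric mold-wall temperatures

for _ in range(2000):                    # march in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print("mid-thickness temperature:", round(T[nx // 2], 1), "degC")
```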

  15. MODEL OF THE QUALITY MANAGEMENT SYSTEM OF A MACHINE TOOL COMPANY

    Directory of Open Access Journals (Sweden)

    Катерина Вікторівна КОЛЕСНІКОВА

    2016-02-01

    Full Text Available The development of models and methods that improve the competitive position of enterprises by improving their management processes is an important task of project management. The lack of project management within information technology, and of continuously improved methods for managing the environment, interaction, community, value and trust, grounded in the strategic objectives of enterprises and in models that take the relationships of the system into account, results in significant material and resource costs. The current work improves the quality management system of the machine-tool company HC MIKRON® and shows that the introduction of new processes for the critical analysis of product requirements, for supporting the products delivered to consumers, and for forming a system of responsibility, division of responsibilities and reporting (according to ISO 9001:2009) is an important and scientifically justified step towards raising the level of technological maturity and the structural modernization of enterprise management. For the improved structure, the analysis model was tested for the property of ergodicity, as a condition of efficiency, of the new quality management system.

  16. Machine listening intelligence

    Science.gov (United States)

    Cella, C. E.

    2017-05-01

    This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.

  17. Improving Simulations of Extreme Flows by Coupling a Physically-based Hydrologic Model with a Machine Learning Model

    Science.gov (United States)

    Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.

    2017-12-01

    With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967
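
    The offline coupling itself is a small post-processing step. A hedged sketch with scikit-learn's MLPRegressor standing in for the ANN and synthetic flows standing in for the SWAT output:

```python
# Hedged sketch: post-process simulated daily flows with an ANN so that the
# extremes track observations better (offline coupling, synthetic data only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(3650)
q_obs = 80 + 60 * np.sin(2 * np.pi * t / 365) ** 2 + rng.gamma(2.0, 15.0, t.size)
q_sim = 0.6 * q_obs + 0.4 * q_obs.mean()       # a "hydrologic model" that damps extremes

X = np.column_stack([q_sim, np.roll(q_sim, 1), np.roll(q_sim, 2)])[2:]  # lagged inputs
y = q_obs[2:]

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann.fit(X[:2900], y[:2900])                    # calibrate on the early years
q_corr = ann.predict(X[2900:])
print(f"max flow  obs {y[2900:].max():.0f}  sim {X[2900:, 0].max():.0f}  corrected {q_corr.max():.0f}")
```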

  18. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    Directory of Open Access Journals (Sweden)

    Qiang Shang

    Full Text Available Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the prediction accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust.
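
    The SSA filtering step can be written in a few lines of numpy: embed the series in a Hankel trajectory matrix, truncate its SVD, and average the anti-diagonals. A hedged sketch of that step, with the window length and rank chosen arbitrarily:

```python
# Hedged sketch: basic singular spectrum analysis used as a noise filter.
import numpy as np

def ssa_filter(x, window=24, rank=3):
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # keep leading components
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(window):                                    # anti-diagonal averaging
        out[i:i + k] += Xr[i]
        cnt[i:i + k] += 1
    return out / cnt

t = np.arange(288)                                             # e.g. 5-min counts over a day
flow = 300 + 150 * np.sin(2 * np.pi * t / 288) + np.random.default_rng(0).normal(0, 30, t.size)
smooth = ssa_filter(flow)                                      # input for the KELM predictor
print(f"std before/after: {flow.std():.1f} / {smooth.std():.1f}")
```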

  19. Parameter Identification of Ship Maneuvering Models Using Recursive Least Square Method Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Man Zhu

    2017-03-01

    Full Text Available The determination of ship maneuvering models is a tough task in ship maneuverability prediction. Among the several prime approaches to estimating ship maneuvering models, system identification combined with full-scale or free-running model tests is preferred. In this contribution, real-time system identification programs using a recursive identification method, the recursive least squares method (RLS), are employed for the on-line identification of ship maneuvering models. However, this method depends strongly on the object of study and on the initial values of the identified parameters. To overcome this, an intelligent technique, support vector machines (SVM), is first used to estimate the initial values of the identified parameters from finite samples. As real measured motion data of the Mariner-class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. The wavelet method and empirical mode decomposition (EMD) are used, respectively, to filter the data corrupted by noise. The choice of the sample number for SVM to decide the initial values of the identified parameters is extensively discussed and analyzed. With the de-noised motion data as input-output training samples, the parameters of the ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between the identification results and the true values of the parameters demonstrates that the ship maneuvering models identified by both RLS and SVM-RLS agree reasonably with the simulated motions of the ship, and that increasing the number of samples for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and the best convergence.
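
    The RLS core of the identification scheme is compact. A hedged numpy sketch with a forgetting factor; in the SVM-RLS variant the initial parameter vector would come from the SVM estimate rather than zeros:

```python
# Hedged sketch: recursive least squares with a forgetting factor, the
# on-line core of the SVM-RLS identification scheme (synthetic regression).
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One RLS step: x is the regressor vector, y the measured output."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # parameter update from prediction error
    P = (P - np.outer(k, Px)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([0.8, -0.4, 1.5])  # stand-in hydrodynamic coefficients
theta = np.zeros(3)                       # SVM-RLS would seed this from the SVM fit
P = 1e3 * np.eye(3)

for _ in range(500):
    x = rng.normal(size=3)                # regressors built from measured motion data
    y = true_theta @ x + rng.normal(scale=0.05)
    theta, P = rls_update(theta, P, x, y)

print("identified parameters:", theta.round(3))
```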

  20. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

    The "interpretation through synthesis" approach to analyze face images, particularly Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly depended on the training sets and inherently on the genera...