WorldWideScience

Sample records for machine setup time

  1. Setup Time Reduction On Solder Paste Printing Machine – A Case Study

    Directory of Open Access Journals (Sweden)

    Rajesh Dhake

    2013-06-01

Lean manufacturing envisages the reduction of the seven deadly wastes referred to as MUDA. Setup time forms a major component of equipment downtime: it leads to lower machine utilization and restricts output and product variety, which creates the need for quick setups. The Single Minute Exchange of Die philosophy (a lean manufacturing tool hereafter referred to as "SMED") is one of the important tools aimed at quick setups, driving smaller lot sizes, lower production costs, improved productivity in terms of increased output, increased utilization of machine and labor hours, additional capacity (often at bottleneck resources), reduced scrap and rework, and increased flexibility [3]. This paper focuses on the application of Single Minute Exchange of Die [1] and the Quick Changeover Philosophy [2] for reducing setup time on a solder paste printing machine (the bottleneck machine) in an electronic speedo-cluster manufacturing company. The four-step SMED philosophy was adopted to effect the reduction in setup time. The initial step was gathering information about the present setup times and their proportion of the total productive time. A detailed video-based time study of setup activities was done to classify them into external and internal setup activities in terms of their need (i.e., preparation, replacement or adjustment), time taken, and the way they could be reduced, simplified or eliminated. The improvements effected were of three categories, viz. mechanical, procedural and organizational. The paper concludes by comparing the present and proposed (implemented) setup procedures.

  2. Single-machine scheduling with release dates, due dates and family setup times

    NARCIS (Netherlands)

Schutten, Johannes M.J.; van de Velde, S.L.; Zijm, Willem H.M.

    1996-01-01

We address the NP-hard problem of scheduling n independent jobs with release dates, due dates, and family setup times on a single machine to minimize the maximum lateness. This problem arises from the constant tug-of-war going on in manufacturing between efficient production and delivery performance.

  3. Single-machine scheduling with release dates, due dates, and family setup times

    NARCIS (Netherlands)

    J.M.J. Schutten (Marco); S.L. van de Velde (Steef); W.H.M. Zijm

    1996-01-01

We address the NP-hard problem of scheduling n independent jobs with release dates, due dates, and family setup times on a single machine to minimize the maximum lateness. This problem arises from the constant tug-of-war going on in manufacturing between efficient production and delivery performance.

  4. Single Machine Scheduling and Due Date Assignment with Past-Sequence-Dependent Setup Time and Position-Dependent Processing Time

    Directory of Open Access Journals (Sweden)

    Chuan-Li Zhao

    2014-01-01

This paper considers single machine scheduling and due date assignment with setup times. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in the sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems under two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we prove that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we prove that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm.
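The past-sequence-dependent setup model in this record can be made concrete with a small sketch. The proportionality constant, the position-effect exponent and the job data below are hypothetical illustrations, not values from the paper; only the structure (the setup before each position is proportional to the processing already performed, and processing times carry a position effect) follows the abstract.

```python
# Hypothetical illustration of past-sequence-dependent (p-s-d) setup times with
# position-dependent processing times; job data, gamma and the position exponent
# are invented for the example, not taken from the paper.

def evaluate_sequence(sequence, base_times, gamma=0.1, position_exponent=0.9):
    """Return (total setup time, completion times) for a given job sequence.

    The actual processing time of the job in position r (1-indexed) is assumed
    to be base_time * r**position_exponent; the setup before position r is gamma
    times the sum of the actual processing times already performed (p-s-d setups).
    """
    completion = []
    processed_so_far = 0.0   # sum of actual processing times already done
    clock = 0.0
    total_setup = 0.0
    for r, job in enumerate(sequence, start=1):
        setup = gamma * processed_so_far                     # past-sequence-dependent setup
        actual = base_times[job] * r ** position_exponent    # position effect
        clock += setup + actual
        total_setup += setup
        processed_so_far += actual
        completion.append((job, clock))
    return total_setup, completion

if __name__ == "__main__":
    base = {"J1": 4.0, "J2": 2.0, "J3": 6.0}   # hypothetical jobs
    setups, completions = evaluate_sequence(["J2", "J1", "J3"], base)
    print("total p-s-d setup time:", round(setups, 2))
    for job, c in completions:
        print(job, "completes at", round(c, 2))
```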

  5. Setup planning for machining

    CERN Document Server

    Hazarika, Manjuri

    2015-01-01

Professionals as well as researchers can benefit from this comprehensive introduction to the topic of setup planning, which reflects the latest state of research and gives hands-on examples. Starting with a brief but thorough introduction, this book explains the significance of setup planning in process planning and includes a reflection on its external constraints. Step by step, the different phases of setup planning are outlined, and traditional as well as modern approaches to the solution of setup planning problems, such as fuzzy-logic-based setup planning, are presented. Three detailed examples of applications provide a clear and accessible insight into up-to-date techniques and various approaches in setup planning.

  6. Robust Parallel Machine Scheduling Problem with Uncertainties and Sequence-Dependent Setup Time

    Directory of Open Access Journals (Sweden)

    Hongtao Hu

    2016-01-01

A parallel machine scheduling problem in plastic production is studied in this paper. In this problem, the processing times and arrival times are uncertain but lie in their respective intervals. In addition, each job must be processed together with a mold, while jobs belonging to the same family can share the same mold. Therefore, a mold change is required between two consecutive jobs that belong to different families, which gives rise to sequence-dependent setup times. This paper aims to identify a robust schedule under the min-max regret criterion. It is proved that the scenario incurring the maximal regret for each feasible solution lies in a finite set of extreme scenarios. A mixed integer linear programming formulation and an exact algorithm are proposed to solve the problem. Moreover, a modified artificial bee colony algorithm is developed to solve large-scale problems. The performance of the presented algorithm is evaluated through extensive computational experiments, and the results show that the proposed algorithm surpasses the exact method in terms of objective value and computational time.

  7. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

This paper tackles the single machine scheduling problem with dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic to solve the model. The proposed approach uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of the GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.
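A minimal sketch of the objective such a GA would evaluate, assuming the "dependent" setups are sequence-dependent: the total weighted tardiness of a job sequence on a single machine. The job data and setup matrix are hypothetical.

```python
# Minimal sketch of the objective evaluated inside such a GA: total weighted
# tardiness of a job sequence on one machine with sequence-dependent setup
# times. The job data and setup matrix are hypothetical.

def total_weighted_tardiness(sequence, proc, due, weight, setup, initial_state="start"):
    """Compute the total weighted tardiness of `sequence` on a single machine."""
    clock = 0.0
    twt = 0.0
    prev = initial_state
    for job in sequence:
        clock += setup[prev][job] + proc[job]   # sequence-dependent setup, then processing
        twt += weight[job] * max(0.0, clock - due[job])
        prev = job
    return twt

if __name__ == "__main__":
    proc = {"A": 3, "B": 5, "C": 2}
    due = {"A": 6, "B": 7, "C": 9}
    weight = {"A": 2, "B": 1, "C": 3}
    setup = {
        "start": {"A": 1, "B": 2, "C": 1},
        "A": {"B": 2, "C": 3},
        "B": {"A": 1, "C": 2},
        "C": {"A": 2, "B": 1},
    }
    print(total_weighted_tardiness(["A", "B", "C"], proc, due, weight, setup))
```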

  8. Split Scheduling with Uniform Setup Times

    NARCIS (Netherlands)

    Schalekamp, F.; Sitters, R.A.; van der Ster, S.L.; Stougie, L.; Verdugo, V.; van Zuylen, A.

    2015-01-01

We study a scheduling problem in which jobs may be split into parts, where the parts of a split job may be processed simultaneously on more than one machine. Each part of a job requires a setup time, however, on the machine where the job part is processed. During setup, a machine cannot process or set up any other job.

  9. Parallel machine scheduling with release dates, due dates and family setup times

    NARCIS (Netherlands)

    Schutten, Johannes M.J.; Leussink, R.A.M.

    1996-01-01

    In manufacturing, there is a fundamental conflict between efficient production and delivery performance. Maximizing machine utilization by batching similar jobs may lead to poor delivery performance. Minimizing customers' dissatisfaction may lead to an inefficient use of the machines. In this paper,

  10. Split scheduling with uniform setup times.

    NARCIS (Netherlands)

    F. Schalekamp; R.A. Sitters (René); S.L. van der Ster; L. Stougie (Leen); V. Verdugo; A. van Zuylen

    2015-01-01

We study a scheduling problem in which jobs may be split into parts, where the parts of a split job may be processed simultaneously on more than one machine. Each part of a job requires a setup time, however, on the machine where the job part is processed. During setup, a machine cannot process or set up any other job.

  11. A Novel Ant Colony Algorithm for the Single-Machine Total Weighted Tardiness Problem with Sequence Dependent Setup Times

    Directory of Open Access Journals (Sweden)

    Fardin Ahmadizar

    2011-08-01

This paper deals with the NP-hard single-machine total weighted tardiness problem with sequence-dependent setup times. Incorporating fuzzy sets and genetic operators, a novel ant colony optimization algorithm is developed for the problem. In the proposed algorithm, artificial ants construct solutions as orders of jobs based on heuristic information as well as pheromone trails. To calculate the heuristic information, three well-known priority rules are adopted as fuzzy sets and then aggregated. When all artificial ants have completed their constructions, genetic operators such as crossover and mutation are applied to generate new regions of the solution space. A local search is then performed to improve the quality of some of the solutions found. Moreover, at run time the pheromone trails are locally as well as globally updated and limited between lower and upper bounds. The proposed algorithm is tested on a set of benchmark problems from the literature and compared with other metaheuristics.

  12. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    Science.gov (United States)

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

Group scheduling is significant for efficient and cost-effective production systems. However, there exist setup times between groups, which need to be reduced by sequencing the groups in an efficient way. The current research focuses on a sequence-dependent group scheduling problem with the aim of minimizing the makespan and the total weighted tardiness simultaneously. In most production scheduling problems the processing time of jobs is assumed to be fixed; however, the actual processing time of jobs may be reduced due to the "learning effect". The integration of the sequence-dependent group scheduling problem with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating some steps of a genetic algorithm is proposed for this problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.

  13. The TwinEBIS setup: Machine description

    Energy Technology Data Exchange (ETDEWEB)

    Breitenfeldt, M. [CERN, Geneva 23 CH-1211 (Switzerland); Mertzig, R. [CERN, Geneva 23 CH-1211 (Switzerland); Technische Universität Dresden, 01069 Dresden (Germany); Pitters, J. [CERN, Geneva 23 CH-1211 (Switzerland); Technische Universität Wien, 1040 Vienna (Austria); Shornikov, A. [CERN, Geneva 23 CH-1211 (Switzerland); GANIL, Bd. Becquerel, BP 55027, 14076 Caen Cedex 05 (France); Wenander, F., E-mail: fredrik.wenander@cern.ch [CERN, Geneva 23 CH-1211 (Switzerland)

    2017-06-01

    TwinEBIS is an Electron Beam Ion Source (EBIS) recently made operational at CERN. The device is similar in construction to the REXEBIS charge breeder operating at the ISOLDE facility. After relocation of the solenoid from the Manne Siegbahn Laboratory (MSL) Stockholm, TwinEBIS was commissioned at CERN and serves as a test bench dedicated to manipulation of low-energy highly charged ions. In this paper we give an overview of the setup and present advanced numerical simulations of the electron optics. In addition, the alignment procedure of the solenoid magnetic field is described and measurement results are presented. Results from cathode investigations, electron beam tests and ion extraction modulation are presented in a follow-up paper.

  14. Single Machine Multi-product Capacitated Lotsizing with Sequence-dependent Setups

    OpenAIRE

Almada-Lobo, Bernardo; Klabjan, Diego; Carravilla, Maria Antónia; Oliveira, Jose Fernando

    2007-01-01

In production planning in the glass container industry, machine-dependent setup times and costs are incurred for switchovers from one product to another. The resulting multi-item capacitated lot sizing problem has sequence-dependent setup times and costs. We present two novel linear mixed integer programming formulations for this problem, incorporating all the necessary features of setup carryovers. The compact formulation has polynomially many constraints, while, on the o...

  15. Minimization of number of setups for mounting machines

    Energy Technology Data Exchange (ETDEWEB)

    Kolman, Pavel; Nchor, Dennis; Hampel, David [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, 603 00 Brno (Czech Republic); Žák, Jaroslav [Institute of Technology and Business, Okružní 517/10, 370 01 České Budejovice (Czech Republic)

    2015-03-10

The article deals with the problem of minimizing the number of setups for SMT mounting machines. An SMT machine is a device used to assemble components on printed circuit boards (PCBs) during the manufacturing of electronics. Each type of PCB has a different set of obligatory components. Components are placed in the SMT tray. The problem lies in the fact that the total number of components used across all products is greater than the size of the tray. Therefore, every change of manufactured product requires a complete change of components in the tray (i.e., a setup change). Currently, the number of setups corresponds to the number of printed circuit board types. Every production change therefore entails a setup change and stops production for one shift. Many components occur in several products, so the question arose as to how to group the products so as to minimize the number of setups. This would result in a large increase in production efficiency.
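The grouping idea described above can be illustrated with a rough greedy sketch: merge products into setup groups as long as the union of their component sets still fits in the tray. This is only an illustrative heuristic, not the authors' method; the product data and tray capacity are hypothetical.

```python
# Greedy sketch of the grouping idea: pack PCB types into groups whose combined
# component set still fits the SMT tray, so one setup serves the whole group.
# Illustrative heuristic only; product data and tray capacity are hypothetical.

def group_products(components_per_product, tray_capacity):
    """Greedily pack products into groups whose combined component set fits the tray."""
    groups = []
    # Consider products with many components first so they anchor the groups.
    for product, comps in sorted(components_per_product.items(),
                                 key=lambda kv: len(kv[1]), reverse=True):
        placed = False
        for group in groups:
            union = group["components"] | comps
            if len(union) <= tray_capacity:
                group["products"].append(product)
                group["components"] = union
                placed = True
                break
        if not placed:
            groups.append({"products": [product], "components": set(comps)})
    return groups

if __name__ == "__main__":
    products = {                       # hypothetical PCB types and their components
        "PCB1": {"R1", "R2", "C1", "U1"},
        "PCB2": {"R1", "C1", "U2"},
        "PCB3": {"R3", "C2", "U1", "U3"},
        "PCB4": {"R2", "C1", "U2"},
    }
    for g in group_products(products, tray_capacity=6):
        print(g["products"], "->", len(g["components"]), "feeder slots")
```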

  16. The ATLAS Level-1 Trigger Timing Setup

    CERN Document Server

    Spiwoks, R; Ellis, Nick; Farthouat, P; Gällnö, P; Haller, J; Krasznahorkay, A; Maeno, T; Pauly, T; Pessoa-Lima, H; Resurreccion-Arcas, I; Schuler, G; De Seixas, J M; Torga-Teixeira, R; Wengler, T

    2005-01-01

The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a bunch-crossing rate of 40 MHz. In order to reduce the data rate, a three-level trigger system selects potentially interesting physics. The first trigger level is implemented in electronics and firmware. It aims at reducing the output rate to less than 100 kHz. The Central Trigger Processor combines information from the calorimeter and muon trigger processors and makes the final Level-1-Accept decision. It is a central element in the timing setup of the experiment. Three aspects are considered in this article: the timing setup with respect to the Level-1 trigger, with respect to the experiment, and with respect to the world.

  17. Development of Experimental Setup of Metal Rapid Prototyping Machine using Selective Laser Sintering Technique

    Science.gov (United States)

    Patil, S. N.; Mulay, A. V.; Ahuja, B. B.

    2018-04-01

Unlike traditional manufacturing processes, additive manufacturing (rapid prototyping) allows designers to produce parts that were previously considered too complex to make economically. A shift is taking place from plastic prototypes to fully functional metallic parts by direct deposition of metallic powders, as the produced parts can be used directly for the desired purpose. This work is directed towards the development of an experimental setup of a metal rapid prototyping machine using selective laser sintering (SLS) and studies the various parameters which play an important role in metal rapid prototyping using the SLS technique. The machine structure is mainly divided into three main categories, namely (1) Z-movement of the bed and table, (2) the X-Y movement arrangement for laser movements and (3) the feeder mechanism. Z-movement of the bed is controlled using a lead screw, bevel gear pair and stepper motor, which maintains the accuracy of the layer thickness. X-Y movements are controlled using a timing belt and stepper motors for precise movement of the laser source. A feeder mechanism is then developed to control the uniformity of the metal powder layer thickness. Simultaneously, a study is carried out for the selection of material. Various types of metal powders can be used for metal RP, such as a single metal powder, a mixture of two metal powders, or a combination of metal and polymer powders. The study concludes that a mixture of two metal powders should be used to minimize problems such as the balling effect and porosity. The developed system can be validated by conducting various experiments on manufactured parts to check mechanical and metallurgical properties. After studying the results of these experiments, various process parameters such as laser properties (power, speed, etc.) and material properties (grain size, structure, etc.) will be optimized. This work is mainly focused on the design and development of a cost-effective experimental setup for metal rapid prototyping using the SLS technique which will give the feel of

  18. Evaluating the performance of constructive heuristics for the blocking flow shop scheduling problem with setup times

    Directory of Open Access Journals (Sweden)

    Mauricio Iwama Takano

    2019-01-01

This paper addresses the minimization of makespan for the permutation flow shop scheduling problem with blocking and sequence- and machine-dependent setup times, a problem not addressed in previous studies. The 14 best-known heuristics for the permutation flow shop problem with blocking and no setup times are presented and then adapted to the problem in two different ways, resulting in 28 different heuristics. The heuristics are then compared using the Taillard database. As there is no other work that addresses the problem with blocking and sequence- and machine-dependent setup times, a database of setup times was created. The setup time values were uniformly distributed between 1% and, respectively, 10%, 50%, 100% and 125% of the processing time values. Computational tests are then presented for each of the 28 heuristics, comparing the mean relative deviation of the makespan, the computational time and the percentage of successes of each method. Results show that the heuristics were capable of providing interesting results.
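A hedged sketch of how such a setup-time database might be generated for a Taillard-like instance follows; the four range levels mirror the abstract, while the instance sizes, random seed and the use of the mean processing time as the reference value are assumptions of this sketch.

```python
import random

# Illustrative generation of sequence- and machine-dependent setup times as a
# fraction of processing times, mirroring the levels quoted in the abstract
# (1%-10%, 1%-50%, 1%-100%, 1%-125%). Instance sizes and the use of the mean
# processing time as the reference value are assumptions of this sketch.

def generate_setup_times(processing, level, rng):
    """processing[j][m]: processing time of job j on machine m.
    Returns setup[m][i][j]: setup on machine m when job j follows job i."""
    n, m = len(processing), len(processing[0])
    mean_p = sum(sum(row) for row in processing) / (n * m)
    return [[[rng.uniform(0.01, level) * mean_p for _ in range(n)]
             for _ in range(n)] for _ in range(m)]

if __name__ == "__main__":
    rng = random.Random(42)
    jobs, machines = 5, 3
    processing = [[rng.randint(1, 99) for _ in range(machines)] for _ in range(jobs)]
    for level in (0.10, 0.50, 1.00, 1.25):
        setup = generate_setup_times(processing, level, rng)
        print(f"level {level:.2f}: example setup S[0][1][2] = {setup[0][1][2]:.2f}")
```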

  19. Technology Time Machine 2012

    DEFF Research Database (Denmark)

    Lehner, Wolfgang; Fettweis, Gerhard; Fitzek, Frank

    2013-01-01

The IEEE Technology Time Machine (TTM) is a unique event for industry leaders, academics, and decision making government officials who direct R&D activities, plan research programs or manage portfolios of research activities. This report covers the main topics of the 2nd Symposium of future technologies. The Symposium brought together world renowned experts to discuss the evolutionary and revolutionary advances in technology landscapes as we look towards 2020 and beyond. TTM facilitated informal discussions among the participants and speakers thus providing an excellent opportunity for informal interaction between attendees, senior business leaders, world-renowned innovators, and the press. The goal of the Symposium is to discover key critical innovations across technologies which will alter the research and application space of the future. Topics covered the future of Wireless Technology, Smart...

  20. Estimation of functional preparedness of young handballers in setup time

    Directory of Open Access Journals (Sweden)

    Favoritоv V.N.

    2012-11-01

The dynamics of the level of functional preparedness of young handballers during the setup period is shown. Alterations to the educational-training process were planned with the purpose of optimizing their functional preparedness. Eleven youths of calendar age 14-15 years were included in the research. The computer program "SVSM" was applied to determine their level of functional preparedness. It was found that at the beginning of the setup period the functional preparedness of 18.18% of all respondents was characterized by a "middle" level, 27.27% by a level "below average", and 54.54% by a level "above average". At the end of the setup period, sportsmen with a functional preparedness level "above average" prevailed (63.63%), 27.27% had a "high" level, and no sportsmen with a level below average were observed. The efficiency of the proposed system of training sessions for optimizing the functional preparedness of young handballers is demonstrated.

  1. Heuristics methods for the flow shop scheduling problem with separated setup times

    Directory of Open Access Journals (Sweden)

    Marcelo Seido Nagano

    2012-06-01

This paper deals with the permutation flow shop scheduling problem with separated machine setup times. As a result of an investigation of the problem characteristics, four heuristic methods are proposed, with solution-construction procedures based on an analogy with the asymmetric traveling salesman problem, with the objective of minimizing makespan. Experimental results show that one of the new heuristic methods provides high-quality solutions in comparison with the evaluated methods from the literature.
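The makespan these heuristics minimize can be computed with the usual forward recurrence; the sketch below assumes separated, sequence-independent and anticipatory setups (a machine may be set up for its next job while that job is still upstream), and the job data are hypothetical.

```python
# Makespan of a permutation flow shop with separated, sequence-independent setup
# times, assuming setups are anticipatory (a machine may be set up for the next
# job while that job is still on the previous machine). Job data are hypothetical.

def makespan(sequence, proc, setup):
    """proc[j][k], setup[j][k]: processing/setup time of job j on machine k."""
    m = len(proc[sequence[0]])
    machine_free = [0.0] * m      # time each machine finishes its previous job
    prev_done = 0.0               # completion of the current job on the previous machine
    for job in sequence:
        prev_done = 0.0
        for k in range(m):
            setup_done = machine_free[k] + setup[job][k]   # setup may overlap upstream work
            start = max(setup_done, prev_done)
            machine_free[k] = start + proc[job][k]
            prev_done = machine_free[k]
    return prev_done

if __name__ == "__main__":
    proc = {"J1": [4, 3, 2], "J2": [2, 5, 3], "J3": [3, 2, 4]}
    setup = {"J1": [1, 1, 2], "J2": [2, 1, 1], "J3": [1, 2, 1]}
    print(makespan(["J2", "J1", "J3"], proc, setup))   # prints 21 for this toy data
```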

  2. Integrating the sequence dependent setup time open shop problem and preventive maintenance policies

    Directory of Open Access Journals (Sweden)

    K. Naboureh

    2016-09-01

In most industrial environments it is usually assumed that machines are available throughout the planning horizon, but in real situations machines may be unavailable due to scheduled preventive maintenance, where the periods of unavailability are known in advance. The main idea of this paper is to consider different preventive maintenance policies on machines for the open shop scheduling problem (OSSP) with sequence-dependent setup times (SDST), using an immune algorithm. The preventive maintenance (PM) policies are planned to maximize the availability of machines or to keep a minimum level of reliability throughout the production horizon. The objective function of the paper is to minimize makespan. The proposed algorithm is extensively compared with six adaptations of existing heuristic and meta-heuristic methods for the problem, using data sets from benchmarks based on Taillard's instances with some adjustments. The results show that the proposed algorithm outperforms the other algorithms for this problem.

  3. Relative performance of priority rules for hybrid flow shop scheduling with setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2015-12-01

This paper focuses on the hybrid flow shop scheduling problem with explicit and sequence-independent setup times. This production environment is a multistage system with unidirectional flow of jobs, wherein each stage may contain multiple machines available for processing. The optimized measure was the total time to complete the schedule (makespan). The aim was to propose new priority rules to support the scheduling decision and to evaluate their relative performance in the considered production system by the percentage of successes, relative deviation, standard deviation of the relative deviation, and average CPU time. Computational experiments indicated that the rules using the ascending order of the sum of processing and setup times of the first stage (SPT1 and SPT1_ERD) performed better, together reaching more than 56% of successes.
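The best-performing rule, SPT1, simply orders jobs by the ascending sum of their processing and setup times at the first stage; a minimal sketch follows, with hypothetical job data and without modeling tie-breaking or the SPT1_ERD variant.

```python
# Minimal sketch of the SPT1 priority rule: order jobs by the ascending sum of
# their processing and setup times at the first stage. Job data are hypothetical,
# and tie-breaking / the SPT1_ERD variant are not modeled.

def spt1_order(jobs, proc_stage1, setup_stage1):
    """Return the jobs sorted by processing + setup time at stage 1."""
    return sorted(jobs, key=lambda j: proc_stage1[j] + setup_stage1[j])

if __name__ == "__main__":
    jobs = ["J1", "J2", "J3", "J4"]
    proc_stage1 = {"J1": 7, "J2": 3, "J3": 5, "J4": 4}
    setup_stage1 = {"J1": 1, "J2": 2, "J3": 1, "J4": 3}
    print(spt1_order(jobs, proc_stage1, setup_stage1))  # ['J2', 'J3', 'J4', 'J1']
```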

  4. Build your own time machine

    CERN Document Server

    Clegg, Brian

    2012-01-01

There is no physical law to prevent time travel, nothing in physics to say it is impossible. So who is to say it can't be done? In Build Your Own Time Machine, acclaimed science writer Brian Clegg takes inspiration from his childhood heroes, Doctor Who and H. G. Wells, to explain the nature of time. How do we understand it and why do we measure it the way we do? How did the theories of one man change the way time was perceived by the world? Why wouldn't H. G. Wells's time machine have worked? And what would we need to do to make a real one? Build Your Own Time Machine explores the amazing possib

  5. Classical time machine

    International Nuclear Information System (INIS)

    Kapuscik, E.

    1992-02-01

Generalizing the concepts of the Einstein radiolocation method and, as a consequence, the special relativity transformation rules, we find that the time flow in the moving system depends on the direction of motion. 3 refs. (author)

  6. A proposal simulated annealing algorithm for proportional parallel flow shops with separated setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2014-08-01

This article addresses the problem of minimizing makespan on two parallel flow shops with proportional processing and setup times. The setup times are separated and sequence-independent. The parallel flow shop scheduling problem is a specific case of the well-known hybrid flow shop, characterized by a multistage production system with more than one machine working in parallel at each stage. This situation is very common in various kinds of companies, such as the chemical, electronics, automotive, pharmaceutical and food industries. This work proposes six simulated annealing algorithms, their perturbation schemes and an algorithm for initial sequence generation. The study can be classified as applied research in nature, exploratory in its objectives and experimental in its procedures, with a quantitative approach. The proposed algorithms were effective with respect to solution quality and computationally efficient. Results of an analysis of variance (ANOVA) revealed no significant difference between the schemes in terms of makespan. The PS4 scheme, which moves a subsequence of jobs, is suggested because it provides the best percentage of success. It was also found that there is a significant difference between the results of the algorithms for each value of the proportionality factor of the processing and setup times of the flow shops.
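The subsequence move credited to the PS4 scheme could look like the sketch below; the flat permutation encoding and the subsequence length bounds are assumptions of this illustration, not details from the paper.

```python
import random

# Illustrative sketch of a "move a subsequence of jobs" perturbation, in the
# spirit of the PS4 scheme recommended above. The encoding (a flat job
# permutation) and the subsequence length bounds are assumptions of this sketch.

def move_subsequence(sequence, rng, max_len=3):
    """Cut a random subsequence out of the permutation and reinsert it elsewhere."""
    seq = list(sequence)
    length = rng.randint(1, min(max_len, len(seq) - 1))
    start = rng.randrange(len(seq) - length + 1)
    block = seq[start:start + length]
    rest = seq[:start] + seq[start + length:]
    insert_at = rng.randrange(len(rest) + 1)
    return rest[:insert_at] + block + rest[insert_at:]

if __name__ == "__main__":
    rng = random.Random(7)
    current = ["J1", "J2", "J3", "J4", "J5", "J6"]
    for _ in range(3):
        current = move_subsequence(current, rng)   # one SA perturbation step
        print(current)
```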

  7. Discrete time analysis of a repairable machine

    OpenAIRE

    Alfa, Attahiru Sule; Castro, I. T.

    2002-01-01

    We consider, in discrete time, a single machine system that operates for a period of time represented by a general distribution. This machine is subject to failures during operations and the occurrence of these failures depends on how many times the machine has previously failed. Some failures are repairable and the repair times may or may not depend on the number of times the machine was previously repaired. Repair times also have a general distribution. The operating times...

  8. Reduction In Setup Time By Single Minute Exchange Of Dies SMED Methodology

    Directory of Open Access Journals (Sweden)

    Pallavi A. Gade

    2015-08-01

Life is a race: if you don't chase it, someone will definitely chase you and go ahead. Hence, to survive in today's business world, every manufacturer has to have ideas and plans for improvement. The market scenario has changed markedly since the 1990s, so that every manufacturer must cope with global competition, demand for short lead times, demand for variety, small lot sizes and the proliferation of OEMs. If the frequency of delivery has to be increased without compromising quality, Single Minute Exchange of Dies is the answer. Single Minute Exchange of Dies does not apply only to bottleneck machines; it is to be implemented company-wide, and the aim must be to bring all setup times to less than ten minutes. In this paper some techniques, the basic procedure and the problems faced by companies are discussed, and solutions for them are suggested.

  9. A hybrid algorithm for flexible job-shop scheduling problem with setup times

    Directory of Open Access Journals (Sweden)

    Ameni Azzouz

    2017-01-01

The job-shop scheduling problem is one of the most important fields in manufacturing optimization, where a set of n jobs must be processed on a set of m specified machines. Each job consists of a specific set of operations, which have to be processed according to a given order. The flexible job shop problem (FJSP) is a generalization of the above-mentioned problem, where each operation can be processed by a set of resources and has a processing time depending on the resource used. FJSP problems cover two difficulties, namely the machine assignment problem and the operation sequencing problem. This paper addresses the flexible job-shop scheduling problem with sequence-dependent setup times to minimize two kinds of objective functions: the makespan and a bi-criteria objective function. For that, we propose a hybrid algorithm based on a genetic algorithm (GA) and variable neighbourhood search (VNS) to solve this problem. To evaluate the performance of our algorithm, we compare our results with other methods existing in the literature. All the results show the superiority of our algorithm over the available ones in terms of solution quality.

  10. An Improved Particle Swarm Optimization for Selective Single Machine Scheduling with Sequence Dependent Setup Costs and Downstream Demands

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

This paper investigates a special single machine scheduling problem derived from practical industries, namely the selective single machine scheduling problem with sequence-dependent setup costs and downstream demands. Different from traditional single machine scheduling, this problem further takes into account the selection of jobs and the demands of downstream lines. The problem is formulated as a mixed integer linear programming model, and an improved particle swarm optimization (PSO) is proposed to solve it. To enhance the exploitation ability of the PSO, an adaptive neighborhood search with different search depths is developed based on the decision characteristics of the problem. To improve the search diversity and make the proposed PSO algorithm capable of escaping local optima, an elite solution pool is introduced into the PSO. Computational results based on extensive test instances show that the proposed PSO can obtain optimal solutions for small-size problems and outperforms CPLEX and some other powerful algorithms for large-size problems.

  11. The Time Machine in Our Mind

    Science.gov (United States)

    Stocker, Kurt

    2012-01-01

    This article provides the first comprehensive conceptual account for the imagistic mental machinery that allows us to travel through time--for the time machine in our mind. It is argued that language reveals this imagistic machine and how we use it. Findings from a range of cognitive fields are theoretically unified and a recent proposal about…

  12. Integration of micro milling highspeed spindle on a microEDM-milling machine set-up

    DEFF Research Database (Denmark)

    De Grave, Arnaud; Hansen, Hans Nørgaard; Andolfatto, Loic

    2009-01-01

In order to cope with repositioning errors and to combine the fast removal rate of micro milling with the precision and small feature size achievable with micro EDM milling, a hybrid micro-milling and micro-EDM milling centre was built and tested. The aim was to build an affordable set-up, easy ... by micro milling. Examples of test parts are shown and used as an experimental validation.

  13. An obstacle to building a time machine

    International Nuclear Information System (INIS)

    Carroll, S.M.; Farhi, E.; Guth, A.H.

    1992-01-01

    Gott has shown that a spacetime with two infinite parallel cosmic strings passing each other with sufficient velocity contains closed timelike curves. We discuss an attempt to build such a time machine. Using the energy-momentum conservation laws in the equivalent (2+1)-dimensional theory, we explicitly construct the spacetime representing the decay of one gravitating particle into two. We find that there is never enough mass in an open universe to build the time machine from the products of decays of stationary particles. More generally, the Gott time machine cannot exist in any open (2+1)-dimensional universe for which the total momentum is timelike

  14. An economic production model for time dependent demand with rework and multiple production setups

    Directory of Open Access Journals (Sweden)

    S.R. Singh

    2014-04-01

In this paper, we present a model for time-dependent demand with multiple production setups and a rework setup. Production is demand dependent and greater than the demand rate. The production facility produces items in m production setups and one rework setup (an (m, 1) policy). A major reason for reverse logistics and green supply chains is rework, since it reduces the cost of production and other ecological problems. Most researchers have developed rework models without deteriorating items. A numerical example and a sensitivity analysis are presented to illustrate the model.

  15. Machine-Checkable Timed CSP

    Science.gov (United States)

    Goethel, Thomas; Glesner, Sabine

    2009-01-01

The correctness of safety-critical embedded software is crucial, and non-functional properties like deadlock-freedom and real-time constraints are particularly important. The real-time calculus Timed Communicating Sequential Processes (CSP) is capable of expressing such properties and can therefore be used to verify embedded software. In this paper, we present our formalization of Timed CSP in the Isabelle/HOL theorem prover, which we have formulated as an operational coalgebraic semantics together with bisimulation equivalences and coalgebraic invariants. Furthermore, we apply these techniques to an abstract specification with real-time constraints, which is the basis for current work in which we verify the components of a simple real-time operating system deployed on a satellite.

  16. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    Science.gov (United States)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and the lots have to be assigned to unrelated parallel machines for processing. In one version of the problem the maximum machine completion time should be minimized; in another version the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is assumed to be either continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.
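For the continuous, max-completion-time version, one natural scheme is a bisection on the makespan: for a trial value T, each machine absorbs the largest lot within its bounds that its increasing time oracle allows by T, and T is feasible if those lots cover the required quantity. The sketch below follows that idea under a continuity assumption; the oracles and bounds are hypothetical and this is not the authors' algorithm.

```python
# Illustrative bisection scheme for the continuous max-completion-time version:
# for a trial makespan T, each machine absorbs the largest lot within its bounds
# that it can finish by T (found by bisection on its increasing time oracle);
# T is feasible if those lots cover the required quantity. The oracles and bounds
# below are hypothetical, and this is a sketch, not the paper's algorithm.

def max_lot_within(time_oracle, lo, hi, T, iters=60):
    """Largest lot size in [lo, hi] with time_oracle(size) <= T, or None."""
    if time_oracle(lo) > T:
        return None
    if time_oracle(hi) <= T:
        return hi
    for _ in range(iters):                 # bisection on an increasing function
        mid = (lo + hi) / 2.0
        if time_oracle(mid) <= T:
            lo = mid
        else:
            hi = mid
    return lo

def min_makespan(demand, machines, iters=60):
    """machines: list of (time_oracle, lower_bound, upper_bound); assumes feasibility."""
    lo, hi = 0.0, max(o(u) for o, _, u in machines)
    for _ in range(iters):
        T = (lo + hi) / 2.0
        covered = sum(q for q in (max_lot_within(o, l, u, T) for o, l, u in machines)
                      if q is not None)
        if covered >= demand:
            hi = T
        else:
            lo = T
    return hi

if __name__ == "__main__":
    machines = [
        (lambda q: 2.0 * q, 1.0, 50.0),          # hypothetical fast machine
        (lambda q: 5.0 + 3.0 * q, 2.0, 40.0),    # hypothetical slower machine with an offset
    ]
    print(round(min_makespan(80.0, machines), 3))   # about 98 for this toy data
```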

  17. Anesthesia machine checkout and room setup: a randomized, single-blind, comparison of two teaching modalities.

    Science.gov (United States)

    Spofford, Christina M; Bayman, Emine O; Szeluga, Debra J; From, Robert P

    2012-01-01

Novel methods of teaching are needed to enhance the efficiency of academic anesthesia departments as well as to provide approaches to learning that are aligned with current trends and advances in technology. A video was produced that taught the key elements of anesthesia machine checkout and room setup. Novice learners were randomly assigned to receive either the new video format or the traditional lecture-based format for this topic during their regularly scheduled lecture series. The primary outcome was the difference in written examination score before and after teaching between the two groups. The secondary outcome was the satisfaction score of the trainees in the two groups. Forty-two students assigned to the video group and 36 students assigned to the lecture group completed the study. Students in each group had similar interest in anesthesia, pre-test scores, post-test scores, and final exam scores. The median post-test to pre-test difference was greater in the video group (3.5 (3.0-5.0) vs 2.5 (2.0-3.0) for the video and lecture groups, respectively; p = 0.002). Despite improved test scores, students reported higher satisfaction with the traditional, lecture-based format (22.0 (18.0-24.0) vs 24.0 (20.0-28.0) for the video and lecture groups, respectively). Test scores improved more among students in the video-based teaching group; however, students rated traditional, live lectures higher than the newer video-based teaching.

  18. Solution Approaches for the Parallel Identical Machine Scheduling Problem with Sequence Dependent Setups

    National Research Council Canada - National Science Library

    Anderson, Bradley

    2002-01-01

    ... delivery is an important scheduling objective in the just-in-time (JIT) environment. Items produced too early incur holding costs, while items produced too late incur costs in the form of dissatisfied customers...

  19. Black holes, wormholes and time machines

    CERN Document Server

    Al-Khalili, Jim

    2011-01-01

    Bringing the material up to date, Black Holes, Wormholes and Time Machines, Second Edition captures the new ideas and discoveries made in physics since the publication of the best-selling first edition. While retaining the popular format and style of its predecessor, this edition explores the latest developments in high-energy astroparticle physics and Big Bang cosmology.The book continues to make the ideas and theories of modern physics easily understood by anyone, from researchers to students to general science enthusiasts. Taking you on a journey through space and time, author Jim Al-Khalil

  20. Aliens and time in the machine age

    Science.gov (United States)

    Brake, Mark; Hook, Neil

    2006-12-01

    The 19th century saw sweeping changes for the development of astrobiology, both in the constituency of empirical science encroaching upon all aspects of life and in the evolution of ideas, with Lyell's Principles of Geology radically raising expectation of the true age of the Earth and the drama of Darwinism questioning biblically literalist accounts of natural history. This paper considers the popular culture spun on the crackling loom of the emergent aspects of astrobiology of the day: Edward Bulwer-Lytton's The Coming Race (1871), which foretold the race of the future, and satirist Samuel Butler's anticipation of machine intelligence, `Darwin Among the Machines', in his Erewhon (1872). Finally, we look at the way Darwin, Huxley and natural selection travelled into space with French astronomer Camille Flammarion's immensely popular Récits de l'infini (Stories of Infinity, 1872), and the social Darwinism of H.G. Wells' The Time Machine (1895) and The War of the Worlds (1898). These works of popular culture presented an effective and inspiring communication of science; their crucial discourse was the reducible gap between the new worlds uncovered by science and exploration and the fantastic strange worlds of the imagination. As such they exemplify a way in which the culture and science of popular astrobiology can be fused.

  1. Analysis of dispatching rules in a stochastic dynamic job shop manufacturing system with sequence-dependent setup times

    Science.gov (United States)

    Sharma, Pankaj; Jain, Ajai

    2014-12-01

Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of the makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time performance measures. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due-date tightness factor of 3, a shop utilization of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
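The SIMSET rule highlighted above picks, among the jobs queued at a machine, the one with the shortest sequence-dependent setup time given the job just completed; a minimal sketch with hypothetical data follows.

```python
# Sketch of the SIMSET (shortest setup time) dispatching rule: from the jobs
# queued at a machine, pick the one with the smallest sequence-dependent setup
# time given the job just completed. All data are hypothetical.

def simset_next_job(queue, last_job, setup):
    """Return the queued job with the shortest setup after `last_job`."""
    return min(queue, key=lambda job: setup[last_job][job])

if __name__ == "__main__":
    setup = {
        "J0": {"J1": 5, "J2": 2, "J3": 4},
        "J2": {"J1": 3, "J3": 1},
    }
    queue = ["J1", "J2", "J3"]
    first = simset_next_job(queue, "J0", setup)        # -> J2 (setup 2)
    queue.remove(first)
    print(first, simset_next_job(queue, first, setup))  # -> J2 J3 (setup 1)
```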

  2. A comparison of genetic algorithm and artificial bee colony approaches in solving blocking hybrid flowshop scheduling problem with sequence dependent setup/changeover times

    Directory of Open Access Journals (Sweden)

    Pongpan Nakkaew

    2016-06-01

In manufacturing processes where efficiency is crucial in order to remain competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured such that each production stage may contain multiple processing units in parallel (a hybrid flowshop). Moreover, along with precedence conditions, sequence-dependent setup times may exist. Finally, when there is no buffer, a machine is said to be blocked if the next stage that should handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup/Changeover Times, it is usually not possible to find the exact best solution for optimization objectives such as minimization of the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we investigate comparatively the effectiveness of two approaches: a genetic algorithm (GA) and an artificial bee colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in the same manner, resembles the way different types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multi-core processors while eliminating the need to screen for the optimal parameters of both algorithms in advance.

  3. Stochastic integrated vendor–buyer model with unstable lead time and setup cost

    Directory of Open Access Journals (Sweden)

    Chandra K. Jaggi

    2011-01-01

This paper presents a new vendor-buyer system in which the two sides have different objectives. The proposed method differs from previously published works since it considers different objectives for both sides. In this paper, the vendor's emphasis is on crashing the setup cost, which not only helps him compete in the market but also provides better services to his customers, while the buyer's aim is to reduce the lead time, which not only enables the buyer to fulfill the customers' demand on time but also helps him earn a good reputation in the market, or vice versa. In light of the above facts, an integrated vendor-buyer stochastic inventory model is developed. The proposed model considers two cases for demand during lead time: Case (i) complete demand information, and Case (ii) partial demand information. The proposed model jointly optimizes the buyer's order quantity and lead time along with the vendor's setup cost and the number of shipments. The results are demonstrated with the help of numerical examples.

  4. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    Science.gov (United States)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. Although simple rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform very well because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is the minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. It is clear from the experimental results that the composite dispatching rules formed by genetic programming perform better in minimizing mean flow time and mean tardiness than the others.

  5. Sleep Management on Multiple Machines for Energy and Flow Time

    DEFF Research Database (Denmark)

    Chan, Sze-Hang; Lam, Tak-Wah; Lee, Lap Kei

    2011-01-01

In large data centers, determining the right number of operating machines is often non-trivial, especially when the workload is unpredictable. Using too many machines would waste energy, while using too few would affect the performance. This paper extends the traditional study of online flow-time scheduling on multiple machines to take sleep management and energy into consideration. Specifically, we study online algorithms that can determine dynamically when and which subset of machines should wake up (or sleep), and how jobs are dispatched and scheduled. We consider schedules whose objective is to minimize the sum of flow time and energy, and obtain O(1)-competitive algorithms for two settings: one assumes machines running at a fixed speed, and the other allows dynamic speed scaling to further optimize energy usage. Like the previous work on the tradeoff between flow time and energy, the analysis...

  6. LEAR: a machine ahead of its time

    CERN Multimedia

    Katarina Anthony

    2012-01-01

Described as a “machine physicist's concert platform”, the Low Energy Antiproton Ring (LEAR) was everything at once: an accelerator, a storage ring, a decelerator, a cooler ring and a beam stretcher. 2012 marks the 30th anniversary of its start-up and an opportunity for the Bulletin to take a look back at the history of this remarkable machine.   This article is a tribute to Dieter Möhl, one of LEAR's founding fathers, who passed away at the end of May.   Kilian's graph shows the phase space density of antiprotons produced from 26 GeV protons vs. antiproton momentum. Note that this density is significantly higher at low momentum for a decelerated beam. (Graph published in the 1977 "Low Energy Antiproton Factory" paper.) Like most great CERN projects, LEAR began with a dream and a coffee between colleagues. The year was 1976, the coffee was shared by Kurt Kilian and Diete...

  7. Controlling Mechatronic Set-up Using Real-time Linux and CTC ++

    NARCIS (Netherlands)

    Broenink, Johannes F.; Jovanovic, D.S.; Hilderink, G.H.; van Amerongen, J.; Jonker, B.; Regtien, P.; Stramigioli, S.

    2002-01-01

    The development of control software for mechatronic systems is presented by means of a case study: a 2 DOF mechanical rotational set-up usable as a camera-positioning device. The control software is generated using the code generation facility of 20-SIM, thus guaranteeing the generated code being

  8. Alternated Prone and Supine Whole-Breast Irradiation Using IMRT: Setup Precision, Respiratory Movement and Treatment Time

    International Nuclear Information System (INIS)

    Veldeman, Liv; De Gersem, Werner; Speleers, Bruno; Truyens, Bart; Van Greveling, Annick; Van den Broecke, Rudy; De Neve, Wilfried

    2012-01-01

    Purpose: The objective of this study was to compare setup precision, respiration-related breast movement and treatment time between prone and supine positions for whole-breast irradiation. Methods and Materials: Ten patients with early-stage breast carcinoma after breast-conserving surgery were treated with prone and supine whole breast-irradiation in a daily alternating schedule. Setup precision was monitored using cone-beam computed tomography (CBCT) imaging. Respiration-related breast movement in the vertical direction was assessed by magnetic sensors. The time needed for patient setup and for the CBCT procedure, the beam time, and the length of the whole treatment slot were also recorded. Results: Random and systematic errors were not significantly different between positions in individual patients for each of the three axes (left-right, longitudinal, and vertical). Respiration-related movement was smaller in prone position, but about 80% of observations showed amplitudes <1 mm in both positions. Treatment slots were longer in prone position (21.2 ± 2.5 min) than in supine position (19.4 ± 0.8 min; p = 0.044). Conclusion: Comparison of setup precision between prone and supine position in the same patient showed no significant differences in random and systematic errors. Respiratory movement was smaller in prone position. The longer treatment slots in prone position can probably be attributed to the higher repositioning need.

  9. An efficient genetic algorithm for a hybrid flow shop scheduling problem with time lags and sequence-dependent setup time

    Directory of Open Access Journals (Sweden)

    Farahmand-Mehr Mohammad

    2014-01-01

In this paper, a hybrid flow shop scheduling problem is presented with a new approach considering time lags and sequence-dependent setup times in realistic situations. Since few works have addressed this field, the need for better solutions is a motivation to extend heuristic or meta-heuristic algorithms. This type of production system is found in industries such as food processing, chemical, textile, metallurgical, printed circuit board, and automobile manufacturing. A mixed integer linear programming (MILP) model is proposed to minimize the makespan. Since this problem is known to be NP-hard, a meta-heuristic algorithm, namely a genetic algorithm (GA), and three heuristic algorithms (Johnson, SPTCH and Palmer) are proposed. Numerical experiments of different sizes are carried out to evaluate the performance of the presented mathematical programming model and the designed GA in comparison to the heuristic algorithms and a benchmark algorithm. Computational results indicate that the designed GA can produce near-optimal solutions in a short computational time for problems of different sizes.

  10. Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2015-01-01

We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound for any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.
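The List Scheduling algorithm mentioned above can be sketched for the common "simple linear deterioration" model, in which a job started at time t takes b_j * t to process; machine availability constraints are reduced here to a single availability time per machine, and all numbers are hypothetical.

```python
# Sketch of List Scheduling for parallel machines with simple linear deterioration,
# i.e. the actual processing time of a job started at time t is b_j * t, so its
# completion time is t * (1 + b_j). Machine availability constraints are reduced
# here to a single availability time per machine; all data are hypothetical.

def list_scheduling(deterioration_rates, machine_available):
    """Assign jobs (in the given list order) to the machine that is free earliest."""
    ready = list(machine_available)           # current completion time of each machine
    assignment = []
    for j, b in enumerate(deterioration_rates):
        m = min(range(len(ready)), key=lambda i: ready[i])
        start = ready[m]
        ready[m] = start * (1.0 + b)          # completion = start + b * start
        assignment.append((f"job{j}", f"machine{m}", start, ready[m]))
    return assignment, max(ready)

if __name__ == "__main__":
    rates = [0.5, 0.2, 0.8, 0.3]              # hypothetical deterioration rates b_j
    available = [1.0, 2.0]                    # machines become available at these times
    schedule, makespan = list_scheduling(rates, available)
    for row in schedule:
        print(row)
    print("makespan:", makespan)
```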

  11. Time-resolved soft x-ray absorption setup using multi-bunch operation modes at synchrotrons

    International Nuclear Information System (INIS)

    Stebel, L.; Sigalotti, P.; Ressel, B.; Cautero, G.; Malvestuto, M.; Capogrosso, V.; Bondino, F.; Magnano, E.; Parmigiani, F.

    2011-01-01

Here, we report on a novel experimental apparatus for performing time-resolved soft x-ray absorption spectroscopy on the sub-ns time scale using non-hybrid multi-bunch mode synchrotron radiation. The present setup is based on a variable repetition rate Ti:sapphire laser (pump pulse) synchronized with the ∼500 MHz x-ray synchrotron radiation bunches and on a detection system that discriminates and singles out the significant x-ray photon pulses by means of a custom-made photon counting unit. The whole setup has been validated by measuring the time evolution of the L3 absorption edge during the melting and the solidification of a Ge single crystal irradiated by an intense ultrafast laser pulse. These results pave the way for performing synchrotron time-resolved experiments in the sub-ns time domain with variable repetition rate, exploiting the full flux of the synchrotron radiation.

  12. General bulk service queueing system with N-policy, multiple vacations, setup time and server breakdown without interruption

    Science.gov (United States)

    Sasikala, S.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

In this paper, we have considered an M^X/(a,b)/1 queueing system with server breakdown without interruption, multiple vacations, setup times and N-policy. After a batch of service, if the size of the queue is ξ (< a), the server leaves for a vacation. After a vacation, if the server finds at least N customers waiting for service, then the server needs a setup time to start the service. After a batch of service, if the number of waiting customers in the queue is ξ (≥ a), then the server serves a batch of min(ξ, b) customers, where b ≥ a. We derive the probability generating function of the queue length at an arbitrary time epoch. Further, we obtain some important performance measures.

  13. Dwell time adjustment for focused ion beam machining

    International Nuclear Information System (INIS)

    Taniguchi, Jun; Satake, Shin-ichi; Oosumi, Takaki; Fukushige, Akihisa; Kogo, Yasuo

    2013-01-01

Focused ion beam (FIB) machining is potentially useful for micro/nano fabrication of hard brittle materials, because the removal method involves physical sputtering. Usually, micro/nano scale patterning of hard brittle materials is very difficult to achieve by mechanical polishing or dry etching. Furthermore, in most reported examples, FIB machining has been applied to silicon substrates in a limited range of shapes. Therefore, a versatile method for FIB machining is required. We previously established dwell time adjustment for mechanical polishing. The dwell time adjustment is calculated by using a convolution model derived from Preston's hypothesis. More specifically, the target removal shape is a convolution of the unit removal shape, and the dwell time is calculated by means of one of four algorithms. We investigated these algorithms for dwell time adjustment in FIB machining, and we found that a combination of a fast Fourier transform calculation technique and a constraint-type calculation is suitable. By applying this algorithm, we succeeded in machining a spherical lens shape with a diameter of 2.93 μm and a depth of 203 nm in a glassy carbon substrate by means of FIB with dwell time adjustment
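The convolution model behind dwell-time adjustment (the target removal depth equals the unit removal footprint convolved with the dwell-time map) can be sketched with a regularized FFT deconvolution followed by a non-negativity constraint; the Gaussian footprint, grid and regularization constant below are assumptions, not the authors' exact algorithm or values.

```python
import numpy as np

# Sketch of the convolution model behind dwell-time adjustment: target removal
# depth = (unit removal footprint) convolved with (dwell-time map), solved here by
# regularized FFT deconvolution followed by clipping to non-negative dwell times.
# The Gaussian footprint, grid and regularization constant are assumptions.

def dwell_time_fft(target, footprint, eps=1e-3):
    """Estimate a non-negative dwell-time map such that footprint (*) dwell ~= target."""
    T = np.fft.fft2(target)
    F = np.fft.fft2(np.fft.ifftshift(footprint))
    dwell = np.real(np.fft.ifft2(T * np.conj(F) / (np.abs(F) ** 2 + eps)))
    return np.clip(dwell, 0.0, None)          # constraint: dwell time cannot be negative

if __name__ == "__main__":
    n = 64
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    footprint = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))   # toy Gaussian beam footprint
    footprint /= footprint.sum()
    target = np.where(x ** 2 + y ** 2 < 15 ** 2, 200.0, 0.0)  # toy lens-like removal target (nm)
    dwell = dwell_time_fft(target, footprint)
    achieved = np.real(np.fft.ifft2(np.fft.fft2(dwell) * np.fft.fft2(np.fft.ifftshift(footprint))))
    print("max residual (nm):", float(np.abs(achieved - target).max()))
```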

  14. Alternated prone and supine whole-breast irradiation using IMRT: setup precision, respiratory movement and treatment time.

    Science.gov (United States)

    Veldeman, Liv; De Gersem, Werner; Speleers, Bruno; Truyens, Bart; Van Greveling, Annick; Van den Broecke, Rudy; De Neve, Wilfried

    2012-04-01

    The objective of this study was to compare setup precision, respiration-related breast movement and treatment time between prone and supine positions for whole-breast irradiation. Ten patients with early-stage breast carcinoma after breast-conserving surgery were treated with prone and supine whole-breast irradiation in a daily alternating schedule. Setup precision was monitored using cone-beam computed tomography (CBCT) imaging. Respiration-related breast movement in the vertical direction was assessed by magnetic sensors. The time needed for patient setup and for the CBCT procedure, the beam time, and the length of the whole treatment slot were also recorded. Random and systematic errors were not significantly different between positions in individual patients for each of the three axes (left-right, longitudinal, and vertical). Respiration-related movement was smaller in prone position, although most of the observed amplitudes were small in both positions. The longer treatment slots in prone position can probably be attributed to the higher repositioning need. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Tool set for distributed real-time machine control

    Science.gov (United States)

    Carrott, Andrew J.; Wright, Christopher D.; West, Andrew A.; Harrison, Robert; Weston, Richard H.

    1997-01-01

    Demands for increased control capabilities require next generation manufacturing machines to comprise intelligent building elements, physically located at the point where the control functionality is required. Networks of modular intelligent controllers are increasingly designed into manufacturing machines and usable standards are slowly emerging. To implement a control system using off-the-shelf intelligent devices from multi-vendor sources requires a number of well defined activities, including (a) the specification and selection of interoperable control system components, (b) device independent application programming and (c) device configuration, management, monitoring and control. This paper briefly discusses the support for the above machine lifecycle activities through the development of an integrated computing environment populated with an extendable software toolset. The toolset supports machine builder activities such as initial control logic specification, logic analysis, machine modeling, mechanical verification, application programming, automatic code generation, simulation/test, version control, distributed run-time support and documentation. The environment itself consists of system management tools and a distributed object-oriented database which provides storage for the outputs from machine lifecycle activities and specific target control solutions.

  16. Time-Frequency Analysis of Signals Generated by Rotating Machines

    Directory of Open Access Journals (Sweden)

    R. Zetik

    1999-06-01

    Full Text Available This contribution is devoted to higher-order time-frequency analyses of signals. Firstly, time-frequency representations of higher order (TFRHO) are defined. Then the L-Wigner distribution (LWD) is given as a special case of TFRHO. Basic properties of the LWD are illustrated based on the analysis of mono-component and multi-component synthetic signals and acoustical signals generated by a rotating machine. The obtained results confirm the usefulness of applying the LWD for the purpose of rotating machine condition monitoring.
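
    To make the underlying representation concrete, the sketch below computes a plain discrete pseudo Wigner-Ville distribution, the second-order time-frequency representation that the L-Wigner distribution generalises to higher orders; the window length and test chirp are arbitrary assumptions, and the higher-order LWD itself is not implemented.

```python
import numpy as np

# Hedged sketch: discrete pseudo Wigner-Ville distribution W[n, k] of a signal x,
# i.e. the FFT over lag m of the local kernel x[n+m] * conj(x[n-m]). The L-Wigner
# distribution used in the paper is a higher-order generalisation of this quantity.

def pseudo_wvd(x, half_window=32):
    x = np.asarray(x, dtype=complex)
    n_samples, n_freq = len(x), 2 * half_window
    W = np.zeros((n_samples, n_freq))
    for n in range(n_samples):
        kernel = np.zeros(n_freq, dtype=complex)
        for m in range(-half_window + 1, half_window):
            if 0 <= n + m < n_samples and 0 <= n - m < n_samples:
                kernel[m % n_freq] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(kernel))
    return W

if __name__ == "__main__":
    t = np.arange(512) / 512.0
    chirp = np.exp(2j * np.pi * (50 * t + 100 * t ** 2))   # mono-component test signal
    print(pseudo_wvd(chirp).shape)                          # (time samples, frequency bins)
```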

  17. Synchronous machine parameter identification in frequency and time domain

    Directory of Open Access Journals (Sweden)

    Hasni M.

    2007-01-01

    Full Text Available This paper presents the results of a frequency- and time-domain identification procedure to estimate the linear parameters of a salient-pole synchronous machine at standstill. The objective of this study is to use several input signals to identify the model structure and parameters of a salient-pole synchronous machine from standstill test data. The procedure consists of defining and conducting the standstill tests and identifying the model structure. The signals used for identification are the different excitation voltages at standstill and the currents flowing in the different windings. We estimate the parameters of the operational impedances, in other words the reactances and the time constants. The tests were carried out on a 1.5 kVA, 380 V, 1500 rpm synchronous machine.

  18. Time-series prediction and applications a machine intelligence approach

    CERN Document Server

    Konar, Amit

    2017-01-01

    This book presents machine learning and type-2 fuzzy sets for the prediction of time-series with a particular focus on business forecasting applications. It also proposes new uncertainty management techniques in an economic time-series using type-2 fuzzy sets for prediction of the time-series at a given time point from its preceding value in fluctuating business environments. It employs machine learning to determine repetitively occurring similar structural patterns in the time-series and uses a stochastic automaton to predict the most probabilistic structure at a given partition of the time-series. Such predictions help in determining probabilistic moves in a stock index time-series. Primarily written for graduate students and researchers in computer science, the book is equally useful for researchers/professionals in business intelligence and stock index prediction. A background of undergraduate level mathematics is presumed, although not mandatory, for most of the sections. Exercises with tips are provided at...

  19. MX/G1, G2/1 with Setup Time, Bernoulli Vacation, Break Down, and Delayed Repair

    Directory of Open Access Journals (Sweden)

    G. Ayyappan

    2014-01-01

    Full Text Available We present a single server in which customers arrive in batches and the server provides service one by one. The server provides two heterogeneous service stages such that the service times of the two stages are different and both stages are mandatory for all arriving customers, in such a way that, after the completion of the first stage, the second stage must also be provided to the customers. The server may be subject to random breakdowns with breakdown rate λ; after a breakdown, the server must be repaired, but it has to wait before being repaired, and this waiting time is called the delay time. Both the delay time and the repair time follow exponential distributions. Upon the completion of the second-stage service, the server will go on vacation with probability p or stay in the system with probability 1-p, if any customers are present. The vacation time follows a general (arbitrary) distribution. Before providing service to a new customer or a batch of customers that joins the system in the renewed busy period, the server goes through a random setup time process such that the setup time follows an exponential distribution. We discuss the transient behavior and the corresponding steady-state results with the performance measures of the model.

  20. Designing and commissioning of a setup for timing-jitter measurements using electro-optic temporal decoding

    International Nuclear Information System (INIS)

    Borissenko, Dennis

    2016-12-01

    Precise measurements of the arrival time jitter between the ionization laser, used to create the plasma, and the driver beam in the PWFA setup of the FLASHForward project are of high interest for the operation and optimization of the experiment. In this thesis, an electro-optic temporal decoding (EOTD) setup with near crossed polarizer detection scheme is presented, which can measure the timing-jitter to an accuracy of around 30 fs. This result was obtained during several measurements conducted at the coherent transition radiation beamline CTR141 at FLASH, using a 100 μm thick GaP crystal and coherent diffraction/transition radiation, generated from the FLASH1 electron bunches. Measurements were performed during long and short electron bunch operation at FLASH, showing that best results are obtained with CDR from long electron bunches. Utilizing CTR led to a higher EO signal and "over-compensation" of the SHG background level during the measurement, which resulted in a double-peak structure of the observed THz pulses. To resolve the single-cycle nature of these THz pulses, the SHG background had to be adjusted properly. Furthermore, EOTD measurements during a short bunch operation run at FLASH exhibited strong oscillations in the EO signal, which were suspected to come either from internal lattice resonances of the EO crystal or internal reflections, or excitation of water vapor in the humid air in the laboratory. The oscillations spoiled the observed EOTD trace leading to no sensible measurements of the arrival time jitter during this short bunch operation. To evaluate the capabilities of the setup for monitoring the timing jitter of short PWFA accelerated electron bunches or very short driver bunches at FLASHForward, further investigations on the observed oscillations in the EOTD traces have to be performed during short bunch operation at FLASH with different crystals and under vacuum conditions, to understand the oscillations of the EO signal better.

  1. Designing and commissioning of a setup for timing-jitter measurements using electro-optic temporal decoding

    Energy Technology Data Exchange (ETDEWEB)

    Borissenko, Dennis

    2016-12-15

    Precise measurements of the arrival time jitter between the ionization laser, used to create the plasma, and the driver beam in the PWFA setup of the FLASHForward project are of high interest for the operation and optimization of the experiment. In this thesis, an electro-optic temporal decoding (EOTD) setup with near crossed polarizer detection scheme is presented, which can measure the timing-jitter to an accuracy of around 30 fs. This result was obtained during several measurements conducted at the coherent transition radiation beamline CTR141 at FLASH, using a 100 μm thick GaP crystal and coherent diffraction/transition radiation, generated from the FLASH1 electron bunches. Measurements were performed during long and short electron bunch operation at FLASH, showing that best results are obtained with CDR from long electron bunches. Utilizing CTR led to a higher EO signal and "over-compensation" of the SHG background level during the measurement, which resulted in a double-peak structure of the observed THz pulses. To resolve the single-cycle nature of these THz pulses, the SHG background had to be adjusted properly. Furthermore, EOTD measurements during a short bunch operation run at FLASH exhibited strong oscillations in the EO signal, which were suspected to come either from internal lattice resonances of the EO crystal or internal reflections, or excitation of water vapor in the humid air in the laboratory. The oscillations spoiled the observed EOTD trace leading to no sensible measurements of the arrival time jitter during this short bunch operation. To evaluate the capabilities of the setup for monitoring the timing jitter of short PWFA accelerated electron bunches or very short driver bunches at FLASHForward, further investigations on the observed oscillations in the EOTD traces have to be performed during short bunch operation at FLASH with different crystals and under vacuum conditions, to understand the oscillations of the EO signal better.

  2. A Flattened Hierarchical Scheduler for Real-Time Virtual Machines

    OpenAIRE

    Drescher, Michael Stuart

    2015-01-01

    The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates it...

  3. Software architecture for time-constrained machine vision applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.

  4. An economic production model for deteriorating items and time dependent demand with rework and multiple production setups

    Science.gov (United States)

    Uthayakumar, R.; Tharani, S.

    2017-12-01

    Recently, much emphasis has been given to studying the control and maintenance of production inventories of deteriorating items. Rework is one of the main issues in reverse logistics and green supply chains, since it can reduce production cost and environmental problems. Many researchers have focused on developing rework models, but few of them have developed models for deteriorating items. Due to this fact, we take up productivity and rework with deterioration as the major concern in this paper. In this paper, a production-inventory model for deteriorating items is considered in which one cycle has n production setups and one rework setup (an (n, 1) policy), with stock-dependent demand in case 1 and exponential demand in case 2. An effective iterative solution procedure is developed to determine the optimal time so that the total cost of the system is minimized. Numerical and sensitivity analyses are discussed to examine the outcome of the proposed solution procedure presented in this research.

  5. Digital instrumentation and management of dead time: first results on a NaI well-type detector setup.

    Science.gov (United States)

    Censier, B; Bobin, C; Bouchard, J; Aubineau-Lanièce, I

    2010-01-01

    The LNE-LNHB is engaged in a development program on digital instrumentation, the first step being the instrumentation of a NaI well-type detector set-up. The prototype acquisition card and its technical specifications are presented together with the first comparison with the classical NIM-based acquisition chain, for counting rates up to 100 kcps. The digital instrumentation is shown to be counting-loss free in this range. This validates the main option adopted in this project, namely the implementation of an extending dead time with live-time measurement already successfully used in the MTR2 NIM module developed at LNE-LNHB. Copyright 2010. Published by Elsevier Ltd.

  6. The advancement of regulation fee, budget system, and set-up time management

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J. S.; Choi, E. S.; Cho, J. I.; Jung, S. C.; Lee, J. H. [Caleb and Company, Seoul (Korea, Republic of)

    2001-07-15

    Analyze the government's charging fee amendment and suggest a national regulation fee system. Suggest a future business portfolio based on the current business analysis. Design an advanced budget code structure, the performance management of the project budget, and the service level agreement between divisions. Develop the time management and the methodology of the standard man-hour calculation.

  7. Time machines and traversable wormholes in modified theories of gravity

    Directory of Open Access Journals (Sweden)

    Lobo Francisco S.N.

    2013-09-01

    Full Text Available We review recent work on wormhole geometries in the context of modified theories of gravity, in particular in f(R) gravity, with a nonminimal curvature-matter coupling, and in the recently proposed hybrid metric-Palatini theory. In principle, the normal matter threading the throat can be shown to satisfy the energy conditions, and it is the higher-order curvature terms that sustain these wormhole geometries. We also briefly review the conversion of wormholes into time machines, explore several of the time travel paradoxes, and discuss possible remedies to these intriguing side-effects in wormhole physics.

  8. An integrated production inventory model of deteriorating items subject to random machine breakdown with a stochastic repair time

    Directory of Open Access Journals (Sweden)

    Huynh Trung Luong

    2016-11-01

    Full Text Available In a continuous manufacturing environment where production and consumption occur simultaneously, one of the biggest challenges is the efficient management of the production and inventory system. In order to manage the integrated production inventory system economically, it is necessary to identify the optimal production time and the optimal production reorder point that either maximize the profit or minimize the cost. In addition, during production the process has to go through some natural phenomena such as random breakdown of the machine, deterioration of the product over time, and uncertainty in repair time, which eventually create the possibility of shortage. In this situation, efficient management of inventory and production is crucial. This paper addresses the situation where a perishable (deteriorating) product is manufactured and consumed simultaneously, the demand for this product is stable over time, the machine that produces the product faces random failures, and the time to repair this machine is also uncertain. In order to describe this scenario more appropriately, the continuously reviewed Economic Production Quantity (EPQ) model is considered in this research work. The main goal is to identify the optimal production uptime and the production reorder point that minimize the expected value of the total cost consisting of machine setup, deterioration, inventory holding, shortage and corrective maintenance costs.

  9. Minimizing Experimental Setup Time and Effort at APS beamline 1-ID through Instrumentation Design

    Energy Technology Data Exchange (ETDEWEB)

    Benda, Erika; Almer, Jonathan; Kenesei, Peter; Mashayekhi, Ali; Okasinksi, John; Park, Jun-Sang; Ranay, Rogelio; Shastri, Sarvijt

    2016-01-01

    Sector 1-ID at the APS accommodates a number of different experimental techniques in the same spatial envelope of the E-hutch end station. These include high-energy small- and wide-angle X-ray scattering (SAXS and WAXS), high-energy diffraction microscopy (HEDM, both near- and far-field modes) and high-energy X-ray tomography. These techniques are frequently combined to allow the users to obtain multimodal data, often attaining 1 μm spatial resolution and <0.05° angular resolution. Furthermore, these techniques are utilized while the sample is thermo-mechanically loaded to mimic real operating conditions. The instrumentation required for each of these techniques and environments has been designed and configured in a modular way with a focus on stability and repeatability between changeovers. This approach allows the end station to be more versatile, capable of collecting multi-modal data in situ while reducing the time and effort typically required for setup and alignment, resulting in more efficient beam time use. Key instrumentation design features and layout of the end station are presented.

  10. Development of windows based software to analyze fluorescence decay with time-correlated single photon counting (TCSPC) setup

    International Nuclear Information System (INIS)

    Mallick, M.B.; Ravindranath, S.V.G.; Das, N.C.

    2002-07-01

    A VUV spectroscopic facility for studies in photophysics and photochemistry is being set up at the INDUS-I synchrotron source, CAT, Indore. For this purpose, a data acquisition system based on the time-correlated single photon counting method is being developed for fluorescence lifetime measurement. To estimate fluorescence lifetimes from the data collected with this system, a Windows-based program has been developed using Visual Basic 5.0. It uses the instrument response function (IRF) and the observed decay curve and estimates the parameters of a single-exponential decay by least-squares analysis, using the Marquardt method as the convergence mechanism. Estimation of parameters was performed using data collected with a commercial setup. Goodness of fit was judged by evaluating the reduced chi-square (χR²), the weighted residuals and the autocorrelation function. Performance was compared with two commercial software packages and found to be satisfactory. (author)
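
    The fitting step described above can be illustrated with a short script: a single-exponential decay is convolved with the measured IRF and its amplitude and lifetime are adjusted by least squares. SciPy's Levenberg-Marquardt solver stands in for the Marquardt routine mentioned in the abstract; the channel width and the synthetic Gaussian IRF are illustrative assumptions, not the authors' data.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch: fit amplitude and lifetime of an IRF-convolved single-exponential decay.
# dt (ns per channel) and the synthetic test data are assumptions for illustration only.

def model(params, irf, t):
    amplitude, tau = params
    decay = amplitude * np.exp(-t / tau)
    return np.convolve(irf, decay)[: len(t)]       # IRF (x) exponential, truncated to the window

def fit_lifetime(counts, irf, dt=0.05, p0=(1.0, 1.0)):
    t = np.arange(len(counts)) * dt
    residuals = lambda p: model(p, irf, t) - counts
    return least_squares(residuals, p0, method="lm").x

if __name__ == "__main__":
    t = np.arange(512) * 0.05
    irf = np.exp(-((t - 1.0) / 0.1) ** 2)          # synthetic Gaussian IRF
    synthetic = model((100.0, 2.5), irf, t)
    print(fit_lifetime(synthetic, irf))            # should recover ~(100, 2.5)
```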

  11. A flexible experimental setup for femtosecond time-resolved broad-band ellipsometry and magneto-optics

    Energy Technology Data Exchange (ETDEWEB)

    Boschini, F.; Hedayat, H.; Piovera, C.; Dallera, C. [Dipartimento di Fisica, Politecnico di Milano, p.zza Leonardo da Vinci 32, 20133 Milano (Italy); Gupta, A. [Department of Chemistry, University of Alabama, Tuscaloosa, Alabama 35487 (United States); Carpene, E., E-mail: ettore.carpene@polimi.it [CNR-IFN, Dipartimento di Fisica, Politecnico di Milano, p.zza Leonardo da Vinci 32, 20133 Milano (Italy)

    2015-01-15

    A versatile experimental setup for femtosecond time-resolved ellipsometry and magneto-optical Kerr effect measurements in the visible light range is described. The apparatus is based on the pump-probe technique and combines a broad-band probing beam with an intense near-infrared pump. According to the Fresnel scattering matrix formalism, the analysis of the reflected beam at different polarization states of the incident probe light allows one to determine the diagonal and the off-diagonal elements of the dielectric tensor in the investigated sample. Moreover, the pump-probe method permits to study the dynamics of the dielectric response after a short and intense optical excitation. The performance of the experimental apparatus is tested on CrO2 single crystals as a benchmark.

  12. A flexible experimental setup for femtosecond time-resolved broad-band ellipsometry and magneto-optics

    International Nuclear Information System (INIS)

    Boschini, F.; Hedayat, H.; Piovera, C.; Dallera, C.; Gupta, A.; Carpene, E.

    2015-01-01

    A versatile experimental setup for femtosecond time-resolved ellipsometry and magneto-optical Kerr effect measurements in the visible light range is described. The apparatus is based on the pump-probe technique and combines a broad-band probing beam with an intense near-infrared pump. According to the Fresnel scattering matrix formalism, the analysis of the reflected beam at different polarization states of the incident probe light allows one to determine the diagonal and the off-diagonal elements of the dielectric tensor in the investigated sample. Moreover, the pump-probe method permits to study the dynamics of the dielectric response after a short and intense optical excitation. The performance of the experimental apparatus is tested on CrO2 single crystals as a benchmark.

  13. Overlay improvements using a real time machine learning algorithm

    Science.gov (United States)

    Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank

    2014-04-01

    While semiconductor manufacturing is moving towards the 14 nm node using immersion lithography, the overlay requirements are tightened to below 5 nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we will present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.
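
    The general idea of learning a mapping from scanner sensor data to post-exposure overlay can be sketched in a few lines. The example below uses ridge regression on synthetic data; the feature set, model choice and data shapes are hypothetical placeholders, since the abstract does not disclose the actual machine learning algorithm or data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Purely illustrative sketch: learn a mapping from scanner sensor/context features to
# measured overlay errors so that upcoming wafers can be corrected in line. All data
# below are synthetic placeholders; the paper's algorithm and features are not public.

rng = np.random.default_rng(0)
n_wafers, n_sensors = 200, 12
X = rng.normal(size=(n_wafers, n_sensors))          # scanner sensor/context data per wafer
true_w = rng.normal(size=n_sensors)
y = X @ true_w + 0.1 * rng.normal(size=n_wafers)    # post-exposure overlay metrology (nm)

model = Ridge(alpha=1.0).fit(X[:150], y[:150])      # train on past lots
predicted_overlay = model.predict(X[150:])          # anticipate overlay for new wafers
print(float(np.abs(predicted_overlay - y[150:]).mean()))
```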

  14. Speed and amplitude of lung tumor motion precisely detected in four-dimensional setup and in real-time tumor-tracking radiotherapy

    International Nuclear Information System (INIS)

    Shirato, Hiroki; Suzuki, Keishiro; Sharp, Gregory C.; Fujita, Katsuhisa R.T.; Onimaru, Rikiya; Fujino, Masaharu; Kato, Norio; Osaka, Yasuhiro; Kinoshita, Rumiko; Taguchi, Hiroshi; Onodera, Shunsuke; Miyasaka, Kazuo

    2006-01-01

    Background: To reduce the uncertainty of registration for lung tumors, we have developed a four-dimensional (4D) setup system using a real-time tumor-tracking radiotherapy system. Methods and Materials: During treatment planning and daily setup in the treatment room, the trajectory of the internal fiducial marker was recorded for 1 to 2 min at the rate of 30 times per second by the real-time tumor-tracking radiotherapy system. To maximize gating efficiency, the patient's position on the treatment couch was adjusted using the 4D setup system with fine on-line remote control of the treatment couch. Results: The trajectory of the marker detected in the 4D setup system was well visualized and used for daily setup. Various degrees of interfractional and intrafractional changes in the absolute amplitude and speed of the internal marker were detected. Readjustments were necessary during each treatment session, prompted by baseline shifting of the tumor position. Conclusion: The 4D setup system was shown to be useful for reducing the uncertainty of tumor motion and for increasing the efficiency of gated irradiation. Considering the interfractional and intrafractional changes in speed and amplitude detected in this study, intercepting radiotherapy is a safe and cost-effective method for 4D radiotherapy using real-time tracking technology.

  15. Set-up errors analyses in IMRT treatments for nasopharyngeal carcinoma to evaluate time trends, PTV and PRV margins

    Energy Technology Data Exchange (ETDEWEB)

    Mongioj, Valeria (Dept. of Medical Physics, Fondazione IRCCS Istituto Nazionale Tumori, Milan (Italy)), e-mail: valeria.mongioj@istitutotumori.mi.it; Orlandi, Ester (Dept. of Radiotherapy, Fondazione IRCCS Istituto Nazionale Tumori, Milan (Italy)); Palazzi, Mauro (Dept. of Radiotherapy, A.O. Niguarda Ca' Granda, Milan (Italy)) (and others)

    2011-01-15

    Introduction. The aims of this study were to analyze the systematic and random interfractional set-up errors during Intensity Modulated Radiation Therapy (IMRT) in 20 consecutive nasopharyngeal carcinoma (NPC) patients by means of an Electronic Portal Imaging Device (EPID), to define appropriate Planning Target Volume (PTV) and Planning Risk Volume (PRV) margins, as well as to investigate the set-up displacement trend as a function of time during the fractionated RT course. Material and methods. Before EPID clinical implementation, an anthropomorphic phantom was shifted intentionally 5 mm in all directions and the EPIs were compared with the digitally reconstructed radiographs (DRRs) to test the system's capability to recognize displacements observed in clinical studies. Then, 578 clinical images were analyzed, with a mean of 29 images for each patient. Results. Phantom data showed that the system was able to correct shifts with an accuracy of 1 mm. As regards clinical data, the estimated population systematic errors were 1.3 mm for the left-right (L-R), 1 mm for the superior-inferior (S-I) and 1.1 mm for the anterior-posterior (A-P) directions, respectively. Population random errors were 1.3 mm, 1.5 mm and 1.3 mm for the L-R, S-I and A-P directions, respectively. The PTV margin was at least 3.4, 3 and 3.2 mm for the L-R, S-I and A-P directions, respectively. PRV margins for the brainstem and spinal cord were 2.3, 2 and 2.1 mm and 3.8, 3.5 and 3.2 mm for the L-R, A-P and S-I directions, respectively. Set-up error displacements showed no significant changes as the therapy progressed (p>0.05), although displacements >3 mm were found more frequently when severe weight loss or tumor nodal shrinkage occurred. Discussion. These results enable us to choose margins that guarantee with sufficient accuracy the coverage of the PTVs and the sparing of organs at risk. The collected data confirmed the need for a strict check of patient position reproducibility in case of anatomical changes.
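
    For readers who want to reproduce margin calculations of this kind, the sketch below applies the widely used van Herk recipe (2.5 times the systematic SD plus 0.7 times the random SD) to the population errors quoted above. The authors' own margin formula is not stated in the abstract and evidently yields the smaller values they report, so this is an illustration of the general approach rather than their exact computation.

```python
# Hedged sketch: a common PTV margin recipe (van Herk, 2.5*Sigma + 0.7*sigma) applied to
# the population systematic (Sigma) and random (sigma) set-up errors quoted above.
# The abstract's reported margins (3.4/3/3.2 mm) imply a different, less conservative recipe.

def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

for axis, systematic, random_ in [("L-R", 1.3, 1.3), ("S-I", 1.0, 1.5), ("A-P", 1.1, 1.3)]:
    print(axis, round(van_herk_margin(systematic, random_), 1), "mm")
```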

  16. The virtual slice setup.

    Science.gov (United States)

    Lytton, William W; Neymotin, Samuel A; Hines, Michael L

    2008-06-30

    In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time, so it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook showing shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, alternatives to typical instantaneous parameter change. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.
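
    The run-pause-modify-continue style of interaction described above can be sketched with NEURON's standard Python API. The example below uses a single Hodgkin-Huxley soma rather than the event-driven cells of the paper, and the stimulus and parameter tweaks are illustrative assumptions, not the published virtual slice code.

```python
from neuron import h
h.load_file("stdrun.hoc")

# Hedged sketch: advance a simulation in chunks and change parameters between chunks
# (a "wash-in"-like drive increase and an intrinsic conductance tweak) instead of fixing
# everything before the run. Not the authors' virtual slice implementation.

soma = h.Section(name="soma")
soma.L = soma.diam = 20.0
soma.insert("hh")                                # Hodgkin-Huxley membrane mechanism

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 0.0, 1e9, 0.1   # tonic drive, adjusted on the fly below

v = h.Vector().record(soma(0.5)._ref_v)
t = h.Vector().record(h._ref_t)

h.finitialize(-65.0)
for chunk in range(3):
    h.continuerun(h.t + 200.0)                   # run 200 ms of model time
    stim.amp *= 1.5                              # gradually increase the drive
    soma(0.5).hh.gkbar *= 0.9                    # tweak an intrinsic property mid-run

print(int(t.size()), "time points recorded, final V =", v[int(v.size()) - 1])
```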

  17. STABILITY OF MOTION OF MOBILE MODULE OF EXPERIMENTAL SETUP IN THE STUDY OF ACTIVE ROTARY WORKING OF MACHINES FOR SOIL TREATMENT

    Directory of Open Access Journals (Sweden)

    Vladimir F. Kupryashkin

    2016-06-01

    Full Text Available Introduction. The paper is devoted to the theoretical study of stability of movement of the movable unit of the experimental setup intended for the exploration of the active rotational working organs of the car for soil treatment. This takes into account the design features of the mobile unit and features active rotary force interaction of working bodies with the soil. From the analysis of previously conducted both theoretical and experimental studies of this type of working bodies noted the possibility of breaking the stability of the mobile stroke unit, which in turn will have a negative impact on the enforcement of a given method of the experiment program. From the analysis of previous studies shows that the assumptions under which they were made, not allow you to fully take into account the nature of the effect occurring dynamic processes of interaction of active rotary working bodies with the soil on the experimental setup truck driving stability. Materials and Methods. To address the shortcomings in the research, based on a synthesis of the main provisions and laws of mechanics and the experimental data of active rotary force interaction of working bodies with the soil, carried out theoretical studies of stability of movement of the movable unit of the experimental setup in view of its design features and conditions of the experiment. Results. A theoretical study was composed of loading trolley design scheme of the experimental setup with regard to its design features and power factors acting on its working elements, namely, the wheel bearing and studied active rotary working bodies. Processing results of the study allowed the weary twist zone of stable and unstable movement of the movable unit Expo tal installation. The presence of unstable movement zone carts at-leads to a breach of the conditions set by the plan of experimental-governmental research and a negative impact on their quality and purity. Discussion and Conclusions. All of

  18. Time complexity and linear-time approximation of the ancient two-machine flow shop

    NARCIS (Netherlands)

    Rote, G.; Woeginger, G.J.

    1998-01-01

    We consider the scheduling problems F2||Cmax and F2|no-wait|Cmax, i.e. makespan minimization in a two-machine flow shop, with and without no-wait in process. For both problems, solution algorithms based on sorting with O(n log n) running time are known, where n denotes the number of jobs [1, 2]. We
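
    The classic O(n log n) sorting-based algorithm for F2||Cmax referred to above is Johnson's rule; a minimal sketch is given below. It is shown only to illustrate the problem, and does not reproduce the paper's linear-time approximation results.

```python
# Hedged sketch: Johnson's rule for F2||Cmax (two-machine flow shop makespan minimization).
# Jobs with p1 <= p2 go first in nondecreasing p1 order; the rest go last in nonincreasing p2 order.

def johnson(jobs):
    """jobs: list of (p1, p2) processing times on machines 1 and 2. Returns an optimal order."""
    front = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, order):
    c1 = c2 = 0
    for j in order:
        c1 += jobs[j][0]                 # completion on machine 1
        c2 = max(c1, c2) + jobs[j][1]    # machine 2 waits for machine 1 and for itself
    return c2

if __name__ == "__main__":
    jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 3)]
    order = johnson(jobs)
    print(order, makespan(jobs, order))
```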

  19. A Bit-Encoding Based New Data Structure for Time and Memory Efficient Handling of Spike Times in an Electrophysiological Setup.

    Science.gov (United States)

    Ljungquist, Bengt; Petersson, Per; Johansson, Anders J; Schouenborg, Jens; Garwicz, Martin

    2018-04-01

    Recent neuroscientific and technical developments of brain machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments, we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real-time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide real-time analyses, based on previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the bit-encoding proposed has the additional advantage of allowing an extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.
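
    The bit-encoding idea can be illustrated with a short example: within each time bin the spike/no-spike state of a group of 64 neurons is packed into the bits of a single 64-bit integer, so that population activity is stored compactly and spatiotemporal patterns can be compared with fast bitwise operations. The bin width and group size below are illustrative assumptions, not the authors' data structure.

```python
import numpy as np

# Hedged sketch: pack per-bin spike states of 64 neurons into uint64 words and compare
# population patterns with bitwise AND + popcount.

def pack_spikes(spike_matrix):
    """spike_matrix: bool array (n_bins, 64) -> uint64 array of length n_bins."""
    weights = np.array([1 << i for i in range(64)], dtype=np.uint64)
    return (spike_matrix.astype(np.uint64) * weights).sum(axis=1)

def shared_spikes(word_a, word_b):
    """Number of neurons active in both packed time bins."""
    return bin(int(word_a) & int(word_b)).count("1")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    spikes = rng.random((1000, 64)) < 0.05           # 1000 one-ms bins, 64 neurons, ~5% firing
    packed = pack_spikes(spikes)
    print(packed.dtype, shared_spikes(packed[0], packed[1]))
```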

  20. Time, space, and the new media machine of the terrorphone

    Directory of Open Access Journals (Sweden)

    John Armitage

    2013-11-01

    Full Text Available In this short article, the author is concerned with how the contemporary form of the telephone, a new media machine which was of deep-rooted significance for Marshall McLuhan, promotes our obsession with forms of shared participation and social implosion. The author argues that the form of the telephone involves a complex abolition of our sense of space, interwoven with unexpected socio-cultural effects, which then create new subjectivities as well as new forms of decentralization that are intuited but not fully understood. To politicize these effects, and following the revelations of the American whistleblower Edward Snowden, the author identifies the form of the mobile telephone as a new form of media and argues that it is no longer an ‘extension of man’, as McLuhan suggested, but an extension of the US State, which is producing new forms of socio-cultural collapse. The author then explores how the remote-controlled time and space of what he calls the ‘terrorphone’ cultivates, among other things, the contemporary visualization of speech. Finally, he questions the desirability of unrelenting mobile telephone interaction as our only ‘intelligent’ choice today when such interaction is, contrary to McLuhan, not a great extension of our central nervous system, but in fact a danger to it.

  1. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  2. A Real-Time Java Virtual Machine for Avionics (Preprint)

    National Research Council Canada - National Science Library

    Armbruster, Austin; Pla, Edward; Baker, Jason; Cunei, Antonio; Flack, Chapman; Pizlo, Filip; Vitek, Jan; Proch zka, Marek; Holmes, David

    2006-01-01

    ...) in the DARPA Program Composition for Embedded System (PCES) program. Within the scope of PCES, Purdue University and the Boeing Company collaborated on the development of Ovm, an open source implementation of the RTSJ virtual machine...

  3. Machine learning in heart failure: ready for prime time.

    Science.gov (United States)

    Awan, Saqib Ejaz; Sohel, Ferdous; Sanfilippo, Frank Mario; Bennamoun, Mohammed; Dwivedi, Girish

    2018-03-01

    The aim of this review is to present an up-to-date overview of the application of machine learning methods in heart failure including diagnosis, classification, readmissions and medication adherence. Recent studies have shown that the application of machine learning techniques may have the potential to improve heart failure outcomes and management, including cost savings by improving existing diagnostic and treatment support systems. Recently developed deep learning methods are expected to yield even better performance than traditional machine learning techniques in performing complex tasks by learning the intricate patterns hidden in big medical data. The review summarizes the recent developments in the application of machine and deep learning methods in heart failure management.

  4. Cyclic flow shop scheduling problem with two-machine cells

    Directory of Open Access Journals (Sweden)

    Bożejko Wojciech

    2017-06-01

    Full Text Available In the paper, a variant of cyclic production with setups and two-machine cells is considered. One of the stages of solving the problem consists in assigning each operation to the machine on which it will be carried out. The total number of such assignments is exponential. We propose a polynomial-time algorithm for finding the optimal assignment of operations to machines.

  5. Machine learning application in the life time of materials

    OpenAIRE

    Yu, Xiaojiao

    2017-01-01

    Materials design and development typically takes several decades from the initial discovery to commercialization with the traditional trial-and-error development approach. With the accumulation of data from both experimental and computational results, data-based machine learning is becoming an emerging field in materials discovery, design and property prediction. This manuscript reviews the history of materials science as a discipline and the most common machine learning methods used in materials sc...

  6. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    Science.gov (United States)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through the use of CNC machining; one of the machining processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters in order to minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize the processing time and the environmental impact in the CNC turning process, which results in optimal values of the decision variables cutting speed and feed rate. The environmental impact is converted from the environmental burden through the use of eco-indicator 99. The model was solved by using the OptQuest optimization software from Oracle Crystal Ball.
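
    A simple way to see how the two objectives trade off against the decision variables is a weighted-sum formulation, sketched below with SciPy. The turning-time formula t = πDL/(1000·v·f) is standard, but the workpiece dimensions, the environmental-impact proxy and the bounds are made-up placeholders, not the eco-indicator 99 data or the OptQuest model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: weighted-sum multi-objective optimization of cutting speed v (m/min) and
# feed rate f (mm/rev) for a single turning pass. Impact coefficients and bounds are
# hypothetical placeholders for illustration only.

D, L = 50.0, 120.0                                   # workpiece diameter and length, mm (assumed)

def machining_time(v, f):
    return np.pi * D * L / (1000.0 * v * f)          # minutes per pass

def environmental_impact(v, f):
    return 0.002 * v + 0.5 / f                       # hypothetical impact proxy (mPt)

def weighted_objective(x, w=0.5):
    v, f = x
    return w * machining_time(v, f) + (1 - w) * environmental_impact(v, f)

result = minimize(weighted_objective, x0=[150.0, 0.2],
                  bounds=[(60.0, 300.0), (0.05, 0.4)])
print(result.x, result.fun)
```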

  7. Cone-Beam Computed Tomography–Guided Positioning of Laryngeal Cancer Patients with Large Interfraction Time Trends in Setup and Nonrigid Anatomy Variations

    International Nuclear Information System (INIS)

    Gangsaas, Anne; Astreinidou, Eleftheria; Quint, Sandra; Levendag, Peter C.; Heijmen, Ben

    2013-01-01

    Purpose: To investigate interfraction setup variations of the primary tumor, elective nodes, and vertebrae in laryngeal cancer patients and to validate protocols for cone beam computed tomography (CBCT)-guided correction. Methods and Materials: For 30 patients, CBCT-measured displacements in fractionated treatments were used to investigate population setup errors and to simulate residual setup errors for the no action level (NAL) offline protocol, the extended NAL (eNAL) protocol, and daily CBCT acquisition with online analysis and repositioning. Results: Without corrections, 12 of 26 patients treated with radical radiation therapy would have experienced a gradual change (time trend) in primary tumor setup ≥4 mm in the craniocaudal (CC) direction during the fractionated treatment (11/12 in caudal direction, maximum 11 mm). Due to these trends, correction of primary tumor displacements with NAL resulted in large residual CC errors (required margin 6.7 mm). With the weekly correction vector adjustments in eNAL, the trends could be largely compensated (CC margin 3.5 mm). Correlation between movements of the primary and nodal clinical target volumes (CTVs) in the CC direction was poor (r² = 0.15). Therefore, even with online setup corrections of the primary CTV, the required CC margin for the nodal CTV was as large as 6.8 mm. Also for the vertebrae, large time trends were observed for some patients. Because of poor CC correlation (r² = 0.19) between displacements of the primary CTV and the vertebrae, even with daily online repositioning of the vertebrae, the required CC margin around the primary CTV was 6.9 mm. Conclusions: Laryngeal cancer patients showed substantial interfraction setup variations, including large time trends, and poor CC correlation between primary tumor displacements and motion of the nodes and vertebrae (internal tumor motion). These trends and nonrigid anatomy variations have to be considered in the choice of setup verification protocol and

  8. Time machine tales the science fiction adventures and philosophical puzzles of time travel

    CERN Document Server

    Nahin, Paul J

    2017-01-01

    This book contains a broad overview of time travel in science fiction, along with a detailed examination of the philosophical implications of time travel. The emphasis of this book is now on the philosophical and on science fiction, rather than on physics, as in the author's earlier books on the subject. In that spirit there are, for example, no Tech Notes filled with algebra, integrals, and differential equations, as there are in the first and second editions of TIME MACHINES. Writing about time travel is, today, a respectable business. It hasn’t always been so. After all, time travel, prima facie, appears to violate a fundamental law of nature; every effect has a cause, with the cause occurring before the effect. Time travel to the past, however, seems to allow, indeed to demand, backwards causation, with an effect (the time traveler emerging into the past as he exits from his time machine) occurring before its cause (the time traveler pushing the start button on his machine’s control panel to start his...

  9. How to build a time machine: the real science of time travel

    CERN Document Server

    Clegg, Brian

    2013-01-01

    A pop science look at time travel technology, from Einstein to Ronald Mallett to present day experiments. Forget fiction: time travel is real.In How to Build a Time Machine, Brian Clegg provides an understanding of what time is and how it can be manipulated. He explores the fascinating world of physics and the remarkable possibilities of real time travel that emerge from quantum entanglement, superluminal speeds, neutron star cylinders and wormholes in space. With the fascinating paradoxes of time travel echoing in our minds will we realize that travel into the future might never be possible? Or will we realize there is no limit on what can be achieved, and take on this ultimate challenge? Only time will tell.

  10. Electronic setup for fluorescence emission measurements and long-time constant-temperature maintenance of Single-Walled Carbon Nano-Tubes in water solutions

    Directory of Open Access Journals (Sweden)

    De Rosa Matteo

    2017-03-01

    Full Text Available In our previous research we have observed that the fluorescence emission from water solutions of Single-Walled Carbon Nano-Tubes (SWCNT), excited by a laser with a wavelength of 830 nm, diminishes with time. We have already proved that this fading is a function of the storage time and the storage temperature. In order to study the emission of the SWCNT as a function of these two parameters, we have designed and realized a special measurement compartment with a cuvette holder in which the SWCNT solutions can be measured and stored at a fixed constant temperature for periods of time as long as several weeks. To maintain the measurement setup at a constant temperature, we have designed a special experimental setup based on two Peltier cells with electronic temperature control.

  11. Server farms with setup costs

    NARCIS (Netherlands)

    Gandhi, A.; Harchol-Balter, M.; Adan, I.J.B.F.

    2010-01-01

    In this paper we consider server farms with a setup cost. This model is common in manufacturing systems and data centers, where there is a cost to turn servers on. Setup costs always take the form of a time delay, and sometimes there is additionally a power penalty, as in the case of data centers.

  12. One-machine job-scheduling with non-constant capacity - Minimizing weighted completion times

    NARCIS (Netherlands)

    Amaddeo, H.F.; Amaddeo, H.F.; Nawijn, W.M.; van Harten, Aart

    1997-01-01

    In this paper an n-job one-machine scheduling problem is considered, in which the machine capacity is time-dependent and jobs are characterized by their work content. The objective is to minimize the sum of weighted completion times. A necessary optimality condition is presented and we discuss some

  13. SU-E-J-44: A Novel Approach to Quantify Patient Setup and Target Motion for Real-Time Image-Guided Radiotherapy (IGRT)

    Energy Technology Data Exchange (ETDEWEB)

    Li, S; Charpentier, P; Sayler, E; Micaily, B; Miyamoto, C [Temple University Hospital, Phila., PA (United States); Geng, J [Xigen LLC, Gaithersburg, MD (United States)

    2015-06-15

    Purpose: Isocenter shifts and rotations to correct patient setup errors and organ motion cannot remedy some shape changes of large targets. We are investigating new methods for the quantification of target deformation for real-time IGRT of breast and chest wall cancer. Methods: Ninety-five patients with breast or chest wall cancer were accrued in an IRB-approved clinical trial of IGRT using 3D surface images acquired at daily setup and beam-on time via an in-room camera. Shifts and rotations relative to the planned reference surface were determined using iterative-closest-point alignment. Local surface displacements and target deformation are measured via a ray-surface intersection and principal component analysis (PCA) of the external surface, respectively. Isocenter shift, upper-abdominal displacement, and vectors of the surface projected onto the two principal components, PC1 and PC2, were evaluated for sensitivity and accuracy in the detection of target deformation. Setup errors for some deformed targets were estimated by separately registering the target volume, inner surface, or external surface in weekly CBCT or their outlines on weekly EPI. Results: Setup differences according to the inner surface, external surface, or target volume could be 1.5 cm. Video surface-guided setup agreed with EPI results to within <0.5 cm, while CBCT results were sometimes (∼20%) different from those of EPI (>0.5 cm) due to target deformation for some large breasts and some chest walls undergoing deep-breath-hold irradiation. The square root of PC1 and PC2 is very sensitive to external surface deformation and irregular breathing. Conclusion: PCA of external surfaces is a quick and simple way to detect target deformation in IGRT of breast and chest wall cancer. Setup corrections based on the target volume, inner surface, and external surface could be significantly different. Thus, checking of target shape changes is essential for accurate image-guided patient setup and motion tracking of large deformable
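
    The PCA-based deformation check described above can be sketched in a few lines: daily surface captures are flattened into vectors, PCA is fitted to reference fractions, and the first two scores of a new surface serve as a deformation index. The synthetic surfaces and the flag threshold below are placeholders, not the clinical data or criteria of the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hedged sketch: PCA of flattened external-surface point clouds; the magnitude of the
# first two principal component scores of a new fraction is used as a deformation index.

rng = np.random.default_rng(0)
n_fractions, n_points = 25, 3 * 500                 # 500 surface points (x, y, z) per fraction
reference = rng.normal(scale=0.5, size=(n_fractions, n_points))   # stable setups (mm)

pca = PCA(n_components=2).fit(reference)

new_surface = rng.normal(scale=0.5, size=n_points) + 3.0 * pca.components_[0]  # deformed case
pc1, pc2 = pca.transform(new_surface[None, :])[0]
deformation_index = np.sqrt(pc1 ** 2 + pc2 ** 2)    # "square root of PC1 and PC2" style metric
print(round(float(deformation_index), 2), "flag" if deformation_index > 2.0 else "ok")
```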

  14. Phase Space Prediction of Chaotic Time Series with Nu-Support Vector Machine Regression

    International Nuclear Information System (INIS)

    Ye Meiying; Wang Xiaodong

    2005-01-01

    A new class of support vector machine, the nu-support vector machine, is discussed, which can handle both classification and regression. We focus on nu-support vector machine regression and use it for phase space prediction of chaotic time series. The effectiveness of the method is demonstrated by applying it to the Henon map. This study also compares the nu-support vector machine with back-propagation (BP) networks in order to better evaluate the performance of the proposed methods. The experimental results show that nu-support vector machine regression obtains a lower root mean squared error than the BP networks and provides accurate chaotic time series prediction. These results can be attributed to the fact that the nu-support vector machine implements the structural risk minimization principle, and this leads to better generalization than the BP networks.
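
    A minimal sketch of the approach: reconstruct the phase space of the Henon map by delay embedding and train a nu-support vector regressor to predict the next value. The embedding dimension, nu, C and the RBF kernel settings below are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.svm import NuSVR

# Hedged sketch: one-step phase-space prediction of the Henon map with nu-SVR.

def henon(n, a=1.4, b=0.3):
    x, y, xs = 0.1, 0.1, []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return np.array(xs)

series = henon(1200)
dim = 2                                              # embedding dimension (assumed)
X = np.column_stack([series[i:len(series) - dim + i] for i in range(dim)])
y = series[dim:]
X_train, y_train, X_test, y_test = X[:1000], y[:1000], X[1000:], y[1000:]

model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma="scale").fit(X_train, y_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print("one-step RMSE:", round(float(rmse), 4))
```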

  15. Pathologies of van Stockum dust/Tipler's time machine

    Science.gov (United States)

    Lindsay, David S.

    2016-09-01

    We study the internal solution, and external vacuum solution for radial cutoff, of "van Stockum dust", an infinitely long rotating pressureless dust column; its density increases with radius. This interesting but poorly explored spacetime turns out to have a number of exotic properties, especially in the external vacuum region. These solutions have been known for decades, but it seems that they have never been investigated in detail. In this paper we analyze them and describe their peculiar properties. There are three regimes of radial cutoff that are of interest: (1) If the dust column is thick enough that closed timelike loops (CTLs or "time machines") exist inside the column, then the radius of the entire "universe" is finite, and in fact does not extend much beyond the edge of the matter, even though the metric's radial parameter is unbounded. This interesting finite proper radius seems to have been missed by earlier investigators. Other exotic properties of the external vacuum in this regime: CTLs exist in cylindrical shells, alternating with shells having no circular CTLs; there are infinitely many such shells, getting closer and closer together as one gets farther from the rotation axis. Also, a separate set of infinitely many cylindrical shells exists, having what might be termed "extreme frame-dragging", within which motion is possible only in one direction; they alternate with "normal" shells allowing motion in either direction. Gravitational attraction and tides increase with distance from the matter column, and diverge at the "edge of the universe". In addition, though the radius of the universe is finite, its circumference is infinite; and its boundary is a circle, not a cylinder (the z-axis has shrunk to nothing at the edge). (2) For smaller radial cutoff, but still large enough to produce CTLs, the radius of the universe is infinite; but there are still infinitely many cylindrical shells of CTLs alternating with non-CTL shells. However, the innermost

  16. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, a low-utilization characteristic, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch-and-bound algorithm as the solver method. We use fixed delivery times as the main constraint and different processing times to process each job. The result of this proposed model shows that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.

  17. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    Science.gov (United States)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumptions on the airport surface. Currently NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers to make gate pushback decisions and improve the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.

  18. Decentralized real-time simulation of forest machines

    Science.gov (United States)

    Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael

    2000-10-01

    To develop realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as the simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and the realistic simulation of the complex motion of the vehicle even in rough terrain in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices like the joysticks or the board computer system to control the crane, the aggregate, etc. In addition, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information of the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.

  19. Trip Travel Time Forecasting Based on Selective Forgetting Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Zhiming Gui

    2014-01-01

    Full Text Available Travel time estimation on road networks is a valuable traffic metric. In this paper, we propose a machine learning based method for trip travel time estimation in road networks. The method uses historical trip information extracted from taxi trace data as the training data. An optimized online sequential extreme learning machine, the selective forgetting extreme learning machine, is adopted to make the prediction. Its selective forgetting learning ability enables the prediction algorithm to adapt well to changes in trip conditions. Experimental results using real-life taxi trace data show that the forecasting model provides an effective and practical way for travel time forecasting.
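
    As a rough illustration of the underlying learner, the sketch below implements a plain batch extreme learning machine in NumPy: random hidden weights stay fixed and only the output weights are solved by least squares. The online-sequential updating and the selective-forgetting mechanism of the paper are omitted, and the trip features and data are synthetic placeholders.

    ```python
    # Minimal batch ELM sketch (synthetic data; omits the paper's online-sequential
    # updating and selective-forgetting mechanism).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 4))                 # e.g. distance, hour, speed, stops (assumed)
    y = X @ np.array([20.0, 5.0, -8.0, 3.0]) + rng.normal(scale=1.0, size=500)

    n_hidden = 50
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights, never trained
    b = rng.normal(size=n_hidden)                  # random biases

    def hidden(X):
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden layer

    H = hidden(X)
    beta = np.linalg.pinv(H) @ y                   # output weights by least squares

    X_new = rng.uniform(size=(5, 4))
    print(hidden(X_new) @ beta)                    # predicted trip travel times
    ```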

  20. High-repetition-rate setup for pump-probe time-resolved XUV-IR experiments employing ion and electron momentum imaging

    Science.gov (United States)

    Pathak, Shashank; Robatjazi, Seyyed Javad; Wright Lee, Pearson; Raju Pandiri, Kanaka; Rolles, Daniel; Rudenko, Artem

    2017-04-01

    J.R. Macdonald Laboratory, Department of Physics, Kansas State University, Manhattan KS, USA We report on the development of a versatile experimental setup for XUV-IR pump-probe experiments using a 10 kHz high-harmonic generation (HHG) source and two different charged-particle momentum imaging spectrometers. The HHG source, based on a commercial KM Labs eXtreme Ultraviolet Ultrafast Source, is capable of delivering XUV radiation of less than 30 fs pulse duration in the photon energy range of 17 eV to 100 eV. It can be coupled either to a conventional velocity map imaging (VMI) setup with an atomic, molecular, or nanoparticle target; or to a novel double-sided VMI spectrometer equipped with two delay-line detectors for coincidence studies. An overview of the setup and results of first pump-probe experiments, including studies of two-color double ionization of Xe and time-resolved dynamics of photoionized CO2 molecules, will be presented. This project is supported in part by National Science Foundation (NSF-EPSCOR) Award No. IIA-1430493 and in part by the Chemical Sciences, Geosciences, and Biosciences Division, Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy.

  1. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs

    Directory of Open Access Journals (Sweden)

    Adel Taha Abbas

    2018-05-01

    Full Text Available Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN is presented in this paper for surface roughness (Ra prediction of one component in computer numerical control (CNC turning over minimal machining time (Tm and at prime machining costs (C. An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP, to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm3, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.

  2. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs.

    Science.gov (United States)

    Abbas, Adel Taha; Pimenov, Danil Yurievich; Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa

    2018-05-16

    Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm³, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.
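
    The following sketch reproduces the spirit of the multi-output perceptron described above with scikit-learn: a single hidden layer of 12 neurons maps the cutting parameters (vc, ap, fr) to (Ra, Tm, C). The training data are synthetic response surfaces, not the paper's measurements, so the predicted values are purely illustrative.

    ```python
    # Sketch of a multi-output MLP for (Ra, Tm, C) from (vc, ap, fr);
    # the training data are synthetic placeholders, not the paper's measurements.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    vc = rng.uniform(50, 250, 200)        # cutting speed, m/min
    ap = rng.uniform(0.25, 1.0, 200)      # depth of cut, mm
    fr = rng.uniform(0.05, 0.2, 200)      # feed per revolution, mm/rev
    X = np.column_stack([vc, ap, fr])

    # placeholder response surfaces standing in for measured Ra, Tm, C
    Ra = 0.05 + 30 * fr**2 - 0.0001 * vc
    Tm = 60.0 / (vc * ap * fr * 1000)     # rough inverse of the material removal rate
    C = 5.0 + 20 * Tm
    Y = np.column_stack([Ra, Tm, C])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000,
                                       random_state=1))
    model.fit(X, Y)
    print(model.predict([[250, 1.0, 0.08]]))   # roughly the paper's optimal regime
    ```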

  3. Experimental setup and commissioning baseline study in search of time-variations in beta-decay half-lives

    Energy Technology Data Exchange (ETDEWEB)

    Goddard, Braden, E-mail: goddard.braden@gmail.com [Department of Nuclear Engineering, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates); Hitt, George W. [Department of Nuclear Engineering, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates); Department of Applied Mathematics and Science, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates); Solodov, Alexander A. [Department of Nuclear Engineering, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates); Bridi, Dorian; Isakovic, A.F. [Department of Applied Mathematics and Science, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates); El-Khazali, Reyad [Department of Electrical and Computer Engineering, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates); Abulail, Ayman, E-mail: aabulail@pi.ac.ae [Department of Applied Mathematics and Science, Khalifa University of Science, Technology & Research, P.O. Box 127788, Abu Dhabi (United Arab Emirates)

    2016-03-11

    Recently there have been a number of investigations into whether the decay constant of a radioactive isotope can be influenced by external factors, such as the Earth–Sun distance or Solar flare activity. Positive claims suggest that annual oscillations of ~0.1% and accelerations of ~0.4% in the relative activity of beta-emitters coincide with the Earth–Sun distance and solar flare activity, respectively. Results from replication experiments have so far been conflicting. The main criticism of the measurements used to trace and quantify these effects is that the data are of poor quality or limited in scope. Data have often been collected as part of short duration weekly calibration measurements, measured with a single type of low precision detector, only using one isotope, and having no environmental conditions information (temperature, pressure, humidity) accompanying the radiation measurements. This paper describes the setup of a series of counting experiments commissioned for addressing these criticisms. Six dedicated detector systems (four different types) measuring six different isotopes (¹⁴C, ⁵⁴Mn, ⁶⁰Co, ⁹⁰Sr, ²⁰⁴Tl, and ²²⁶Ra) have been continuously collecting source activity synchronously with environmental data for a period of one month (April 2014). The results of this baseline commissioning study show that there are correlations between activity and environmental conditions for some detector types, which are then quantified. The results also show that the one sigma counting uncertainties in all the detectors are less than 0.024% for a given 24 h period. After accounting for propagated uncertainties from corrections against correlations with environmental data, the ability to resolve 0.1% activity changes varies, from 8 min to 1.6 days, depending on the specific detector. All six experiments, therefore, will have sufficient precision over the upcoming year to scrutinize claims of both annual activity oscillations and

  4. CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms

    Science.gov (United States)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-04-01

    CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs and uses Support Vector Machine (SVM) algorithms to make its predictions, which can be made within minutes of providing the necessary input parameters of a CME.
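
    A minimal support vector regression sketch in the spirit of CAT-PUMA is shown below. The feature set (CME speed, angular width, ambient solar-wind speed) and the synthetic transit times are illustrative assumptions, not the published feature selection or trained model.

    ```python
    # Illustrative SVR for CME transit time; features and data are assumed, not CAT-PUMA's.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    n = 180
    speed = rng.uniform(300, 2500, n)          # CME speed, km/s
    width = rng.uniform(60, 360, n)            # angular width, deg
    sw_speed = rng.uniform(300, 700, n)        # ambient solar-wind speed, km/s
    X = np.column_stack([speed, width, sw_speed])

    # placeholder transit times (hours): faster CMEs arrive sooner
    y = 1.5e8 / (0.7 * speed + 0.3 * sw_speed) / 3600 + rng.normal(0, 3, n)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X, y)
    print(model.predict([[1000, 200, 450]]))   # predicted arrival time, hours
    ```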

  5. Optimal replacement time estimation for machines and equipment based on cost function

    OpenAIRE

    J. Šebo; J. Buša; P. Demeč; J. Svetlík

    2013-01-01

    The article deals with a multidisciplinary issue of estimating the optimal replacement time for the machines. Considered categories of machines, for which the optimization method is usable, are of the metallurgical and engineering production. Different models of cost function are considered (both with one and two variables). Parameters of the models were calculated through the least squares method. Models testing show that all are good enough, so for estimation of optimal replacement time is ...

  6. Correlation between use time of machine and decline curve for emerging enterprise information systems

    Science.gov (United States)

    Chang, Yao-Chung; Lai, Chin-Feng; Chuang, Chi-Cheng; Hou, Cheng-Yu

    2018-04-01

    With the progress of science and technology, more and more machines are adopted to make human life better and more convenient. When a machine has been used for a long period of time and its components are getting old, its power consumption increases and the machine can easily overheat. This also causes an invisible waste of resources. If Internet of Everything (IoE) technologies can be applied in enterprise information systems for monitoring machine use time, energy can be used effectively and a safer living environment can be created. To solve the above problem, a correlation prediction model is established that collects power consumption data and converts it into power eigenvalues. This study takes the power eigenvalue as the independent variable and use time as the dependent variable in order to establish the decline curve. Ultimately, scoring and estimation modules are employed to seek the best power eigenvalue as the independent variable. To predict use time, the correlation between the use time and the decline curve is discussed to improve the overall behavioural analysis and facilitate recognition of the use time of machines.

  7. Setup Analysis: Combining SMED with Other Tools

    Directory of Open Access Journals (Sweden)

    Stadnicka Dorota

    2015-02-01

    Full Text Available The purpose of this paper is to propose a methodology for setup analysis, which can be implemented mainly in small and medium enterprises that are not yet convinced to undertake setup improvement. The methodology was developed after research which identified the problem: companies still have difficulties with long setup times, and many of them do nothing to decrease these times. A long setup alone is not a sufficient reason for companies to undertake any actions towards setup time reduction. To encourage companies to implement SMED, it is essential to analyse changeovers in order to discover problems. The proposed methodology can encourage management to take a decision about SMED implementation, and this was verified in a production company. The setup analysis methodology is made up of seven steps. Four of them concern a setup analysis in a chosen area of a company, such as a work stand which is a bottleneck with many setups; the goal is to convince management to begin actions concerning setup improvement. The last three steps are related to a specific setup, and there the goal is to reduce the setup time and the risk of problems which can appear during the setup. In this paper, tools such as SMED, Pareto analysis, statistical analysis, FMEA and others were used.
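
    One of the analysis steps named above, the Pareto ranking of changeover activities, can be illustrated with a few lines of code. The activity names and durations here are invented; the point is only to show how cumulative percentages expose the few setup steps that dominate the changeover time.

    ```python
    # Illustrative Pareto analysis of changeover activities (durations are made up),
    # ranking the activities so the longest setup steps can be targeted first.
    activities = {
        "fetch tooling from storage": 18,   # minutes
        "adjust guides": 12,
        "clean die": 9,
        "search for documentation": 7,
        "trial runs": 5,
        "fasten bolts": 4,
    }

    total = sum(activities.values())
    cumulative = 0
    for name, minutes in sorted(activities.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += minutes
        print(f"{name:30s} {minutes:3d} min  cum. {100 * cumulative / total:5.1f}%")
    ```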

  8. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
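
    The destruction-construction loop at the heart of an iterated greedy algorithm can be sketched on a simplified single-machine analogue with sequence-dependent set-ups, minimizing total flow time. This is not the authors' job-shop algorithm with job families; processing and set-up times are random placeholders.

    ```python
    # Iterated greedy sketch for a single machine with sequence-dependent set-ups,
    # minimizing total flow time (a simplified analogue of the paper's problem).
    import random

    random.seed(3)
    n = 10
    p = [random.randint(2, 9) for _ in range(n)]                      # processing times
    s = [[random.randint(1, 4) for _ in range(n)] for _ in range(n)]  # setup[i][j]

    def total_flow_time(seq):
        t, total, prev = 0, 0, None
        for j in seq:
            t += (s[prev][j] if prev is not None else 0) + p[j]
            total += t
            prev = j
        return total

    def construct(partial, removed):
        # greedy best-insertion of each removed job
        for j in removed:
            partial = min((partial[:k] + [j] + partial[k:]
                           for k in range(len(partial) + 1)),
                          key=total_flow_time)
        return partial

    best = sorted(range(n), key=lambda j: p[j])          # SPT starting sequence
    best_cost = total_flow_time(best)
    for _ in range(200):
        removed = random.sample(best, 3)                 # destruction phase
        partial = [j for j in best if j not in removed]
        candidate = construct(partial, removed)          # construction phase
        if total_flow_time(candidate) < best_cost:       # greedy acceptance
            best, best_cost = candidate, total_flow_time(candidate)
    print(best, best_cost)
    ```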

  9. A Pareto-Based Adaptive Variable Neighborhood Search for Biobjective Hybrid Flow Shop Scheduling Problem with Sequence-Dependent Setup Time

    Directory of Open Access Journals (Sweden)

    Huixin Tian

    2016-01-01

    Full Text Available Different from most research focused on the single-objective hybrid flow shop scheduling (HFS) problem, this paper investigates a biobjective HFS problem with sequence-dependent setup times. The two objectives are the minimization of total weighted tardiness and the total setup time. To efficiently solve this problem, a Pareto-based adaptive biobjective variable neighborhood search (PABOVNS) is developed. In the proposed PABOVNS, a solution is denoted as a sequence of all jobs and a decoding procedure is presented to obtain the corresponding complete schedule. In addition, the proposed PABOVNS has three major features that can guarantee a good balance of exploration and exploitation. First, an adaptive selection strategy of neighborhoods is proposed to automatically select the most promising neighborhood instead of the sequential selection strategy of canonical VNS. Second, a two-phase multiobjective local search based on neighborhood search and path relinking is designed for each selected neighborhood. Third, an external archive with diversity maintenance is adopted to store the nondominated solutions and at the same time provide initial solutions for the local search. Computational results based on randomly generated instances show that the PABOVNS is efficient and even superior to some other powerful multiobjective algorithms in the literature.

  10. Evaluation of containment failure and cleanup time for Pu shots on the Z machine.

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L.

    2010-02-01

    Between November 30 and December 11, 2009 an evaluation was performed of the probability of containment failure and the time for cleanup of contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available for the likelihood of failure of containment components or for the time to clean up. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state-of-knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors for both the cleanup time and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.

  11. Two-Agent Single-Machine Scheduling of Jobs with Time-Dependent Processing Times and Ready Times

    Directory of Open Access Journals (Sweden)

    Jan-Yee Kung

    2013-01-01

    Full Text Available Scheduling involving jobs with time-dependent processing times has recently attracted much research attention. However, multiagent scheduling with simultaneous consideration of jobs with time-dependent processing times and ready times is relatively unexplored. Inspired by this observation, we study a two-agent single-machine scheduling problem in which the jobs have both time-dependent processing times and ready times. We consider the model in which the actual processing time of a job of the first agent is a decreasing function of its scheduled position while the actual processing time of a job of the second agent is an increasing function of its scheduled position. In addition, each job has a different ready time. The objective is to minimize the total completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent. We propose a branch-and-bound algorithm and several genetic algorithms to obtain optimal and near-optimal solutions for the problem, respectively. We also conduct extensive computational experiments to test the proposed algorithms and examine the impacts of different problem parameters on their performance.

  12. Time series forecasting based on deep extreme learning machine

    NARCIS (Netherlands)

    Guo, Xuqi; Pang, Y.; Yan, Gaowei; Qiao, Tiezhu; Yang, Guang-Hong; Yang, Dan

    2017-01-01

    Multi-layer Artificial Neural Networks (ANN) has caught widespread attention as a new method for time series forecasting due to the ability of approximating any nonlinear function. In this paper, a new local time series prediction model is established with the nearest neighbor domain theory, in

  13. Single machine scheduling with time-dependent linear deterioration and rate-modifying maintenance

    OpenAIRE

    Rustogi, Kabir; Strusevich, Vitaly A.

    2015-01-01

    We study single machine scheduling problems with linear time-dependent deterioration effects and maintenance activities. Maintenance periods (MPs) are included into the schedule, so that the machine, that gets worse during the processing, can be restored to a better state. We deal with a job-independent version of the deterioration effects, that is, all jobs share a common deterioration rate. However, we introduce a novel extension to such models and allow the deterioration rates to change af...

  14. Preliminary Development of Real Time Usage-Phase Monitoring System for CNC Machine Tools with a Case Study on CNC Machine VMC 250

    Science.gov (United States)

    Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah

    2018-03-01

    The production characteristics of the job-shop industry, in which products have a wide variety but small volumes, mean that every machine tool is shared to conduct production processes with dynamic loads. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be calculated based on the actual usage of machine tool components. This paper describes a study on the development of a monitoring system for obtaining information about the usage of each CNC machine tool component in real time, approached by component grouping based on its operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage phase activity data taken from certain electronic components within the CNC machine. The components are the adaptor, servo driver and spindle driver, as well as some additional components such as a microcontroller and relays. The obtained data are utilized for detecting machine utilization phases such as the power-on state, machine-ready state or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information on machine tool usage as well as its duration, and displays the information in the user interface application.

  15. High dose three-dimensional conformal boost (3DCB) using an orthogonal diagnostic X-ray set-up for patients with gynecological malignancy: a new application of real-time tumor-tracking system

    International Nuclear Information System (INIS)

    Yamamoto, Ritsu; Yonesaka, Akio; Nishioka, Seiko; Watari, Hidemichi; Hashimoto, Takayuki; Uchida, Daichi; Taguchi, Hiroshi; Nishioka, Takeshi; Miyasaka, Kazuo; Sakuragi, Noriaki; Shirato, Hiroki

    2004-01-01

    The feasibility and accuracy of high dose three-dimensional conformal boost (3DCB) using three internal fiducial markers and a two-orthogonal X-ray set-up of the real-time tumor-tracking system on patients with gynecological malignancy were investigated in 10 patients. The standard deviation of the distribution of systematic deviations (Σ) was reduced from 3.8, 4.6, and 4.9 mm in the manual set-up to 2.3, 2.3 and 2.7 mm in the set-up using the internal markers. The average standard deviation of the distribution of random deviations (σ) was reduced from 3.7, 5.0, and 4.5 mm in the manual set-up to 3.3, 3.0, and 4.2 mm in the marker set-up. The appropriate PTV margin was estimated to be 10.2, 12.8, and 12.9 mm in the manual set-up and 6.9, 6.7, and 8.3 mm in the gold marker set-up, respectively, using the formula 2Σ+0.7σ. Set-up of patients with three markers and two orthogonal fluoroscopic images is useful to reduce the PTV margin and perform 3DCB.
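
    The quoted margins can be checked directly from the reported Σ and σ values with the margin recipe 2Σ + 0.7σ; small differences from the published numbers are rounding effects in the reported deviations.

    ```python
    # Arithmetic check of the PTV margins above using margin = 2*Sigma + 0.7*sigma.
    def ptv_margin(Sigma, sigma):
        return 2 * Sigma + 0.7 * sigma

    manual = [(3.8, 3.7), (4.6, 5.0), (4.9, 4.5)]   # (Sigma, sigma) per axis, mm
    marker = [(2.3, 3.3), (2.3, 3.0), (2.7, 4.2)]

    print([round(ptv_margin(S, s), 1) for S, s in manual])   # abstract quotes 10.2, 12.8, 12.9 mm
    print([round(ptv_margin(S, s), 1) for S, s in marker])   # abstract quotes 6.9, 6.7, 8.3 mm
    ```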

  16. The Microphone Feedback Analogy for Chatter in Machining

    Directory of Open Access Journals (Sweden)

    Tony Schmitz

    2015-01-01

    Full Text Available This paper provides experimental evidence for the analogy between the time-delay feedback in public address systems and chatter in machining. Machining stability theory derived using the Nyquist criterion is applied to predict the squeal frequency in a microphone/speaker setup. Comparisons between predictions and measurements are presented.

  17. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  18. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy which is based on the cloud model time series workload prediction algorithm. By setting the upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine, and carries out the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting an improved utilization of resources in the entire data center.
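
    The workload-aware migration criterion can be sketched as a simple rule: trigger a migration only when the forecast workload, not a single instantaneous sample, exceeds the upper bound. In the sketch below a moving average stands in for the cloud-model time-series forecaster, and the bounds and load trace are made up.

    ```python
    # Sketch of a workload-aware migration trigger: migrate only if the *forecast*
    # workload exceeds the upper bound, so momentary peaks are ignored.
    from collections import deque

    UPPER, LOWER = 0.80, 0.20          # assumed host workload bounds
    WINDOW = 5

    def forecast(history):
        """Stand-in predictor: mean of the recent workload window."""
        return sum(history) / len(history)

    def should_migrate(history):
        return len(history) == WINDOW and forecast(history) > UPPER

    history = deque(maxlen=WINDOW)
    loads = [0.5, 0.6, 0.95, 0.55, 0.6, 0.85, 0.9, 0.92, 0.94, 0.96]
    for t, load in enumerate(loads):
        history.append(load)
        if should_migrate(history):
            print(f"t={t}: predicted load {forecast(history):.2f} > {UPPER}, select a VM to migrate")
    ```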

  19. Quality assurance of a system for improved target localization and patient set-up that combines real-time infrared tracking and stereoscopic X-ray imaging.

    Science.gov (United States)

    Verellen, Dirk; Soete, Guy; Linthout, Nadine; Van Acker, Swana; De Roover, Patsy; Vinh-Hung, Vincent; Van de Steene, Jan; Storme, Guy

    2003-04-01

    The aim of this study is to investigate the positional accuracy of a prototype X-ray imaging tool in combination with a real-time infrared tracking device allowing automated patient set-up in three dimensions. A prototype X-ray imaging tool has been integrated with a commercially released real-time infrared tracking device. The system, consisting of two X-ray tubes mounted to the ceiling and a centrally located amorphous silicon detector has been developed for automated patient positioning from outside the treatment room prior to treatment. Two major functions are supported: (a) automated fusion of the actual treatment images with digitally reconstructed radiographs (DRRs) representing the desired position; (b) matching of implanted radio opaque markers. Measurements of known translational (up to 30.0mm) and rotational (up to 4.0 degrees ) set-up errors in three dimensions as well as hidden target tests have been performed on anthropomorphic phantoms. The system's accuracy can be represented with the mean three-dimensional displacement vector, which yielded 0.6mm (with an overall SD of 0.9mm) for the fusion of DRRs and X-ray images. Average deviations between known translational errors and calculations varied from -0.3 to 0.6mm with a standard deviation in the range of 0.6-1.2mm. The marker matching algorithm yielded a three-dimensional uncertainty of 0.3mm (overall SD: 0.4mm), with averages ranging from 0.0 to 0.3mm and a standard deviation in the range between 0.3 and 0.4mm. The stereoscopic X-ray imaging device integrated with the real-time infrared tracking device represents a positioning tool allowing for the geometrical accuracy that is required for conformal radiation therapy of abdominal and pelvic lesions, within an acceptable time-frame.

  20. Molecular beam studies with a time-of-flight machine

    International Nuclear Information System (INIS)

    Beijerinck, H.C.W.

    1975-01-01

    The study concerns the development of the time-of-flight method for the velocity analysis of molecular beams and its application to the measurement of the velocity dependence of the total cross-section of the noble gases. It reviews the elastic scattering theory, both in the framework of classical mechanics and in the quantum mechanical description. Attention is paid to the semiclassical correspondence of classical particle trajectories with the partial waves of the quantum mechanical solution. The total cross-section and the small angle differential cross-section are discussed with special emphasis on their relation. The results of this chapter are used later to derive the correction on the measured total cross-section due to the finite angular resolution of the apparatus. Reviewed also is the available information on the intermolecular potential of the Ar-Ar system. Then a discussion of the measurement of total cross-sections with the molecular beam method and the time-of-flight method is compared to other methods used. It is shown that the single burst time-of-flight method can be developed into a reliable and well-calibrated method for the analysis of the velocity distribution of molecular beams. A comparison of the single burst time-of-flight method with the cross-correlation time-of-flight method shows that the two methods are complementary and that the specific experimental circumstances determine which method is to be preferred. Molecular beam sources are discussed. The peaking factor formalism is introduced and helps to compare the performance of different types of sources. The effusive and the supersonic source are treated and recent experimental results are given. The multichannel source is treated in more detail. For the opaque mode, an experimental investigation of the velocity distribution and the angular distribution of the flow pattern is presented. Comparison of these results with Monte Carlo calculations for free molecular flow in a cylindrical

  1. Life time evaluation of spectrum loaded machine parts

    Energy Technology Data Exchange (ETDEWEB)

    Rabb, R. [Waertsilae NSD Corporation, Vaasa (Finland)

    1998-12-31

    In a medium speed diesel engine there are some important components, such as the cylinder head, the piston and the cylinder liner, which are subjected to a specific load spectrum consisting of mainly two distinct parts. One is the low cycle part which is due to the temperature field that builds up after the engine has been started. This low cycle part causes a large stress amplitude but consists of only a couple of thousand cycles during the engine lifetime. The other part of the load spectrum is the high cycle part due to the firing pressure. The high cycle part has a smaller amplitude but consists of billions of cycles during the engine lifetime. The cylinder head and the cylinder liner are made of cast iron. In this investigation the true extension into the high cycle domain of the S-N curve for grey cast iron grade 300/ISO 185 was established through fatigue tests with a load spectrum resembling the existing one. This testing resulted in much new and improved knowledge about the fatigue properties of grey cast iron, and it was even possible to generalize the outcome of the spectrum fatigue tests into a simple design curve. (orig.) 11 refs.

  2. Toward transient finite element simulation of thermal deformation of machine tools in real-time

    Science.gov (United States)

    Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg

    2018-01-01

    Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this allows to correct for displacements of the Tool Centre Point and enables high precision manufacturing. However, the computational cost of FE models and restriction to generic algorithms in commercial tools like ANSYS prevents their operational use since simulations have to run faster than real-time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real-time for a machine consisting of a stock sliding up and down on rails attached to a stand.

  3. Time cycle calculation procedure for the special crew during the mining mobile machine complex operation

    International Nuclear Information System (INIS)

    Shmurygin, V; Lukyanov, V; Maslovsky, A

    2015-01-01

    The relevance of the research is specified by the necessity to optimize the operation of drift mobile tunneling equipment. The target of the research is the justification of the tunneling time cycle for the special crew during the mining mobile machine complex operation. The methods of the research included the consideration of operation organization schemes in the drifting face and the effective use of the mobile equipment during mine exploratory working operations. Time cycle calculation procedures for the major processes have been considered. This has been done for the special crew during the mobile machine complex operations for several working faces and various organization schemes

  4. Improved mortar setup technique

    CSIR Research Space (South Africa)

    De Villiers, D

    2008-10-01

    Full Text Available bearing sensor. This concept focuses directly on one of the most cumbersome aspects of a mortar set-up, namely the use of aiming posts. The prismatic mirror and bearing dials are described, as well as the required setup procedures. The measurement...

  5. Optimal replacement time estimation for machines and equipment based on cost function

    Directory of Open Access Journals (Sweden)

    J. Šebo

    2013-01-01

    Full Text Available The article deals with a multidisciplinary issue of estimating the optimal replacement time for machines. The considered categories of machines, for which the optimization method is usable, are of metallurgical and engineering production. Different models of the cost function are considered (both with one and with two variables). Parameters of the models were calculated through the least squares method. Model testing shows that all are good enough, so for estimation of the optimal replacement time it is sufficient to use simpler models. In addition to the testing of models, we developed a method (tested on a selected simple model) which enables us, in actual real time (with a limited data set), to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
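
    A sketch of the general idea, under assumed data: fit a maintenance-cost trend by least squares and locate the replacement time that minimizes the average total cost per unit time. The single-variable quadratic cost model, the purchase price, and the cost series are illustrative, not the article's models or data.

    ```python
    # Least-squares fit of a maintenance-cost trend, then the replacement time that
    # minimizes average total cost per year. All numbers are illustrative.
    import numpy as np

    purchase_price = 120_000.0
    years = np.arange(1, 11)
    maint = np.array([3.1, 4.2, 6.0, 7.9, 10.5, 13.8, 17.2, 21.5, 26.3, 31.8]) * 1000

    # least-squares fit of a quadratic maintenance-cost model m(t) = a*t^2 + b*t + c
    a, b, c = np.polyfit(years, maint, 2)

    def avg_cost_per_year(t):
        cumulative_maint = a * t**3 / 3 + b * t**2 / 2 + c * t   # integral of m(t)
        return (purchase_price + cumulative_maint) / t

    t_grid = np.linspace(1, 15, 1401)
    t_star = t_grid[np.argmin(avg_cost_per_year(t_grid))]
    print(f"estimated optimal replacement time: {t_star:.1f} years")
    ```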

  6. Trip time prediction in mass transit companies. A machine learning approach

    OpenAIRE

    João M. Moreira; Alípio Jorge; Jorge Freire de Sousa; Carlos Soares

    2005-01-01

    In this paper we discuss how trip time prediction can be useful for operational optimization in mass transit companies and which machine learning techniques can be used to improve results. Firstly, we analyze which departments need trip time prediction and when. Secondly, we review related work and thirdly we present the analysis of trip time over a particular path. We proceed by presenting experimental results conducted on real data with the forecasting techniques we found most adequate, and concl...

  7. Wormholes and time-machines in nonminimally coupled matter-curvature theories of gravity

    DEFF Research Database (Denmark)

    Bertolami, O.; Ferreira, R. Z.

    2013-01-01

    In this work we show the existence of traversable wormhole and time-machine solutions in a modified theory of gravity where matter and curvature are nonminimally coupled. Those solutions present a nontrivial redshift function and exist even in the presence of ordinary matter which satisfies...

  8. Wormholes and Time-Machines in Nonminimally Coupled Matter-Curvature Theories of Gravity

    Directory of Open Access Journals (Sweden)

    Bertolami Orfeu

    2013-09-01

    Full Text Available In this work we show the existence of traversable wormhole and time-machine solutions in a modified theory of gravity where matter and curvature are nonminimally coupled. Those solutions present a nontrivial redshift function and exist even in the presence of ordinary matter which satisfies the dominant energy condition.

  9. On-line scheduling of two-machine open shops where jobs arrive over time

    NARCIS (Netherlands)

    Chen, B.; Vestjens, A.P.A.; Woeginger, G.J.

    1998-01-01

    We investigate the problem of on-line scheduling two-machine open shops with the objective of minimizing the makespan. Jobs arrive independently over time, and the existence of a job is not known until its arrival. In the clairvoyant on-line model, the processing requirement of every job becomes

  10. Feasibility of a real-time hand hygiene notification machine learning system in outpatient clinics.

    Science.gov (United States)

    Geilleit, R; Hen, Z Q; Chong, C Y; Loh, A P; Pang, N L; Peterson, G M; Ng, K C; Huis, A; de Korne, D F

    2018-04-09

    Various technologies have been developed to improve hand hygiene (HH) compliance in inpatient settings; however, little is known about the feasibility of machine learning technology for this purpose in outpatient clinics. To assess the effectiveness, user experiences, and costs of implementing a real-time HH notification machine learning system in outpatient clinics. In our mixed methods study, a multi-disciplinary team co-created an infrared guided sensor system to automatically notify clinicians to perform HH just before first patient contact. Notification technology effects were measured by comparing HH compliance at baseline (without notifications) with real-time auditory notifications that continued till HH was performed (intervention I) or notifications lasting 15 s (intervention II). User experiences were collected during daily briefings and semi-structured interviews. Costs of implementation of the system were calculated and compared to the current observational auditing programme. Average baseline HH performance before first patient contact was 53.8%. With real-time auditory notifications that continued till HH was performed, overall HH performance increased to 100%. Costs of the machine learning system were estimated to be 46% lower than the observational auditing programme. Machine learning technology that enables real-time HH notification provides a promising cost-effective approach to both improving and monitoring HH, and deserves further development in outpatient settings. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  11. A two-level real-time vision machine combining coarse and fine grained parallelism

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Kjær-Nielsen, Anders; Pauwels, Karl

    2010-01-01

    In this paper, we describe a real-time vision machine having a stereo camera as input generating visual information on two different levels of abstraction. The system provides visual low-level and mid-level information in terms of dense stereo and optical flow, egomotion, indicating areas … a factor 90 and a reduction of latency of a factor 26 compared to processing on a single CPU core. Since the vision machine provides generic visual information it can be used in many contexts. Currently it is used in a driver assistance context as well as in two robotic applications.

  12. One method for life time estimation of a bucket wheel machine for coal moving

    Science.gov (United States)

    Vîlceanu, Fl; Iancu, C.

    2016-08-01

    Rehabilitation of outdated equipment whose lifetime has expired, or which is in its final life period, together with the high cost of investments for replacement, makes efforts to extend equipment life rational. Rehabilitation involves checking operational safety based on relevant expertise of the load-bearing metal structures and assessing the residual lifetime. Bucket wheel machines for coal are basic machines in the coal yards of power plants. The remaining life can be estimated by checking the loading on the most stressed subassembly by finite element analysis of a welding detail. The paper presents, step by step, the calculation method applied in order to establish the residual lifetime of a bucket wheel machine for coal moving using non-destructive methods of study (fatigue cracking analysis + FEA). In order to establish the actual state of the machine and the areas subject to study, an FEA of this mining equipment was performed on the geometric model of the analysed mechanical structures, with powerful CAD/FEA programs. By applying the method, the residual lifetime can be calculated by extending the results from the most stressed area of the equipment to the entire machine, thus saving the time and money of expensive replacements.

  13. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    Science.gov (United States)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
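
    A compact version of the comparison described above can be written with scikit-learn, scoring linear regression against a random forest by root-mean-square error. The features (surface traffic count, runway queue length, taxi distance) and the synthetic data are stand-ins for the surveillance-derived variables used in the paper.

    ```python
    # Compare linear regression and random forest for taxi-out time by RMSE;
    # features and data are assumed stand-ins, not the Charlotte surveillance data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(4)
    n = 2000
    surface_count = rng.integers(1, 30, n)       # departures already on the surface
    queue_len = rng.integers(0, 15, n)           # aircraft ahead in the runway queue
    taxi_dist = rng.uniform(1.0, 5.0, n)         # unimpeded taxi distance, km
    X = np.column_stack([surface_count, queue_len, taxi_dist])
    y = 4 + 0.3 * surface_count + 1.2 * queue_len + 2.0 * taxi_dist \
        + rng.normal(0, 2, n)                    # taxi-out time, minutes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(type(model).__name__, f"RMSE = {rmse:.2f} min")
    ```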

  14. Analysis of labor employment assessment on production machine to minimize time production

    Science.gov (United States)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, both in the service and manufacturing fields, is always trying to improve the efficiency of its resource use. One resource that has an important role is labor. Labor has different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, which are a special case of linear programming. In this research, the analysis of labor employment assessment on production machines to minimize production time at PT PDM is done by using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results showed that the existing labor assignment is not suitable because its completion time is longer than that of the assignment obtained using the Hungarian algorithm. Applying the Hungarian algorithm yielded time savings of 16%.
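
    A worked Hungarian-algorithm example is easy to reproduce with SciPy's linear_sum_assignment; the completion-time matrix below is made up for illustration and is not the PT PDM data.

    ```python
    # Hungarian-algorithm assignment of workers to machines, minimizing total time.
    # The time matrix (minutes) is invented for illustration.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    times = np.array([
        [14, 10, 12, 15],
        [11, 13,  9, 14],
        [13, 12, 10, 11],
        [12, 11, 14, 10],
    ])

    rows, cols = linear_sum_assignment(times)       # optimal worker-machine pairing
    for w, m in zip(rows, cols):
        print(f"worker {w} -> machine {m} ({times[w, m]} min)")
    print("total time:", times[rows, cols].sum(), "min")
    ```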

  15. Spatially and time-resolved element-specific in situ corrosion investigations with an online hyphenated microcapillary flow injection inductively coupled plasma mass spectrometry set-up

    International Nuclear Information System (INIS)

    Homazava, N.; Ulrich, A.; Kraehenbuehl, U.

    2008-01-01

    A novel technique for in situ spatial, time-resolved element-specific investigations of corrosion processes is developed. The technique is based on an online hyphenation of a specially designed microflow-capillary set-up to inductively coupled plasma mass spectrometry (ICP-MS) using flow injection sample introduction. Detailed aspects of the method development, optimization of the sample microflow introduction and flow injection characteristics for the localized corrosion analysis are described. Moreover, specific challenges of the ICP-MS analysis as applied to the analysis of corrosion sample probes, e.g. high matrix load and limited sample volume, are discussed. The efficiency of the developed technique is proved by corrosion susceptibility analysis of a commercial Al alloy. Results of the corrosion experiments of the aluminum alloy AA 6111 are presented to demonstrate the influence of various factors such as exposure time and pH value of the corrosive medium on the element-specific dissolution rates of the alloy. This novel technique provides new aspects in corrosion science and sheds new light on corrosion mechanisms

  16. Leadership set-up

    DEFF Research Database (Denmark)

    Thude, Bettina Ravnborg; Stenager, Egon; von Plessen, Christian

    2018-01-01

    Findings: The study found that the leadership set-up did not have any clear influence on interdisciplinary cooperation, as all wards had a high degree of interdisciplinary cooperation independent of which leadership set-up they had. Instead, the authors found a relation between leadership set-up and leader … could influence legitimacy. Originality/value: The study shows that leadership set-up is not the predominant factor that creates interdisciplinary cooperation; but rather, leader legitimacy also should be considered. Additionally, the study shows that leader legitimacy can be difficult to establish and that it cannot be taken for granted. This is something chief executive officers should bear in mind when they plan and implement new leadership structures. Therefore, it would also be useful to look more closely at how to achieve legitimacy in cases where the leader is from a different profession to the staff.

  17. Field Observation of Setup

    National Research Council Canada - National Science Library

    Yemm, Sean

    2004-01-01

    Setup is defined as the superelevation of mean water surface within the surfzone and is caused by the reduction in wave momentum shoreward of the breaking point and compensating positive pressure gradient...

  18. SU-F-P-20: Predicting Waiting Times in Radiation Oncology Using Machine Learning

    International Nuclear Information System (INIS)

    Joseph, A; Herrera, D; Hijal, T; Kildea, J; Hendren, L; Leung, A; Wainberg, J; Sawaf, M; Gorshkov, M; Maglieri, R; Keshavarz, M

    2016-01-01

    Purpose: Waiting times remain one of the most vexing patient satisfaction challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick or in pain, to worry about when they will receive the care they need. These waiting periods are often difficult for staff to predict and only rough estimates are typically provided based on personal experience. This level of uncertainty leaves most patients unable to plan their calendar, making the waiting experience uncomfortable, even painful. In the present era of electronic health records (EHRs), waiting times need not be so uncertain. Extensive EHRs provide unprecedented amounts of data that can statistically cluster towards representative values when appropriate patient cohorts are selected. Predictive modelling, such as machine learning, is a powerful approach that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The application of a machine learning algorithm to waiting time data has the potential to produce personalized waiting time predictions such that the uncertainty may be removed from the patient’s waiting experience. Methods: In radiation oncology, patients typically experience several types of waiting (eg waiting at home for treatment planning, waiting in the waiting room for oncologist appointments and daily waiting in the waiting room for radiotherapy treatments). A daily treatment wait time model is discussed in this report. To develop a prediction model using our large dataset (with more than 100k sample points) a variety of machine learning algorithms from the Python package sklearn were tested. Results: We found that the Random Forest Regressor model provides the best predictions for daily radiotherapy treatment waiting times. Using this model, we achieved a median residual (actual value minus predicted value) of 0.25 minutes and a standard deviation residual of 6.5 minutes

  19. SU-F-P-20: Predicting Waiting Times in Radiation Oncology Using Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, A; Herrera, D; Hijal, T; Kildea, J [McGill University Health Centre, Montreal, Quebec (Canada); Hendren, L; Leung, A; Wainberg, J; Sawaf, M; Gorshkov, M; Maglieri, R; Keshavarz, M [McGill University, Montreal, Quebec (Canada)

    2016-06-15

    Purpose: Waiting times remain one of the most vexing patient satisfaction challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick or in pain, to worry about when they will receive the care they need. These waiting periods are often difficult for staff to predict and only rough estimates are typically provided based on personal experience. This level of uncertainty leaves most patients unable to plan their calendar, making the waiting experience uncomfortable, even painful. In the present era of electronic health records (EHRs), waiting times need not be so uncertain. Extensive EHRs provide unprecedented amounts of data that can statistically cluster towards representative values when appropriate patient cohorts are selected. Predictive modelling, such as machine learning, is a powerful approach that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The application of a machine learning algorithm to waiting time data has the potential to produce personalized waiting time predictions such that the uncertainty may be removed from the patient’s waiting experience. Methods: In radiation oncology, patients typically experience several types of waiting (eg waiting at home for treatment planning, waiting in the waiting room for oncologist appointments and daily waiting in the waiting room for radiotherapy treatments). A daily treatment wait time model is discussed in this report. To develop a prediction model using our large dataset (with more than 100k sample points) a variety of machine learning algorithms from the Python package sklearn were tested. Results: We found that the Random Forest Regressor model provides the best predictions for daily radiotherapy treatment waiting times. Using this model, we achieved a median residual (actual value minus predicted value) of 0.25 minutes and a standard deviation residual of 6.5 minutes
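
    A minimal version of this workflow, under assumed features and synthetic data rather than the actual EHR extract, fits a Random Forest regressor and reports the median and standard deviation of the residuals (actual minus predicted) as in the abstract.

    ```python
    # Random Forest waiting-time sketch: fit, then report median and std of residuals.
    # Features and data are hypothetical, not the EHR dataset used in the study.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 5000
    patients_ahead = rng.integers(0, 12, n)             # patients still queued for the unit
    slot_duration = rng.choice([10, 15, 20, 30], n)     # scheduled slot length, min
    running_delay = rng.exponential(3, n)               # current delay on the machine, min
    X = np.column_stack([patients_ahead, slot_duration, running_delay])
    y = 2 + 0.4 * patients_ahead * slot_duration + running_delay + rng.normal(0, 5, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    residuals = y_te - model.predict(X_te)              # actual minus predicted, minutes
    print(f"median residual: {np.median(residuals):.2f} min")
    print(f"residual std:    {np.std(residuals):.2f} min")
    ```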

  20. Single-machine common/slack due window assignment problems with linear decreasing processing times

    Science.gov (United States)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  1. Non-contact test set-up for aeroelasticity in a rotating turbomachine combining a novel acoustic excitation system with tip-timing

    International Nuclear Information System (INIS)

    Freund, O; Seume, J R; Montgomery, M; Mittelbach, M

    2014-01-01

    Due to trends in aero-design, aeroelasticity becomes increasingly important in modern turbomachines. Design requirements of turbomachines lead to the development of high aspect ratio blades and blade integral disc designs (blisks), which are especially prone to complex modes of vibration. Therefore, experimental investigations yielding high quality data are required for improving the understanding of aeroelastic effects in turbomachines. One possibility to achieve high quality data is to excite and measure blade vibrations in turbomachines. The major requirement for blade excitation and blade vibration measurements is to minimize interference with the aeroelastic effects to be investigated. Thus in this paper, a non-contact—and thus low interference—experimental set-up for exciting and measuring blade vibrations is proposed and shown to work. A novel acoustic system excites rotor blade vibrations, which are measured with an optical tip-timing system. By performing measurements in an axial compressor, the potential of the acoustic excitation method for investigating aeroelastic effects is explored. The basic principle of this method is described and proven through the analysis of blade responses at different acoustic excitation frequencies and at different rotational speeds. To verify the accuracy of the tip-timing system, amplitudes measured by tip-timing are compared with strain gage measurements. They are found to agree well. Two approaches to vary the nodal diameter (ND) of the excited vibration mode by controlling the acoustic excitation are presented. By combining the different excitable acoustic modes with a phase-lag control, each ND of the investigated 30 blade rotor can be excited individually. This feature of the present acoustic excitation system is of great benefit to aeroelastic investigations and represents one of the main advantages over other excitation methods proposed in the past. In future studies, the acoustic excitation method will be used

  2. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    Science.gov (United States)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied, with the advantage of discovering relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 of the Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performances of the models, one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.

  3. Time-frequency feature analysis and recognition of fission neutrons signal based on support vector machine

    International Nuclear Information System (INIS)

    Jin Jing; Wei Biao; Feng Peng; Tang Yuelin; Zhou Mi

    2010-01-01

    Based on the interdependent relationship between fission neutrons (²⁵²Cf) and the fission chain (²³⁵U system), this paper presents time-frequency feature analysis and recognition of the fission neutron signal based on a support vector machine (SVM), building on an analysis of the signal characteristics and the measuring principle of the ²⁵²Cf fission neutron signal. The time-frequency characteristics and energy features of the fission neutron signal are extracted using wavelet decomposition and de-noising wavelet packet decomposition, and then applied to training and classification by means of a support vector machine based on statistical learning theory. The results show that it is effective to obtain features of the nuclear signal via wavelet decomposition and de-noising wavelet packet decomposition, and that the latter better reflects the internal characteristics of the fission neutron system. With training accomplished, the SVM classifier achieves an accuracy rate above 70%, overcoming the lack of training samples and verifying the effectiveness of the algorithm. (authors)
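
    The feature-extraction idea can be sketched as follows. This is only an illustration: the signals are synthetic stand-ins for the measured neutron data, and the wavelet, decomposition level and SVM parameters are assumed, not taken from the paper.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    def wp_energy_features(signal, wavelet="db4", level=3):
        """Relative energy of each terminal wavelet-packet node."""
        wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")
        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        return energies / energies.sum()

    def make_signal(cls, n=512):
        # Two synthetic classes with different spectral content
        t = np.linspace(0, 1, n)
        freq = 20 if cls == 0 else 60
        return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=n)

    labels = rng.integers(0, 2, 200)
    X = np.array([wp_energy_features(make_signal(c)) for c in labels])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```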

  4. Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2014-01-01

    Full Text Available An off-line optimization approach for high-precision minimum-time feedrate in CNC machining is proposed. Besides the ordinarily considered velocity, acceleration, and jerk constraints, a dynamic performance constraint for each servo drive is also considered in this optimization problem to improve tracking precision along the optimized feedrate trajectory. Tracking error is used to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the optimal trajectory is proved in this paper; then a novel constraint-handling method is proposed to realize a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. Then the practicability and robustness of the trajectory generated by the proposed approach are demonstrated by a butterfly contour machining example.

  5. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    International Nuclear Information System (INIS)

    Xu Ruirui; Bian Guoxing; Gao Chenfeng; Chen Tianlun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capabilities of the LS-SVM are both greatly improved.
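
    A rough sketch of LS-SVM function estimation applied to one-step time-series prediction is given below. The kernel width, regularization parameter γ, lag order and test series are all assumptions for illustration; the clustering-based pruning step described in the abstract is not reproduced.

    ```python
    import numpy as np

    def rbf_kernel(A, B, sigma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        # Solve the LS-SVM linear system [[0, 1^T],[1, K + I/gamma]] [b; alpha] = [0; y]
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[1:], sol[0], X, sigma          # alpha, b, support data, sigma

    def lssvm_predict(model, Xnew):
        alpha, b, X, sigma = model
        return rbf_kernel(Xnew, X, sigma) @ alpha + b

    # Embed a scalar series into (lag-vector, next-value) pairs and predict one step ahead
    rng = np.random.default_rng(2)
    t = np.arange(400)
    series = np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size)
    lags = 5
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    model = lssvm_fit(X[:300], y[:300])
    pred = lssvm_predict(model, X[300:])
    print("test MSE:", np.mean((pred - y[300:]) ** 2))
    ```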

  6. Production Planning of a Failure-Prone Manufacturing System under Different Setup Scenarios

    Directory of Open Access Journals (Sweden)

    Guy-Richard Kibouka

    2016-01-01

    Full Text Available This paper presents a control problem for the optimization of the production and setup activities of an industrial system operating in an uncertain environment. The system is subject to random disturbances (breakdowns and repairs), which can cause stock shortages. The considered industrial system represents a well-known production context in industry and consists of a machine producing two types of products. In order to switch production from one product type to another, a time factor and a reconfiguration cost for the machine are associated with the setup activities. The part production rates and the setup strategies are the decision variables that influence the inventory and the capacity of the system. The objective of the study is to find the production and setup policies that minimize the setup and inventory costs, as well as those associated with shortages. A modeling approach based on stochastic optimal control theory and a numerical algorithm used to solve the obtained optimality conditions are presented. The contribution of the paper, for industrial systems not previously studied in the literature, is illustrated through a numerical example and a comparative study.

  7. A Finite State Machine Approach to Algorithmic Lateral Inhibition for Real-Time Motion Detection †

    Directory of Open Access Journals (Sweden)

    María T. López

    2018-05-01

    Full Text Available Many researchers have explored the relationship between recurrent neural networks and finite state machines. Finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The neurally-inspired lateral inhibition method, and its application to motion detection tasks, have been successfully implemented in recent years. In this paper, control knowledge of the algorithmic lateral inhibition (ALI) method is described and applied by means of finite state machines, in which the state space is constituted from the set of distinguishable cases of accumulated charge in a local memory. The article describes an ALI implementation for a motion detection task. For the implementation, we have chosen to use one of the members of the 16-nm Kintex UltraScale+ family of Xilinx FPGAs. FPGAs provide the necessary accuracy, resolution, and precision to run neural algorithms alongside current sensor technologies. The results offered in this paper demonstrate that this implementation provides accurate object tracking performance on several datasets, obtaining a high F-score value (0.86 for the most complex sequence used). Moreover, it outperforms implementations of a complete ALI algorithm and a simplified version of the ALI algorithm—named "accumulative computation"—which was run about ten years ago, now reaching real-time processing times that were simply not achievable at that time for ALI.

  8. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    Science.gov (United States)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and of the total normal processing time of the jobs preceding it in the sequence. The objective is to determine an optimal schedule that minimizes the total completion time. The problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to the job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
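
    The V-shaped idea can be illustrated with the toy sketch below. The processing-time model (learning index a, deterioration rate b) is an assumption made only for the example and is not the paper's exact model; the sketch simply builds a V-shaped sequence and evaluates its total completion time.

    ```python
    def actual_time(p_j, start, prior_normal, a=-0.3, b=0.05):
        """Assumed model: learning from prior normal work, linear deterioration in start time."""
        return p_j * (1 + prior_normal) ** a + b * start

    def total_completion_time(seq):
        t, prior, total = 0.0, 0.0, 0.0
        for p in seq:
            t += actual_time(p, t, prior)
            prior += p
            total += t
        return total

    def v_shaped_sequence(p):
        """Place jobs in non-increasing order alternately at the two ends of the sequence."""
        front, back = [], []
        for i, job in enumerate(sorted(p, reverse=True)):
            (front if i % 2 == 0 else back).append(job)
        return front + back[::-1]        # processing times decrease, then increase

    jobs = [7, 2, 9, 4, 1, 6, 3]
    seq = v_shaped_sequence(jobs)
    print("V-shaped sequence:", seq, "total completion time:", round(total_completion_time(seq), 2))
    ```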

  9. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
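
    The perturbation-based sensitivity computation can be sketched as below. The data, model architecture and perturbation size are assumptions standing in for the MWA-ANN setup described above; only the finite-difference idea (change in response per unit change in forcing) is illustrated.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    n = 1000
    rainfall_mwa = rng.gamma(2.0, 2.0, n)          # stand-in moving-window-averaged rainfall
    pumping_mwa = rng.uniform(0, 5, n)             # stand-in moving-window-averaged groundwater use
    X = np.column_stack([rainfall_mwa, pumping_mwa])
    lake_level = 10 + 0.4 * rainfall_mwa - 0.6 * pumping_mwa + rng.normal(0, 0.2, n)

    model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
    model.fit(X, lake_level)

    delta = 0.1                                    # perturbation applied to the rainfall input
    X_pert = X.copy()
    X_pert[:, 0] += delta
    sensitivity = (model.predict(X_pert) - model.predict(X)) / delta   # per-time-step sensitivity
    print("mean sensitivity to rainfall forcing:", round(sensitivity.mean(), 3))
    ```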

  10. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, is proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times, probable rework at both stations, different ready times for all jobs, rework times at both stations, and unrelated parallel machines, is investigated in a multi-objective manner with regard to the simultaneous minimization of the maximum job completion time and the average latency functions. In this study, parameter setting was carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem considering sequence-dependent setup times, probable rework at both stations, different ready times for all jobs, rework times at both stations, and unrelated parallel machines, which are the real constraints.

  11. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    International Nuclear Information System (INIS)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-01-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, is proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times, probable rework at both stations, different ready times for all jobs, rework times at both stations, and unrelated parallel machines, is investigated in a multi-objective manner with regard to the simultaneous minimization of the maximum job completion time and the average latency functions. In this study, parameter setting was carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem considering sequence-dependent setup times, probable rework at both stations, different ready times for all jobs, rework times at both stations, and unrelated parallel machines, which are the real constraints.

  12. Real-time depth monitoring and control of laser machining through scanning beam delivery system

    International Nuclear Information System (INIS)

    Ji, Yang; Grindal, Alexander W; Fraser, James M; Webster, Paul J L

    2015-01-01

    Scanning optics enable many laser applications in manufacturing because their low inertia allows rapid movement of the process beam across the sample. We describe our method of inline coherent imaging for real-time (up to 230 kHz) micron-scale (7–8 µm axial resolution) tracking and control of laser machining depth through a scanning galvo-telecentric beam delivery system. For 1 cm trench etching in stainless steel, we collect high speed intrapulse and interpulse morphology which is useful for further understanding underlying mechanisms or comparison with numerical models. We also collect overall sweep-to-sweep depth penetration which can be used for feedback depth control. For trench etching in silicon, we show the relationship of etch rate with average power and scan speed by computer processing of depth information without destructive sample post-processing. We also achieve three-dimensional infrared continuous wave (modulated) laser machining of a 3.96 × 3.96 × 0.5 mm³ (length × width × maximum depth) pattern on steel with depth feedback. To the best of our knowledge, this is the first successful demonstration of direct real-time depth monitoring and control of laser machining with scanning optics. (paper)

  13. Uncertainties in global radiation time series forecasting using machine learning: The multilayer perceptron case

    International Nuclear Information System (INIS)

    Voyant, Cyril; Notton, Gilles; Darras, Christophe; Fouilloy, Alexis; Motte, Fabrice

    2017-01-01

    As global solar radiation forecasting is a very important challenge, several methods are devoted to this goal with different levels of accuracy and confidence. In this study we propose to better understand how uncertainty is propagated in the context of global radiation time series forecasting using machine learning. We propose to decompose the error into four kinds of uncertainty: the error due to the measurement, the variability of the time series, the machine learning uncertainty, and the error related to the horizon. Together, these components of the error allow a global uncertainty to be determined, generating prediction bands related to the prediction efficiency. We have also defined a reliability index which could be very useful for the grid manager in order to estimate the validity of predictions. We have tested this method on a multilayer perceptron, which is a popular machine learning technique. We have shown that the global error and its components are essential to quantify in order to estimate the reliability of the model outputs. The described method has been successfully applied to four meteorological stations in the Mediterranean area. - Highlights: • Solar irradiation predictions require confidence bands. • There are many kinds of uncertainty to take into account in order to propose prediction bands. • The ranking of the different kinds of uncertainty is essential to propose an operational tool for grid managers.

  14. Photoinduced charge transfer in a transition metal complex investigated by time-resolved X-ray absorption fine structure spectroscopy. Setup and experiment

    International Nuclear Information System (INIS)

    Goeries, Dennis

    2015-02-01

    In the framework of this thesis, the development of a time-resolved X-ray absorption spectroscopy experiment and its application to fac-Ir(ppy)₃ are described. Such experiments require a very stable setup in terms of spatial and temporal accuracy. Therefore, the stability properties of the present installation were investigated in detail and continuously improved, in particular the synchronization of the ultrashort pulse laser system to the storage ring as well as the spatial stability of both the X-ray and laser beams. Experiments utilizing the laser-pump and X-ray-probe configuration were applied to the green phosphorescence emitter complex fac-Ir(ppy)₃ dissolved in dimethyl sulfoxide. Structural and electronic changes were triggered by photoexcitation of the metal-to-ligand charge transfer band with ultrashort laser pulses at a wavelength of 343 nm. The excited triplet state spectrum was extracted from the measured pump-probe X-ray absorption spectrum using an ionic approximation. The results confirm the anticipated metal-to-ligand charge transfer as shown by an ionization potential shift of the iridium atom. The symmetry of the complex was found to be pseudo-octahedral. This allowed the first experimental determination of the bond length of fac-Ir(ppy)₃ in an octahedral approximation and revealed a decrease in the bond length of the first coordination shell in the triplet state. The first- and second-order decay kinetics of the triplet state were investigated in a combination of X-ray and laser based experiments and revealed self-quenching as well as triplet-triplet annihilation rate constants.

  15. Real-time spot size camera for pulsed high-energy radiographic machines

    International Nuclear Information System (INIS)

    Watson, S.A.

    1993-01-01

    The focal spot size of an x-ray source is a critical parameter which degrades resolution in a flash radiograph. For best results, a small round focal spot is required. Therefore, a fast and accurate measurement of the spot size is highly desirable to facilitate machine tuning. This paper describes two systems developed for Los Alamos National Laboratory's Pulsed High-Energy Radiographic Machine Emitting X-rays (PHERMEX) facility. The first uses a CCD camera combined with high-brightness fluors, while the second utilizes phosphor storage screens. Other techniques typically record only the line spread function on radiographic film, while the systems described in this paper measure the more general two-dimensional point-spread function and associated modulation transfer function in real time for shot-to-shot comparison.

  16. Network of time-multiplexed optical parametric oscillators as a coherent Ising machine

    Science.gov (United States)

    Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa

    2014-12-01

    Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems—Ising machines—capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented with above-threshold binary phases of the OPOs and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small non-deterministic polynomial time-hard problem on a 4-OPO Ising machine and in 1,000 runs no computational error was detected.
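
    To make the optimization target concrete, the toy sketch below exhaustively finds the ground state of a 4-spin Ising Hamiltonian H = -Σ J_ij s_i s_j, the same problem size reported for the 4-OPO machine. The couplings J are arbitrary assumptions for illustration, and the brute-force search is of course nothing like the optical hardware.

    ```python
    import itertools
    import numpy as np

    # Assumed upper-triangular couplings for a 4-spin instance
    J = np.array([[0, -1,  1,  0],
                  [0,  0, -1,  1],
                  [0,  0,  0, -1],
                  [0,  0,  0,  0]])

    def energy(spins):
        s = np.array(spins)
        return -np.sum(J * np.outer(s, s))     # -sum over i<j of J_ij * s_i * s_j

    best = min(itertools.product([-1, 1], repeat=4), key=energy)
    print("ground state:", best, "energy:", energy(best))
    ```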

  17. On non-permutation solutions to some two machine flow shop scheduling problems

    NARCIS (Netherlands)

    V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)

    1994-01-01

    In this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation

  18. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    Science.gov (United States)

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (²²²Rn). The physical assumption underlying the modelling is that the Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available, but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. The performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying learning machines, we show, as our second purpose, that missing data or periods of the Rn series can be reconstructed and resampled on a regular grid reasonably well, if data for appropriate physical controls are available. The techniques also identify to which degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to which degree physical, in this case environmental, variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year. The first two are physical
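
    A compact sketch of the imputation idea follows. The data are synthetic and the variable names (temperature, relative humidity, day of year) are used only because the abstract names them as the most important predictors; model settings are defaults, not the study's.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(4)
    n = 2000
    df = pd.DataFrame({
        "temperature": 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n),
        "rel_humidity": np.clip(60 + rng.normal(0, 10, n), 0, 100),
        "day_of_year": (np.arange(n) % 365) + 1,
    })
    df["radon"] = 40 - 0.8 * df["temperature"] + 0.3 * df["rel_humidity"] + rng.normal(0, 5, n)
    df.loc[rng.choice(n, 300, replace=False), "radon"] = np.nan      # simulate gaps in the Rn series

    predictors = ["temperature", "rel_humidity", "day_of_year"]
    known = df["radon"].notna()

    # Train on complete sections, then fill the gaps where only the controls exist
    model = GradientBoostingRegressor(random_state=0)
    model.fit(df.loc[known, predictors], df.loc[known, "radon"])
    df.loc[~known, "radon"] = model.predict(df.loc[~known, predictors])

    print("relative importance:", dict(zip(predictors, model.feature_importances_.round(2))))
    ```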

  19. Parallel patterns determination in solving cyclic flow shop problem with setups

    Directory of Open Access Journals (Sweden)

    Bożejko Wojciech

    2017-06-01

    Full Text Available The subject of this work is a new idea of blocks for the cyclic flow shop problem with setup times, using multiple patterns of different sizes determined for each machine, constituting an optimal schedule of cities for the traveling salesman problem (TSP). We propose to take advantage of the Intel Xeon Phi parallel computing environment during the so-called 'blocks' determination based on patterns, in effect significantly improving the quality of the obtained results.

  20. Set-up for differential manometers testing

    International Nuclear Information System (INIS)

    Ratushnyj, M.I.; Galkin, Yu.V.; Nechaj, A.G.

    1985-01-01

    The characteristics of a set-up for checking and testing the metrological characteristics of TPP and NPP differential manometers with extreme pressure drops of up to 250 kPa are briefly described. The set-up provides automatic and manual assignment of gauge air pressure values with errors of 0.1 and 0.25%, respectively. The set-up is supplied with standard equipment to measure output signals. The set-up is powered from a single-phase 220 V alternating current circuit. Air is supplied from a pneumatic system at a pressure of 0.4-0.6 MPa. Application of the set-up increases operating efficiency 5 times when checking and tuning differential manometers.

  1. Implementation of Real-Time Machining Process Control Based on Fuzzy Logic in a New STEP-NC Compatible System

    Directory of Open Access Journals (Sweden)

    Po Hu

    2016-01-01

    Full Text Available Implementing real-time machining process control on the shop floor has great significance for raising the efficiency and quality of product manufacturing. A framework and implementation methods for real-time machining process control based on STEP-NC are presented in this paper. A data model compatible with the ISO 14649 standard is built to transfer high-level real-time machining process control information between CAPP systems and CNC systems, in which the EXPRESS language is used to define new STEP-NC entities. Methods for implementing real-time machining process control on the shop floor are studied and realized on an open STEP-NC controller, which is developed using object-oriented, multithreading, and shared-memory technologies in combination. The cutting force in a specific direction of the machining feature in side milling is chosen as the controlled object, and a fuzzy control algorithm with a self-adjusting factor is designed and embedded in the software CNC kernel of the STEP-NC controller. Experiments are carried out to verify the proposed framework, STEP-NC data model, and implementation methods for real-time machining process control. The results of the experiments prove that real-time machining process control tasks can be interpreted and executed correctly by the STEP-NC controller on the shop floor, keeping the actual cutting force around the ideal value whether the axial cutting depth changes suddenly or continuously.
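
    The control idea can be illustrated with the highly simplified sketch below. The membership functions, rule weights, self-adjusting factor and force-feed relation are all assumptions made for the example; the paper's actual controller and STEP-NC integration are not reproduced.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_feed_adjust(error, d_error, alpha=0.5):
        # Normalized fuzzy sets (negative / zero / positive) and output singletons
        sets = {"N": (-1.5, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.5)}
        out = {"N": -1.0, "Z": 0.0, "P": 1.0}
        # Self-adjusting factor alpha shifts weight between error and error change
        u = alpha * error + (1 - alpha) * d_error
        num = sum(tri(u, *sets[k]) * out[k] for k in sets)
        den = sum(tri(u, *sets[k]) for k in sets) or 1.0
        return num / den                          # normalized change of feed override

    # Regulate an assumed cutting force toward a 100 N setpoint by adjusting the feed override
    F, setpoint, feed, prev_err = 60.0, 100.0, 1.0, 0.0
    for step in range(20):
        err = (setpoint - F) / setpoint
        adj = fuzzy_feed_adjust(err, err - prev_err)
        feed = max(0.1, feed * (1 + 0.2 * adj))
        F = 100.0 * feed ** 0.8                   # assumed force-feed relation
        prev_err = err
    print(f"final force = {F:.1f} N at feed override {feed:.2f}")
    ```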

  2. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    Science.gov (United States)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

    This article addresses the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering the transfer times of jobs and tools between machines, to generate the best optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). The performance of an FMS is expected to improve through effective utilization of its resources, by proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent tool and a good alternative for solving optimization problems like scheduling, and it has proven itself. The proposed SOS algorithm is tested on 22 job sets with makespan as the objective for the scheduling of machines and tools, where machines are allowed to share tools without considering the transfer times of jobs and tools, and the results are compared with the results of existing methods. The results show that SOS outperforms them. The same SOS algorithm is then used for the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering the transfer times of jobs and tools, to determine the best optimal sequences that minimize makespan.

  3. A Virtual Astronomical Research Machine in No Time (VARMiNT)

    Science.gov (United States)

    Beaver, John

    2012-05-01

    We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.

  4. Automated business process management – in times of digital transformation using machine learning or artificial intelligence

    Directory of Open Access Journals (Sweden)

    Paschek Daniel

    2017-01-01

    Full Text Available The continuous optimization of business processes is still a challenge for companies. In times of digital transformation, faster-changing internal and external framework conditions, and new customer expectations for the fastest delivery and best quality of goods, companies should set up their internal processes in the best possible way. But what should be done if framework conditions change unexpectedly? The purpose of the paper is to analyse how the digital transformation will impact Business Process Management (BPM) when using methods like machine learning or artificial intelligence. Therefore, the core components are explained, compared and set in relation to each other. To identify application areas, interviews and analyses were conducted with digital companies. The findings of the paper are recommendations for action in the field of BPM and process optimization through machine learning and artificial intelligence. The approach of optimizing and managing processes via machine learning and artificial intelligence will support companies in deciding which tool will be best for automated BPM.

  5. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame speed and latency. Architecture 1 consists of hardware machine vision modules modeled at the Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from the performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power compared to the single FPGA chip with hardware modules and a soft-core processor.

  6. Patients setup verification tool for RT (PSVTs): DRR, simulation, portal and digital images

    International Nuclear Information System (INIS)

    Lee, Suk; Seong, Jin Sil; Chu, Sung Sil; Lee, Chang Geol; Suh, Chang Ok; Kwon, Soo Il

    2003-01-01

    To develop a patients' setup verification tool (PSVT) to verify the alignment of the machine and the target isocenters, and the reproducibility of patients' setup, for three-dimensional conformal radiotherapy (3DCRT) and intensity modulated radiotherapy (IMRT). The utilization of this system is evaluated through phantom and patient case studies. We developed and clinically tested a new method for patients' setup verification, using digitally reconstructed radiography (DRR), simulation, portal and digital images. The PSVT system was networked to a Pentium PC for the transmission of the acquired images to the PC for analysis. To verify the alignment of the machine and target isocenters, orthogonal pairs of simulation images were used as verification images. Errors in the isocenter alignment were measured by comparing the verification images with DRRs from CT images. Orthogonal films were taken of all the patients once a week. These verification films were compared with the DRRs used for the treatment setup. By performing this procedure at every treatment, using humanoid phantom and patient cases, the localization errors could be analyzed and adjustments made from the translations. The reproducibility of the patients' setup was verified using portal and digital images. The PSVT system was developed to verify the alignment of the machine and the target isocenters, and the reproducibility of the patients' setup, for 3DCRT and IMRT. The results show that the localization errors are 0.8±0.2 mm (AP) and 1.0±0.3 mm (lateral) in the cases relating to the brain, and 1.1±0.5 mm (AP) and 1.0±0.6 mm (lateral) in the cases relating to the pelvis. The reproducibility of the patients' setup was verified by visualization, using real-time image acquisition, leading to the practical utilization of our software. A PSVT system was developed for the verification of the alignment between the machine and the target isocenters, and the reproducibility of the patients' setup, in 3DCRT and IMRT.

  7. Information extraction from dynamic PS-InSAR time series using machine learning

    Science.gov (United States)

    van de Kerkhof, B.; Pankratius, V.; Chang, L.; van Swol, R.; Hanssen, R. F.

    2017-12-01

    Due to the increasing number of SAR satellites, with shorter repeat intervals and higher resolutions, SAR data volumes are exploding. Time series analyses of SAR data, i.e. Persistent Scatterer (PS) InSAR, enable the deformation monitoring of the built environment at an unprecedented scale, with hundreds of scatterers per km², updated weekly. Potential hazards, e.g. due to failure of aging infrastructure, can be detected at an early stage. Yet, this requires the operational data processing of billions of measurement points, over hundreds of epochs, updating this data set dynamically as new data come in, and testing whether points (start to) behave in an anomalous way. Moreover, the quality of PS-InSAR measurements is ambiguous and heterogeneous, which will yield false positives and false negatives. Such analyses are numerically challenging. Here we extract relevant information from PS-InSAR time series using machine learning algorithms. We cluster (group together) time series with similar behaviour, even though they may not be spatially close, such that the results can be used for further analysis. First we reduce the dimensionality of the dataset in order to be able to cluster the data, since applying clustering techniques to high dimensional datasets often yields unsatisfactory results. Our approach is to apply t-distributed Stochastic Neighbor Embedding (t-SNE), a machine learning algorithm for dimensionality reduction of high-dimensional data to a 2D or 3D map, and cluster this result using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The results show that we are able to detect and cluster time series with similar behaviour, which is the starting point for more extensive analysis into the underlying driving mechanisms. The results of the methods are compared to conventional hypothesis testing as well as a Self-Organising Map (SOM) approach. Hypothesis testing is robust and takes the stochastic nature of the observations into account
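
    A compact sketch of this t-SNE plus DBSCAN pipeline is shown below. The time series are synthetic stand-ins for PS displacement histories, and the perplexity and eps values are assumptions, not values tuned on real InSAR data.

    ```python
    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(5)
    epochs = np.arange(100)

    # Three synthetic behaviour classes (mm): stable, linear subsidence, seasonal motion
    stable = rng.normal(0, 1, (200, epochs.size))
    linear = -0.3 * epochs + rng.normal(0, 1, (200, epochs.size))
    seasonal = 5 * np.sin(2 * np.pi * epochs / 25) + rng.normal(0, 1, (200, epochs.size))
    series = np.vstack([stable, linear, seasonal])

    # Reduce each time series to a 2D embedding, then cluster the embedding
    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(series)
    labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedding)
    print("clusters found (excluding noise):", len(set(labels) - {-1}))
    ```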

  8. Model-based setup assistant for progressive tools

    Science.gov (United States)

    Springer, Robert; Gräler, Manuel; Homberg, Werner; Henke, Christian; Trächtler, Ansgar

    2018-05-01

    In the field of production systems, globalization and technological progress lead to increasing requirements regarding part quality, delivery time and costs. Hence, today's production is challenged much more than a few years ago: it has to be very flexible and produce small batch sizes economically to satisfy consumers' demands and avoid unnecessary stock. Furthermore, a trend towards increasing functional integration continues to lead to an ongoing miniaturization of sheet metal components. In the industry of electric connectivity, for example, the miniaturized connectors are manufactured by progressive tools, which are usually used for very large batches. These tools are installed in mechanical presses and then set up by a technician, who has to manually adjust a wide range of punch-bending operations. Disturbances like material thickness, temperature, lubrication or tool wear complicate the setup procedure. In view of the increasing demand for production flexibility, this time-consuming process has to be handled more and more often. In this paper, a new approach for a model-based setup assistant is proposed as a solution, which is exemplarily applied in combination with a progressive tool. First, progressive tools and, more specifically, their setup process are described, and based on that, the challenges are pointed out. As a result, a systematic process to set up the machines is introduced. Subsequently, the process is investigated with an FE analysis regarding the effects of the disturbances. In the next step, design of experiments is used to systematically develop a regression model of the system's behaviour. This model is integrated within an optimization in order to calculate optimal machine parameters and the subsequently necessary adjustment of the progressive tool due to the disturbances. Finally, the assistant is tested in a production environment and the results are discussed.

  9. Extracting Date/Time Expressions in Super-Function Based Japanese-English Machine Translation

    Science.gov (United States)

    Sasayama, Manabu; Kuroiwa, Shingo; Ren, Fuji

    Super-Function Based Machine Translation (SFBMT), which is a type of Example-Based Machine Translation, has a feature which makes it possible to expand the coverage of examples by changing nouns into variables; however, there were problems extracting entire date/time expressions containing parts-of-speech other than nouns, because only nouns/numbers were changed into variables. We describe a method for extracting date/time expressions for SFBMT. SFBMT uses noun determination rules to extract nouns and a bilingual dictionary to obtain the correspondence of the extracted nouns between the source and target languages. In this method, we add a rule to extract date/time expressions and then extract date/time expressions from a Japanese-English bilingual corpus. The evaluation results show that the precision of this method for Japanese sentences is 96.7%, with a recall of 98.2%, and the precision for English sentences is 94.7%, with a recall of 92.7%.

  10. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    International Nuclear Information System (INIS)

    Veronesi, F; Grassi, S

    2016-01-01

    Wind resource assessment is a key aspect of wind farm planning since it allows the long-term electricity production to be estimated. Moreover, wind speed time-series at high resolution are helpful for estimating the temporal changes of the electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation, but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed, but also its confidence interval, which are crucial data for planners. (paper)
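
    The general idea of delivering a mean estimate plus a confidence interval per location can be sketched with quantile gradient boosting, as below. This is not the paper's model: the features, data and quantile levels are placeholders chosen only to illustrate the output format (estimate plus interval).

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(6)
    n = 3000
    X = np.column_stack([
        rng.uniform(200, 2500, n),       # elevation (m), assumed predictor
        rng.uniform(0, 1, n),            # terrain roughness index, assumed predictor
        rng.uniform(0, 24, n),           # hour of day
    ])
    wind = (3 + 0.002 * X[:, 0] - 2 * X[:, 1]
            + 0.5 * np.sin(2 * np.pi * X[:, 2] / 24) + rng.normal(0, 1, n))

    # One model per quantile: lower bound, median, upper bound
    models = {q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(X, wind)
              for q in (0.05, 0.5, 0.95)}

    x_new = np.array([[800.0, 0.3, 14.0]])
    lo, med, hi = (models[q].predict(x_new)[0] for q in (0.05, 0.5, 0.95))
    print(f"estimated wind speed {med:.1f} m/s (90% interval {lo:.1f}-{hi:.1f} m/s)")
    ```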

  11. The ASDEX upgrade digital video processing system for real-time machine protection

    Energy Technology Data Exchange (ETDEWEB)

    Drube, Reinhard, E-mail: reinhard.drube@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Neu, Gregor [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard H.; Lüddecke, Klaus [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, 82393 Iffeldorf (Germany); Lunt, Tilmann; Herrmann, Albrecht [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2013-11-15

    Highlights: • We present the Real-Time Video diagnostic system of ASDEX Upgrade. • We show the implemented image processing algorithms for machine protection. • The way to achieve a robust operating multi-threading Real-Time system is described. -- Abstract: This paper describes the design, implementation, and operation of the Video Real-Time (VRT) diagnostic system of the ASDEX Upgrade plasma experiment and its integration with the ASDEX Upgrade Discharge Control System (DCS). Hot spots produced by heating systems erroneously or accidentally hitting the vessel walls, or from objects in the vessel reaching into the plasma outer border, show up as bright areas in the videos during and after the reaction. A system to prevent damage to the machine by allowing for intervention in a running discharge of the experiment was proposed and implemented. The VRT was implemented on a multi-core real-time Linux system. Up to 16 analog video channels (color and b/w) are acquired and multiple regions of interest (ROI) are processed on each video frame. Detected critical states can be used to initiate appropriate reactions – e.g. gracefully terminate the discharge. The system has been in routine operation since 2007.

  12. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    Science.gov (United States)

    Veronesi, F.; Grassi, S.

    2016-09-01

    Wind resource assessment is a key aspect of wind farm planning since it allows the long-term electricity production to be estimated. Moreover, wind speed time-series at high resolution are helpful for estimating the temporal changes of the electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation, but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed, but also its confidence interval, which are crucial data for planners.

  13. Lagged kernel machine regression for identifying time windows of susceptibility to exposures of complex mixtures.

    Science.gov (United States)

    Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A

    2018-07-01

    The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge on time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as: multi-collinearity among the exposures both within time points and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures, and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios, and demonstrates large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR demonstrates gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.

  14. Numerical experimentation on focusing time and neutron yield in GN1 plasma focus machine

    International Nuclear Information System (INIS)

    Singh, Arwinder; Lee, Sing; Saw, S.H.

    2014-01-01

    In this paper, we show how we fitted the Lee six-phase model code to analyze the current waveform of the GN1 plasma focus machine working in deuterium gas. The Lee six-phase model code was then configured to work between 0.5 and 6 Torr, and the results for both focusing time and neutron yield were compared with the published experimental results. The final results indicate that the Lee code gives realistic plasma dynamics and focus properties together with a realistic neutron yield for the GN1 plasma focus, without the need for any adjustable parameters, needing only to fit the computed current trace to a measured current trace. (author)

  15. Algorithm for determining two-periodic steady-states in AC machines directly in time domain

    Directory of Open Access Journals (Sweden)

    Sobczyk Tadeusz J.

    2016-09-01

    Full Text Available This paper describes an algorithm for finding steady states in AC machines in cases where they are two-periodic in nature. The algorithm enables the steady-state solution to be identified directly in the time domain despite the fact that two-periodic waveforms are not repeated in any finite time interval. The basis for such an algorithm is a discrete differential operator that specifies the instantaneous values of the derivative of the two-periodic function in a selected set of points on the basis of the values of that function in the same set of points. It allows algebraic equations to be developed that define the steady-state solution reached in a chosen point set for the nonlinear differential equations describing the AC machines when the electrical and mechanical equations have to be solved together. That set of values allows the steady-state solution to be determined at any time instant up to infinity. The algorithm described in this paper is competitive with the approach known in the literature, which is based on the harmonic balance method and operates in the frequency domain.

  16. Time-domain prefilter design for enhanced tracking and vibration suppression in machine motion control

    Science.gov (United States)

    Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong

    2018-05-01

    Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
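
    The paper's moment-based H2 prefilter design is not reproduced here; as a much simpler illustration of command prefiltering, the sketch below builds a classical two-impulse zero-vibration (ZV) FIR shaper for an assumed lightly damped mode (f = 10 Hz, zeta = 0.02) and convolves it with a step command.

    ```python
    import numpy as np

    f, zeta, dt = 10.0, 0.02, 1e-3                       # assumed mode and controller sample time
    wd = 2 * np.pi * f * np.sqrt(1 - zeta ** 2)          # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
    A1, A2 = 1 / (1 + K), K / (1 + K)                    # impulse amplitudes (sum to 1)
    t2 = np.pi / wd                                      # second impulse at half the damped period

    # FIR impulse response of the prefilter on the controller's sample grid
    h = np.zeros(int(round(t2 / dt)) + 1)
    h[0], h[-1] = A1, A2

    step = np.ones(2000)                                 # raw step command
    shaped = np.convolve(step, h)[:step.size]            # prefiltered command sent to the axes
    print("filter taps:", h.size, "non-zero taps:", np.count_nonzero(h),
          "added delay:", round(t2 * 1e3, 1), "ms")
    ```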

  17. A Novel Approach for Multi Class Fault Diagnosis in Induction Machine Based on Statistical Time Features and Random Forest Classifier

    Science.gov (United States)

    Sonje, M. Deepak; Kundu, P.; Chowdhury, A.

    2017-08-01

    Fault diagnosis and detection is an important area in the health monitoring of electrical machines. This paper applies a recently developed machine learning classifier to multi-class fault diagnosis in an induction machine. The classification is based on the random forest (RF) algorithm. Initially, stator currents are acquired from the induction machine under various conditions. After preprocessing the currents, fourteen statistical time features are estimated for each phase of the current. These parameters are used as inputs to the classifier. The main scope of the paper is to evaluate the effectiveness of the RF classifier for individual and mixed fault diagnosis in an induction machine. Stator, rotor and mixed faults (stator and rotor faults) are classified using the proposed classifier. The obtained performance measures are compared with those of a multilayer perceptron neural network (MLPNN) classifier. The results show much better performance measures and higher accuracy than the MLPNN classifier. For demonstration of the planned fault diagnosis algorithm, experimentally obtained results are used to make the classifier more practical.
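
    The pipeline can be sketched as below. The feature set is a typical selection of statistical time features (the paper's exact fourteen features are not reproduced), and the current signals are synthetic stand-ins for measured stator currents.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)

    def time_features(x):
        rms = np.sqrt(np.mean(x ** 2))
        return np.array([x.mean(), x.std(), rms, skew(x), kurtosis(x),
                         x.max(), x.min(), np.ptp(x), np.abs(x).mean(),
                         x.max() / rms])                 # last entry: crest factor

    def stator_current(fault, n=2000):
        t = np.linspace(0, 0.4, n)
        base = np.sin(2 * np.pi * 50 * t)
        if fault == 1:                                   # assumed fault signature: sidebands around 50 Hz
            base += 0.1 * np.sin(2 * np.pi * 45 * t) + 0.1 * np.sin(2 * np.pi * 55 * t)
        return base + 0.05 * rng.normal(size=n)

    y = rng.integers(0, 2, 300)
    X = np.array([time_features(stator_current(c)) for c in y])

    for name, clf in [("RandomForest", RandomForestClassifier(random_state=0)),
                      ("MLP", MLPClassifier(max_iter=2000, random_state=0))]:
        print(name, "CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```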

  18. Machine Shop Grinding Machines.

    Science.gov (United States)

    Dunn, James

    This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…

  19. Using machine learning to identify structural breaks in single-group interrupted time series designs.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is being studied, the outcome variable is serially ordered as a time series and the intervention is expected to 'interrupt' the level and/or trend of the time series, subsequent to its introduction. Given that the internal validity of the design rests on the premise that the interruption in the time series is associated with the introduction of the treatment, treatment effects may seem less plausible if a parallel trend already exists in the time series prior to the actual intervention. Thus, sensitivity analyses should focus on detecting structural breaks in the time series before the intervention. In this paper, we introduce a machine-learning algorithm called optimal discriminant analysis (ODA) as an approach to determine if structural breaks can be identified in years prior to the initiation of the intervention, using data from California's 1988 voter-initiated Proposition 99 to reduce smoking rates. The ODA analysis indicates that numerous structural breaks occurred prior to the actual initiation of Proposition 99 in 1989, including perfect structural breaks in 1983 and 1985, thereby casting doubt on the validity of treatment effects estimated for the actual intervention when using a single-group ITSA design. Given the widespread use of ITSA for evaluating observational data and the increasing use of machine-learning techniques in traditional research, we recommend that structural break sensitivity analysis is routinely incorporated in all research using the single-group ITSA design. © 2016 John Wiley & Sons, Ltd.

  20. Métodos heurísticos construtivos para redução do estoque em processo em ambientes de produção flow shop híbridos com tempos de setup dependentes da sequência Constructive heuristics methods to minimizing work in process in environment production hybrid flow shop with asymmetric sequence dependent setup times

    Directory of Open Access Journals (Sweden)

    Márcia de Fátima Morais

    2010-01-01

    Full Text Available This paper presents an investigation of the hybrid flow shop scheduling problem with asymmetric sequence-dependent setup times and proposes constructive heuristic methods to minimize the mean flow time, aiming at fast demand response and work-in-process reduction. The proposed heuristic methods were compared among themselves, since no constructive heuristic method was found in the literature for the scheduling problem considered in this work. A study of the influence of the relation between the orders of magnitude of the processing and setup times on each method was carried out, and the influence of the scheduling procedure adopted was investigated, in order to assess the performance of the methods used.

  1. Two Machine Learning Approaches for Short-Term Wind Speed Time-Series Prediction.

    Science.gov (United States)

    Ak, Ronay; Fink, Olga; Zio, Enrico

    2016-08-01

    The increasing liberalization of European electricity markets, the growing proportion of intermittent renewable energy being fed into the energy grids, and also new challenges in the patterns of energy consumption (such as electric mobility) require flexible and intelligent power grids capable of providing efficient, reliable, economical, and sustainable energy production and distribution. From the supplier side, particularly, the integration of renewable energy sources (e.g., wind and solar) into the grid imposes an engineering and economic challenge because of the limited ability to control and dispatch these energy sources due to their intermittent characteristics. Time-series prediction of wind speed for wind power production is a particularly important and challenging task, wherein prediction intervals (PIs) are preferable results of the prediction, rather than point estimates, because they provide information on the confidence in the prediction. In this paper, two different machine learning approaches to assess PIs of time-series predictions are considered and compared: 1) multilayer perceptron neural networks trained with a multiobjective genetic algorithm and 2) extreme learning machines combined with the nearest neighbors approach. The proposed approaches are applied for short-term wind speed prediction from a real data set of hourly wind speed measurements for the region of Regina in Saskatchewan, Canada. Both approaches demonstrate good prediction precision and provide complementary advantages with respect to different evaluation criteria.

  2. Sparse Bayesian learning machine for real-time management of reservoir releases

    Science.gov (United States)

    Khalil, Abedalrazq; McKee, Mac; Kemblowski, Mariush; Asefa, Tirusew

    2005-11-01

    Water scarcity and uncertainties in forecasting future water availabilities present serious problems for basin-scale water management. These problems create a need for intelligent prediction models that learn and adapt to their environment in order to provide water managers with decision-relevant information related to the operation of river systems. This manuscript presents examples of state-of-the-art techniques for forecasting that combine excellent generalization properties and sparse representation within a Bayesian paradigm. The techniques are demonstrated as decision tools to enhance real-time water management. A relevance vector machine, which is a probabilistic model, has been used in an online fashion to provide confident forecasts given knowledge of some state and exogenous conditions. In practical applications, online algorithms should recognize changes in the input space and account for drift in system behavior. Support vector machines lend themselves particularly well to the detection of drift and hence to the initiation of adaptation in response to a recognized shift in system structure. The resulting model will normally have a structure and parameterization that suits the information content of the available data. The utility and practicality of this proposed approach have been demonstrated with an application in a real case study involving real-time operation of a reservoir in a river basin in southern Utah.

  3. An edge over diagnostic setup

    Directory of Open Access Journals (Sweden)

    Sridhar Kannan

    2017-01-01

    Full Text Available The diagnostic setup proposed by H.D. Kingsley serves as a practical aid in diagnosis and treatment planning. These setups have some inherent shortcomings. A simple technique of duplicating the setups in dental stone can solve the problems encountered as well as provide many other advantages over the conventional procedure. The diagnostic setup is prepared by the conventional method [Figure 1]. An alginate impression of the setup is then taken and poured in dental stone to obtain the derived treatment model [Figure 2]. The same setup can then be further modified for alternative lines of treatment, and subsequent models can be obtained as required [Figure 3].

  4. Impact of Model Detail of Synchronous Machines on Real-time Transient Stability Assessment

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Østergaard, Jacob

    2013-01-01

    In this paper, it is investigated how detailed the model of a synchronous machine needs to be in order to assess transient stability using a Single Machine Equivalent (SIME). The results will show how the stability mechanism and the stability assessment are affected by the model detail. In order to investigate this, the level of detail of the machine models is varied. Analyses of the results suggest that a 4th-order model may be sufficient to represent synchronous machines in transient stability studies.

  5. Real Time Monitoring System of Pollution Waste on Musi River Using Support Vector Machine (SVM) Method

    Science.gov (United States)

    Fachrurrozi, Muhammad; Saparudin; Erwin

    2017-04-01

    The real-time monitoring and early detection system that measures the quality of waste in the Musi River, Palembang, Indonesia, determines air and water pollution levels. The system was designed to create an integrated monitoring setup and to provide real-time, readable information. It measures the acidity and turbidity of water polluted by industrial waste, and shows and provides condition data integrated in one system. The system consists of inputting and processing the data and giving output based on the processed data. Turbidity, substance, and pH sensors are used as detectors that produce an analog direct-current (DC) voltage. The early detection system works by setting threshold values for ammonia, acidity, and turbidity of the water in the Musi River. The results are then grouped by pollution level using the Support Vector Machine classification method.
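
    As a rough illustration of the classification step described above, the sketch below trains an SVM on three hypothetical sensor features (pH, turbidity, ammonia) with a made-up labelling rule; the features, thresholds, and data are assumptions for illustration, not the deployed system.

    ```python
    # Minimal sketch: classify water-quality readings into pollution-level groups with an SVM.
    # The labelling rule below is a hypothetical stand-in for ground-truth pollution levels.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.uniform([4.0, 0.0, 0.0], [9.0, 100.0, 5.0], size=(200, 3))  # pH, NTU, mg/L
    y = ((X[:, 0] < 6.0) | (X[:, 1] > 60.0) | (X[:, 2] > 2.5)).astype(int)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)
    print(clf.predict([[6.8, 20.0, 0.5]]))  # likely the low-pollution class (0)
    ```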

  6. Hybrid machine learning technique for forecasting Dhaka stock market timing decisions.

    Science.gov (United States)

    Banik, Shipra; Khodadad Khan, A F M; Anwer, Mohammad

    2014-01-01

    Forecasting the stock market has been a difficult job for applied researchers owing to the noisy and time-varying nature of the data. Nevertheless, a number of researchers have successfully applied machine learning techniques to forecast stock markets. This paper studies stock prediction for the use of investors, who often incur losses because of uncertain investment decisions and limited insight into assets. It proposes a rough set model, a neural network model, and a hybrid neural network and rough set model to find optimal buy and sell times for a share on the Dhaka stock exchange. Experimental findings demonstrate that the proposed hybrid model has higher precision than the single rough set model and the neural network model. We believe these findings will help stock investors decide on optimal buy and/or sell times on the Dhaka stock exchange.

  7. A real-time surface inspection system for precision steel balls based on machine vision

    Science.gov (United States)

    Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen

    2016-07-01

    Precision steel balls are among the most fundamental components for motion and power transmission, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial for the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs s-1 with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm2, which meets the requirement for inspecting ISO grade 100 precision steel balls.

  8. A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time.

    Science.gov (United States)

    Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D

    2017-11-01

    This work presents a data-driven method to simulate, in real-time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real-time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real-time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
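
    A minimal sketch of the surrogate-model idea (not the authors' pipeline): an extremely-randomized-trees regressor trained on synthetic "FE-like" data that maps a compression-plate displacement to nodal displacements, so that predictions at run time are fast. All shapes and values are placeholders.

    ```python
    # Illustrative surrogate: extremely randomized trees replacing offline FE simulations.
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(1)
    plate_disp = rng.uniform(0.0, 40.0, size=(500, 1))           # mm of compression
    node_disp = np.hstack([0.3 * plate_disp, 0.1 * plate_disp])  # toy 2-node response
    node_disp += rng.normal(scale=0.2, size=node_disp.shape)

    model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(plate_disp, node_disp)
    print(model.predict([[25.0]]))   # fast surrogate prediction at run time
    ```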

  9. Near Real-Time Dust Aerosol Detection with Support Vector Machines for Regression

    Science.gov (United States)

    Rivas-Perea, P.; Rivas-Perea, P. E.; Cota-Ruiz, J.; Aragon Franco, R. A.

    2015-12-01

    Remote sensing instruments operating in the near-infrared spectrum usually provide the necessary information for further dust aerosol spectral analysis using statistical or machine learning algorithms. Such algorithms have proven to be effective in analyzing very specific case studies or dust events. However, very few make the analysis open to the public on a regular basis, fewer are designed specifically to operate in near real-time at higher resolutions, and almost none give global daily coverage. In this research we investigated a large-scale approach to a machine learning algorithm called "support vector regression". The algorithm uses four near-infrared spectral bands from the NASA MODIS instrument: B20 (3.66-3.84μm), B29 (8.40-8.70μm), B31 (10.78-11.28μm), and B32 (11.77-12.27μm). The algorithm is presented with ground truth from more than 30 distinct reported dust events, from different geographical regions, at different seasons, both over land and sea cover, in the presence of clouds and clear sky, and in the presence of fires. The purpose of our algorithm is to learn to distinguish the dust aerosol spectral signature from other spectral signatures, providing as output an estimate of the probability of a data point being consistent with dust aerosol signatures. During modeling with ground truth, our algorithm achieved more than 90% accuracy, and the current live performance of the algorithm is remarkable. Moreover, our algorithm is currently operating in near real-time using NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) servers, providing a high-resolution global overview at 64, 32, 16, 8, 4, 2, and 1 km. The near real-time analysis of our algorithm is now available to the general public at http://dust.reev.us and archives of the results starting from 2012 are available upon request.
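
    The sketch below illustrates only the regression step, assuming four synthetic brightness-temperature features standing in for the MODIS bands; the toy target based on the B31-B32 difference is an assumption for demonstration, not the authors' ground truth.

    ```python
    # Hedged sketch: support vector regression on four MODIS-like band features
    # producing a dust-likelihood score in roughly [0, 1]. Data are synthetic.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    X = rng.normal(loc=[285.0, 280.0, 275.0, 273.0], scale=5.0, size=(300, 4))
    # Toy target: dust tends to push B31 - B32 negative, so use that as a proxy signal.
    y = 1.0 / (1.0 + np.exp(X[:, 2] - X[:, 3]))

    reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    reg.fit(X, y)
    print(reg.predict(X[:3]))  # dust-likelihood scores for the first three points
    ```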

  10. Calculation of upper esophageal sphincter restitution time from high resolution manometry data using machine learning.

    Science.gov (United States)

    Jungheim, Michael; Busche, Andre; Miller, Simone; Schilling, Nicolas; Schmidt-Thieme, Lars; Ptok, Martin

    2016-10-15

    After swallowing, the upper esophageal sphincter (UES) needs a certain amount of time to return from maximum pressure to the resting condition. Disturbances of sphincter function not only during the swallowing process but also in this phase of pressure restitution may lead to globus sensation or dysphagia. Since UES pressures do not decrease in a linear or asymptotic manner, it is difficult to determine the exact time when the resting pressure is reached, even when using high resolution manometry (HRM). To overcome this problem, a Machine Learning model was established to objectively determine the UES restitution time (RT) and, moreover, to collect physiological data on sphincter function after swallowing. HRM data of 15 healthy participants performing 10 swallows each were included. After manual annotation of the RT interval by two swallowing experts, the data were transferred to the Machine Learning model, which applied a sequence labeling approach based on logistic regression to learn and objectivize the characteristics of all swallows. Individually computed RT values were then compared with the annotated values. Estimates of the RT were generated by the Machine Learning model for all 150 swallows. When annotated by the two swallowing experts, mean RTs of 11.16 s ± 5.7 (SD) and 10.04 s ± 5.74 were determined, respectively, compared with model-generated values from 8.91 s ± 3.71 to 10.87 s ± 4.68 depending on model selection. The correlation score between the RTs annotated by the two examiners was 0.76, and 0.63 to 0.68 when compared with model-predicted values. Restitution time represents an important physiologic swallowing parameter not previously considered in HRM studies of the UES, especially since disturbances of UES restitution may increase the risk of aspiration. The data presented here show that it takes approximately 9 to 11 s for the UES to come to rest after swallowing. Based on maximal RT values, we demonstrate that an interval of 25-30 s in between swallows is necessary until the
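
    A minimal sketch of the sequence-labelling idea, under stated assumptions: each sample of a synthetic pressure decay is labelled "still relaxing" or "at rest" by logistic regression, and the restitution time is read off as the last positive label. The signal model, features, and 10 s annotation boundary are all placeholders.

    ```python
    # Hedged sketch: per-sample labelling of a manometry trace with logistic regression,
    # then the restitution time is the time of the last "still relaxing" label.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 30.0, 600)                       # s, ~20 Hz samples
    pressure = 60.0 * np.exp(-t / 4.0) + rng.normal(scale=1.5, size=t.size) + 10.0
    X = np.column_stack([pressure, np.gradient(pressure, t)])
    y = (t < 10.0).astype(int)                            # stand-in for expert annotation

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    labels = clf.predict(X)
    restitution_time = t[np.nonzero(labels)[0].max()] if labels.any() else 0.0
    print(round(restitution_time, 1))                     # close to the annotated 10 s
    ```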

  11. Real-time machine vision system using FPGA and soft-core processor

    Science.gov (United States)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through Fast Simplex Link (FSL). The latency for computing distance and angle of camera from the reference points was measured to be 2ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.

  12. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics

    Science.gov (United States)

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has attracted surged research efforts recently due to the progress in both the imaging concepts and the sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions which are typically conducted off-line are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from the previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. Extreme learning machine is cited here as an example for demonstrative purpose simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed under the context of tomographic absorption spectroscopy, we expect it to be useful also to other high speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence which have been demonstrated for combustion diagnostics.
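
    Since an extreme learning machine is simply a fixed random hidden layer with a least-squares readout, the surrogate idea can be sketched in a few lines; the array shapes and data below are placeholders, not the paper's projection or reconstruction data.

    ```python
    # Sketch of an extreme learning machine (random hidden layer, least-squares readout)
    # used as a fast surrogate mapping measured projections to reconstructions.
    import numpy as np

    rng = np.random.default_rng(3)
    n_inputs, n_hidden, n_outputs = 64, 500, 256    # projections -> reconstructed pixels
    X = rng.normal(size=(1000, n_inputs))           # training projections (synthetic)
    Y = rng.normal(size=(1000, n_outputs))          # matching reference reconstructions

    W = rng.normal(size=(n_inputs, n_hidden))       # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)    # output weights by least squares

    def predict(x):
        return np.tanh(x @ W + b) @ beta            # fast feed-forward reconstruction

    print(predict(X[:1]).shape)                     # (1, 256)
    ```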

  13. A design for living technology: experiments with the mind time machine.

    Science.gov (United States)

    Ikegami, Takashi

    2013-01-01

    Living technology aims to help people expand their experiences in everyday life. The environment offers people ways to interact with it, which we call affordances. Living technology is a design for new affordances. When we experience something new, we remember it by the way we perceive and interact with it. Recent studies in neuroscience have led to the idea of a default mode network, which is a baseline activity of a brain system. The autonomy of artificial life must be understood as a sort of default mode that self-organizes its baseline activity, preparing for its external inputs and its interaction with humans. I thus propose a method for creating a suitable default mode as a design principle for living technology. I built a machine called the mind time machine (MTM), which runs continuously for 10 h per day and receives visual data from its environment using 15 video cameras. The MTM receives and edits the video inputs while it self-organizes the momentary now. Its base program is a neural network that includes chaotic dynamics inside the system and a meta-network that consists of video feedback systems. Using this system as the hardware and a default mode network as a conceptual framework, I describe the system's autonomous behavior. Using the MTM as a testing ground, I propose a design principle for living technology.

  14. A New Tool for CME Arrival Time Prediction using Machine Learning Algorithms: CAT-PUMA

    Science.gov (United States)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-03-01

    Coronal mass ejections (CMEs) are arguably the most violent eruptions in the solar system. CMEs can cause severe disturbances in interplanetary space and can even affect human activities in many aspects, causing damage to infrastructure and loss of revenue. Fast and accurate prediction of CME arrival time is vital to minimize the disruption that CMEs may cause when interacting with geospace. In this paper, we propose a new approach for partial-/full halo CME Arrival Time Prediction Using Machine learning Algorithms (CAT-PUMA). Via detailed analysis of the CME features and solar-wind parameters, we build a prediction engine taking advantage of 182 previously observed geo-effective partial-/full halo CMEs and using algorithms of the Support Vector Machine. We demonstrate that CAT-PUMA is accurate and fast. In particular, predictions made after applying CAT-PUMA to a test set unknown to the engine show a mean absolute prediction error of ∼5.9 hr within the CME arrival time, with 54% of the predictions having absolute errors less than 5.9 hr. Comparisons with other models reveal that CAT-PUMA has a more accurate prediction for 77% of the events investigated that can be carried out very quickly, i.e., within minutes of providing the necessary input parameters of a CME. A practical guide containing the CAT-PUMA engine and the source code of two examples are available in the Appendix, allowing the community to perform their own applications for prediction using CAT-PUMA.

  15. Using Overall Equipment Effectiveness indicator to measure the level of planned production time usage of sewing machine

    Directory of Open Access Journals (Sweden)

    Marek Krynke

    2014-12-01

    Full Text Available The chapter presents the results of using the OEE indicator to measure the level of operating-time usage of a sewing machine used in the production of air bags. The idea of the OEE indicator, a key metric in the Total Productive Maintenance (TPM) program, is presented, together with the goals and benefits of its calculation. The research object, the KL 110 air-bag sewing machine, and what the machine is used for are described. The calculation of the TPM indicators for the analysed machine, undertaken over a period of six months of the machine's working time, is presented. The overall effectiveness of the machine was found to be 65.7%, with time losses of 34.3%; most of the losses were related to low performance. Only the Availability indicator reaches a world-class level; the other indicators, Performance, Quality, and OEE, should be improved. Activities to improve the effectiveness of machine utilization were determined.
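
    The arithmetic behind the indicator can be illustrated directly (OEE = Availability x Performance x Quality); the shift figures below are invented for the example and are not the KL 110 machine's records.

    ```python
    # Worked sketch of the OEE calculation with illustrative shift data.
    planned_time = 480.0          # min of planned production per shift
    downtime = 30.0               # min of stops
    ideal_cycle_time = 0.5        # min per piece
    pieces = 700
    defects = 10

    operating_time = planned_time - downtime
    availability = operating_time / planned_time
    performance = (ideal_cycle_time * pieces) / operating_time
    quality = (pieces - defects) / pieces
    oee = availability * performance * quality
    print(f"A={availability:.3f} P={performance:.3f} Q={quality:.3f} OEE={oee:.3f}")
    ```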

  16. Single Machine Problem with Multi-Rate-Modifying Activities under a Time-Dependent Deterioration

    Directory of Open Access Journals (Sweden)

    M. Huang

    2013-01-01

    Full Text Available The single machine scheduling problem with multi-rate-modifying activities under a time-dependent deterioration to minimize makespan is studied. After examining the characteristics of the problem, a number of properties and a lower bound are proposed. A branch and bound algorithm and a heuristic algorithm are used in the solution, and two special cases are also examined. The computational experiments show that, for the situation with a rate-modifying activity, the proposed branch and bound algorithm can solve instances with 50 jobs within a reasonable time, and the heuristic algorithm can obtain a near-optimal solution with an error percentage of less than 0.053 in a very short time. In situations with multi-rate-modifying activities, the proposed branch and bound algorithm can solve instances with 15 jobs within a reasonable time, and the heuristic algorithm can obtain a near-optimal solution with an error percentage of less than 0.070 in a very short time. Both the branch and bound algorithm and the heuristic algorithm are shown to be efficient and effective.

  17. Real-Time Measurement of Machine Efficiency during Inertia Friction Welding.

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Daniel Joseph [The Ohio State Univ., Columbus, OH (United States); Mahaffey, David [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); Senkov, Oleg [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); Semiatin, Sheldon [Air Force Research Lab. (AFRL), Wright-Patterson AFB, OH (United States); Zhang, Wei [The Ohio State Univ., Columbus, OH (United States)

    2017-12-01

    Process efficiency is a crucial parameter for inertia friction welding (IFW) that is largely unknown at the present time. A new method has been developed to determine the transient profile of the IFW process efficiency by comparing the workpiece torque used to heat and deform the joint region to the total torque. Particularly, the former is measured by a torque load cell attached to the non-rotating workpiece while the latter is calculated from the deceleration rate of flywheel rotation. The experimentally-measured process efficiency for IFW of AISI 1018 steel rods is validated independently by the upset length estimated from an analytical equation of heat balance and the flash profile calculated from a finite element based thermal stress model. The transient behaviors of torque and efficiency during IFW are discussed based on the energy loss to machine bearings and the bond formation at the joint interface.
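
    A hedged numerical sketch of the torque comparison described above: the total torque is inferred from the flywheel deceleration (T = I |domega/dt|) and compared with a load-cell reading at the workpiece. The inertia, spin-down profile, and torque value are assumed numbers, not the AISI 1018 experiment's data.

    ```python
    # Illustrative efficiency estimate: workpiece torque divided by total torque,
    # with total torque computed from the flywheel's angular deceleration.
    import numpy as np

    I_flywheel = 0.75                                 # kg*m^2, assumed flywheel inertia
    t = np.linspace(0.0, 2.0, 201)                    # s
    omega = 300.0 - 120.0 * t                         # rad/s, toy spin-down profile
    torque_workpiece = np.full_like(t, 70.0)          # N*m, toy load-cell reading

    torque_total = I_flywheel * np.abs(np.gradient(omega, t))   # ~90 N*m here
    efficiency = torque_workpiece / torque_total
    print(efficiency[:3])                             # fraction of torque heating the joint
    ```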

  18. High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.

    Science.gov (United States)

    Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent

    2016-08-01

    Spiking neural networks (SNN) are the latest neural network generation, which tries to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is exploited most efficiently. Liquid state machines (LSM) have arisen as a strategic technique to implement recurrent SNN designs with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSM by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.

  19. A combination of HARMONIE short time direct normal irradiance forecasts and machine learning: The #hashtdim procedure

    Science.gov (United States)

    Gastón, Martín; Fernández-Peruchena, Carlos; Körnich, Heiner; Landelius, Tomas

    2017-06-01

    The present work describes a first version of a new procedure to forecast Direct Normal Irradiance (DNI): the #hashtdim, which combines ground information and Numerical Weather Predictions. The system is centred on generating predictions for the very short term. It combines the outputs of the Numerical Weather Prediction model HARMONIE with an adaptive methodology based on Machine Learning. The DNI predictions are generated with 15-minute and hourly temporal resolution and are updated every 3 hours. Each update offers forecasts for the next 12 hours; the first nine hours are generated with 15-minute temporal resolution, while the last three hours have hourly temporal resolution. The system is evaluated at a Spanish site with an operational BSRN station in the south of Spain (the PSA station). The #hashtdim has been implemented in the framework of the Direct Normal Irradiance Nowcasting methods for optimized operation of concentrating solar technologies (DNICast) project, under the European Union's Seventh Framework Programme for research, technological development and demonstration.

  20. Heuristic and Exact Algorithms for the Two-Machine Just in Time Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Mohammed Al-Salem

    2016-01-01

    Full Text Available The problem addressed in this paper is the two-machine job shop scheduling problem when the objective is to minimize the total earliness and tardiness from a common due date (CDD for a set of jobs when their weights equal 1 (unweighted problem. This objective became very significant after the introduction of the Just in Time manufacturing approach. A procedure to determine whether the CDD is restricted or unrestricted is developed and a semirestricted CDD is defined. Algorithms are introduced to find the optimal solution when the CDD is unrestricted and semirestricted. When the CDD is restricted, which is a much harder problem, a heuristic algorithm is proposed to find approximate solutions. Through computational experiments, the heuristic algorithms’ performance is evaluated with problems up to 500 jobs.
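
    The objective itself is simple to state in code; the sketch below computes total earliness plus tardiness around a common due date for a hypothetical set of completion times (unit weights), as in the unweighted problem above.

    ```python
    # Minimal sketch of the objective: total earliness plus tardiness about a common due date.
    def total_earliness_tardiness(completion_times, due_date):
        return sum(abs(c - due_date) for c in completion_times)

    completions = [8, 11, 14, 19, 23]   # hypothetical completion times on machine 2
    print(total_earliness_tardiness(completions, due_date=15))   # 7+4+1+4+8 = 24
    ```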

  1. MINIMIZING THE PREPARATION TIME OF A TUBES MACHINE: EXACT SOLUTION AND HEURISTICS

    Directory of Open Access Journals (Sweden)

    Robinson S.V. Hoto

    Full Text Available In this paper we optimize the preparation time of a tubes machine. Tubes are rigid tubes made by gluing strips of paper that are packed in paper reels, and some reels may be reused between the production of one tube and another. We present a mathematical model for minimizing reel changes and movements, as well as implementations of the Nearest Neighbor heuristic, an improved nearest neighbor (Best Nearest Neighbor), refinements of the Best Nearest Neighbor heuristic, and a permutation heuristic called Best Configuration, using the WxDev C++ IDE (integrated development environment). The solutions obtained in the simulations improve on the one used by the company.
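
    The Nearest Neighbor idea referenced above can be sketched as a greedy walk over a setup-cost matrix; the matrix below is an arbitrary example, not the company's reel data.

    ```python
    # Sketch of a Nearest Neighbor heuristic over a matrix of setup "distances"
    # (e.g., reel changes) between tube types: always move to the cheapest unvisited type.
    import numpy as np

    cost = np.array([[0, 3, 8, 5],
                     [3, 0, 4, 7],
                     [8, 4, 0, 2],
                     [5, 7, 2, 0]], dtype=float)

    def nearest_neighbor(cost, start=0):
        n = len(cost)
        order, visited = [start], {start}
        while len(order) < n:
            last = order[-1]
            nxt = min((j for j in range(n) if j not in visited), key=lambda j: cost[last, j])
            order.append(nxt)
            visited.add(nxt)
        total = sum(cost[a, b] for a, b in zip(order, order[1:]))
        return order, total

    print(nearest_neighbor(cost))   # ([0, 1, 2, 3], 9.0) for this example
    ```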

  2. Machine scheduling to minimize weighted completion times the use of the α-point

    CERN Document Server

    Gusmeroli, Nicoló

    2018-01-01

    This work reviews the most important results regarding the use of the α-point in Scheduling Theory. It provides a number of different LP-relaxations for scheduling problems and seeks to explain their polyhedral consequences. It also explains the concept of the α-point and how the conversion algorithm works, pointing out the relations to the sum of the weighted completion times. Lastly, the book explores the latest techniques used for many scheduling problems with different constraints, such as release dates, precedences, and parallel machines. This reference book is intended for advanced undergraduate and postgraduate students who are interested in scheduling theory. It is also inspiring for researchers wanting to learn about sophisticated techniques and open problems of the field.

  3. Real-time, adaptive machine learning for non-stationary, near chaotic gasoline engine combustion time series.

    Science.gov (United States)

    Vaughan, Adam; Bohac, Stanislav V

    2015-10-01

    Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine combustion timing predictions must contend with non-linear chemistry, non-linear physics, period doubling bifurcation(s), turbulent mixing, model parameters that can drift day-to-day, and air-fuel mixture state information that cannot typically be resolved on a cycle-to-cycle basis, especially during transients. In previous work, an abstract cycle-to-cycle mapping function coupled with ϵ-Support Vector Regression was shown to predict experimentally observed cycle-to-cycle combustion timing over a wide range of engine conditions, despite some of the aforementioned difficulties. The main limitation of the previous approach was that a partially acausal, randomly sampled training dataset was used to train proof-of-concept offline predictions. The objective of this paper is to address this limitation by proposing a new online adaptive Extreme Learning Machine (ELM) extension named Weighted Ring-ELM. This extension enables fully causal combustion timing predictions at randomly chosen engine set points, and is shown to achieve results that are as good as or better than the previous offline method. The broader objective of this approach is to enable a new class of real-time model predictive control strategies for high variability HCCI and, ultimately, to bring HCCI's low engine-out NOx and reduced CO2 emissions to production engines. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Hierarchical Meta-Learning in Time Series Forecasting for Improved Interference-Less Machine Learning

    Directory of Open Access Journals (Sweden)

    David Afolabi

    2017-11-01

    Full Text Available An interference-less machine learning scheme is crucially important in time series prediction, as an oversight can have a negative cumulative effect, especially when predicting many steps ahead of the currently available data. Ongoing research on noise elimination in time series forecasting has led to a successful approach of decomposing the data sequence into component trends to identify noise-inducing information. The empirical mode decomposition method separates the time series/signal into a set of intrinsic mode functions ranging from high to low frequencies, which can be summed up to reconstruct the original data. The usual assumption that random noise is only contained in the high-frequency component has been shown not to be the case, as observed in our previous findings. The results from that experiment reveal that noise can be present in a low-frequency component, and this motivates the newly proposed algorithm. Additionally, to prevent the erosion of periodic trends and patterns within the series, we perform the learning of local and global trends separately in a hierarchical manner, which succeeds in detecting and eliminating short/long-term noise. The algorithm is tested on four datasets from financial market data and physical science data. The simulation results are compared with the conventional and state-of-the-art approaches for time series machine learning, such as the non-linear autoregressive neural network and the long short-term memory recurrent neural network, respectively. Statistically significant performance gains are recorded when the meta-learning algorithm for noise reduction is used in combination with these artificial neural networks. For time series data which cannot be decomposed into meaningful trends, applying the moving average method to create meta-information for guiding the learning process is still better than the traditional approach. Therefore, this new approach is applicable to the forecasting

  5. A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time

    Science.gov (United States)

    Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong

    2013-06-01

    In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. It is assumed that a failed service machine after repair will not be 'as good as new' and that the spare service machine for replacement is only available by order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under these assumptions, we apply the matrix-analytic method to develop the steady-state probabilities of the system, and then we obtain some system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and the direct search method is implemented to determine the optimal value of N for minimising the average cost rate.

  6. Numerical and machine learning simulation of parametric distributions of groundwater residence time in streams and wells

    Science.gov (United States)

    Starn, J. J.; Belitz, K.; Carlson, C.

    2017-12-01

    Groundwater residence-time distributions (RTDs) are critical for assessing susceptibility of water resources to contamination. This novel approach for estimating regional RTDs was to first simulate groundwater flow using existing regional digital data sets in 13 intermediate size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them are potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features. This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold
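
    The parametric-reduction step can be illustrated with a one-component gamma fit; the "particle ages" below are synthetic draws, not output of the general models described above.

    ```python
    # Illustrative sketch: reduce a particle residence-time sample to a parametric RTD
    # by fitting a gamma distribution (shape, scale), with the location fixed at zero.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    ages = rng.gamma(shape=1.2, scale=25.0, size=5000)     # toy travel times in years

    shape, loc, scale = stats.gamma.fit(ages, floc=0.0)
    print(f"shape={shape:.2f}, scale={scale:.1f}, mean age ~ {shape * scale:.1f} yr")
    ```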

  7. Real-time PCR Machine System Modeling and a Systematic Approach for the Robust Design of a Real-time PCR-on-a-Chip System

    OpenAIRE

    Lee, Da-Sheng

    2010-01-01

    Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DN...

  8. Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

    Science.gov (United States)

    Traversa, Fabio L.; Di Ventra, Massimiliano

    2017-02-01

    We introduce a class of digital machines, we name Digital Memcomputing Machines, (DMMs) able to solve a wide range of problems including Non-deterministic Polynomial (NP) ones with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with the dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) and/or standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for the poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve the prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs to the NP = P question related to constraints of poly-resources resolvability.

  9. FPGA-based multisensor real-time machine vision for banknote printing

    Science.gov (United States)

    Li, Rui; Türke, Thomas; Schaede, Johannes; Willeke, Harald; Lohweg, Volker

    2009-02-01

    Automatic sheet inspection in banknote production has been used as a standard quality control tool for more than a decade. As more and more print techniques and new security features are established, total quality in bank note printing must be guaranteed. This aspect has a direct impact on the research and development for bank note inspection systems in general in the sense of technological sustainability. It is accepted, that print defects are generated not only by printing parameter changes, but also by mechanical machine parameter changes, which will change unnoticed in production. Therefore, a new concept for a multi-sensory adaptive learning and classification model based on Fuzzy-Pattern- Classifiers for data inspection and machine conditioning is proposed. A general aim is to improve the known inspection techniques and propose an inspection methodology that can ensure a comprehensive quality control of the printed substrates processed by printing presses, especially printing presses which are designed to process substrates used in the course of the production of banknotes, security documents and others. Therefore, the research and development work in this area necessitates a change in concept for banknote inspection in general. In this paper a new generation of FPGA (Field Programmable Gate Array) based real time inspection technology is presented, which allows not only colour inspection on banknote sheets, but has also the implementation flexibility for various inspection algorithms for security features, such as window threads, embedded threads, OVDs, watermarks, screen printing etc., and multi-sensory data processing. A variety of algorithms is described in the paper, which are designed for and implemented on FPGAs. The focus is based on algorithmic approaches.

  10. Failure and reliability prediction by support vector machines regression of time series data

    International Nuclear Information System (INIS)

    Chagas Moura, Marcio das; Zio, Enrico; Lins, Isis Didier; Droguett, Enrique

    2011-01-01

    Support Vector Machines (SVMs) are kernel-based learning methods, which have been successfully adopted for regression problems. However, their use in reliability applications has not been widely explored. In this paper, a comparative analysis is presented in order to evaluate the SVM effectiveness in forecasting time-to-failure and reliability of engineered components based on time series data. The performance on literature case studies of SVM regression is measured against other advanced learning methods such as the Radial Basis Function, the traditional MultiLayer Perceptron model, Box-Jenkins autoregressive-integrated-moving average and the Infinite Impulse Response Locally Recurrent Neural Networks. The comparison shows that in the analyzed cases, SVM outperforms or is comparable to other techniques. - Highlights: → Realistic modeling of reliability demands complex mathematical formulations. → SVM is appropriate when the input/output relation is unknown or very costly to obtain. → Results indicate the potential of SVM for reliability time series prediction. → Reliability estimates support the establishment of adequate maintenance strategies.

  11. Machine-learning-based Brokers for Real-time Classification of the LSST Alert Stream

    Science.gov (United States)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika D.; Wang, Zhe; Lochner, Michelle; Matheson, Thomas; Saha, Abhijit; Yang, Shuo; Zhao, Zhenge; Kececioglu, John; Scheidegger, Carlos; Snodgrass, Richard T.; Axelrod, Tim; Jenness, Tim; Maier, Robert S.; Ridgway, Stephen T.; Seaman, Robert L.; Evans, Eric Michael; Singh, Navdeep; Taylor, Clark; Toeniskoetter, Jackson; Welch, Eric; Zhu, Songzhe; The ANTARES Collaboration

    2018-05-01

    The unprecedented volume and rate of transient events that will be discovered by the Large Synoptic Survey Telescope (LSST) demand that the astronomical community update its follow-up paradigm. Alert-brokers—automated software system to sift through, characterize, annotate, and prioritize events for follow-up—will be critical tools for managing alert streams in the LSST era. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is one such broker. In this work, we develop a machine learning pipeline to characterize and classify variable and transient sources only using the available multiband optical photometry. We describe three illustrative stages of the pipeline, serving the three goals of early, intermediate, and retrospective classification of alerts. The first takes the form of variable versus transient categorization, the second a multiclass typing of the combined variable and transient data set, and the third a purity-driven subtyping of a transient class. Although several similar algorithms have proven themselves in simulations, we validate their performance on real observations for the first time. We quantitatively evaluate our pipeline on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, and demonstrate very competitive classification performance. We describe our progress toward adapting the pipeline developed in this work into a real-time broker working on live alert streams from time-domain surveys.

  12. Machine Learning-based Transient Brokers for Real-time Classification of the LSST Alert Stream

    Science.gov (United States)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika; ANTARES Collaboration

    2018-01-01

    The number of transient events discovered by wide-field time-domain surveys already far outstrips the combined followup resources of the astronomical community. This number will only increase as we progress towards the commissioning of the Large Synoptic Survey Telescope (LSST), breaking the community's current followup paradigm. Transient brokers - software to sift through, characterize, annotate and prioritize events for followup - will be a critical tool for managing alert streams in the LSST era. Developing the algorithms that underlie the brokers, and obtaining simulated LSST-like datasets prior to LSST commissioning, to train and test these algorithms are formidable, though not insurmountable challenges. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. We have been developing completely automated methods to characterize and classify variable and transient events from their multiband optical photometry. We describe the hierarchical ensemble machine learning algorithm we are developing, and test its performance on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, as well as our progress towards incorporating these into a real-time event broker working on live alert streams from time-domain surveys.

  13. Transient time of an Ising machine based on injection-locked laser network

    International Nuclear Information System (INIS)

    Takata, Kenta; Utsunomiya, Shoko; Yamamoto, Yoshihisa

    2012-01-01

    We numerically study the dynamics and frequency response of the recently proposed Ising machine based on the polarization degrees of freedom of an injection-locked laser network (Utsunomiya et al 2011 Opt. Express 19 18091). We simulate various anti-ferromagnetic Ising problems, including the ones with symmetric Ising and Zeeman coefficients, which enable us to study the problem size up to M = 1000. Transient time, to reach a steady-state polarization configuration after a given Ising problem is mapped onto the system, is inversely proportional to the locking bandwidth and does not scale exponentially with the problem size. In the Fourier analysis with first-order linearization approximation, we find that the cut-off frequency of a system's response is almost identical to the locking bandwidth, which supports the time-domain analysis. It is also shown that the Zeeman term, which is created by the horizontally polarized injection signal from the master laser, serves as an initial driving force on the system and contributes to the transient time in addition to the inverse locking bandwidth. (paper)

  14. Simulation and Community-Based Instruction of Vending Machines with Time Delay.

    Science.gov (United States)

    Browder, Diane M.; And Others

    1988-01-01

    The study evaluated the use of simulated instruction on vending machine use as an adjunct to community-based instruction with two moderately retarded children. Results showed concurrent acquisition of the vending machine skills across trained and untrained sites. (Author/DB)

  15. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver, as all the reading passages and student activities are provided. Presented in s

  16. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    Science.gov (United States)

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
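
    The concordance index used for the comparison can be computed directly; the sketch below is a plain-Python version on toy data (event flags mark observed vs. censored times) and ignores ties in event times.

    ```python
    # Hedged sketch of the concordance index: the fraction of comparable pairs whose
    # predicted risks are ordered consistently with the observed event times.
    def concordance_index(times, events, risks):
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                # pair (i, j) is comparable if i's event occurred before j's follow-up time
                if events[i] and times[i] < times[j]:
                    comparable += 1
                    if risks[i] > risks[j]:
                        concordant += 1.0
                    elif risks[i] == risks[j]:
                        concordant += 0.5
        return concordant / comparable

    print(concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.2, 0.1]))  # 1.0
    ```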

  17. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ˜7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.

  18. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    Science.gov (United States)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuation in the near-earth environment that have been found increasingly important in affecting the living of human beings in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the widely use of CCD cameras in 1990s, 35-mm films were the major media to store images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period of 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientific valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which is almost impossible to be done manually. We implemented three different methods to recognize the time stamps automatically, including Optical Character Recognition (OCR), Classification Tree and TensorFlow. The latter two are known as machine learning algorithms which are very popular now a day in pattern recognition area. We will present some sample images and the results of clock recognition from all three methods.

  19. The Baltic Sea as a time machine for the future coastal ocean

    DEFF Research Database (Denmark)

    Reusch, Thorsten B. H.; Dierking, Jan; Andersson, Helen C.

    2018-01-01

    Coastal global oceans are expected to undergo drastic changes driven by climate change and increasing anthropogenic pressures in coming decades. Predicting specific future conditions and assessing the best management strategies to maintain ecosystem integrity and sustainable resource use are difficult, because of multiple interacting pressures, uncertain projections, and a lack of test cases for management. We argue that the Baltic Sea can serve as a time machine to study consequences and mitigation of future coastal perturbations, due to its unique combination of an early history of multistressor disturbance and ecosystem deterioration and early implementation of cross-border environmental management to address these problems. The Baltic Sea also stands out in providing a strong scientific foundation and accessibility to long-term data series that provide a unique opportunity to assess

  20. Real-Time Probabilistic Structural Health Management Using Machine Learning and GPU Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project seeks to deliver an ultra-efficient, high-fidelity structural health management (SHM) framework using machine learning and graphics processing...

  1. Time-resolved temperature measurements in a rapid compression machine using quantum cascade laser absorption in the intrapulse mode

    KAUST Repository

    Nasir, Ehson Fawad; Farooq, Aamir

    2016-01-01

    A temperature sensor based on the intrapulse absorption spectroscopy technique has been developed to measure in situ temperature time-histories in a rapid compression machine (RCM). Two quantum-cascade lasers (QCLs) emitting near 4.55μm and 4.89μm

  2. Analysed potential of big data and supervised machine learning techniques in effectively forecasting travel times from fused data

    Directory of Open Access Journals (Sweden)

    Ivana Šemanjski

    2015-12-01

    Full Text Available Travel time forecasting is an interesting topic for many ITS services. The increased availability of data collection sensors increases the availability of predictor variables but also highlights the processing issues related to this big data availability. In this paper we analyse the potential of big data and supervised machine learning techniques for effectively forecasting travel times. For this purpose we used fused data from three data sources (Global Positioning System vehicle tracks, road network infrastructure data and meteorological data) and four machine learning techniques (k-nearest neighbours, support vector machines, boosting trees and random forest). To evaluate the forecasting results we compared them across different road classes in terms of absolute values, measured in minutes, and the mean squared percentage error. For road classes with high average speeds and long road segments, the machine learning techniques forecasted travel times with small relative error, while for road classes with small average speeds and segment lengths this was a more demanding task. All three data sources proved to have a high impact on travel time forecast accuracy, and the best results (taking into account all road classes) were achieved for the k-nearest neighbours and random forest techniques.
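
    A compact sketch of the comparison described above, under stated assumptions: synthetic features stand in for the fused data set, and k-nearest neighbours and a random forest are scored with the mean squared percentage error.

    ```python
    # Hedged sketch: compare two of the four techniques on synthetic travel-time data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    X = rng.uniform(size=(2000, 4))          # e.g., segment length, speed, hour, rain
    y = 5.0 + 10.0 * X[:, 0] + 3.0 * X[:, 3] + rng.normal(scale=0.5, size=2000)  # minutes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def mspe(y_true, y_pred):
        return np.mean(((y_true - y_pred) / y_true) ** 2)

    for name, model in [("kNN", KNeighborsRegressor(n_neighbors=5)),
                        ("RF", RandomForestRegressor(n_estimators=100, random_state=0))]:
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name, round(mspe(y_te, pred), 4))
    ```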

  3. Real-time wavelet-based inline banknote-in-bundle counting for cut-and-bundle machines

    Science.gov (United States)

    Petker, Denis; Lohweg, Volker; Gillich, Eugen; Türke, Thomas; Willeke, Harald; Lochmüller, Jens; Schaede, Johannes

    2011-03-01

    Automatic banknote sheet cut-and-bundle machines are widely used within the scope of banknote production. Besides the cutting-and-bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. We present in this work a new real-time Touchless Counting and perspective cutting blade quality assurance system, based on a Color-CCD-Camera and a dual-core Computer, for cut-and-bundle applications in banknote production. The system, which applies Wavelet-based multi-scale filtering, is able to count banknotes inside a 100-bundle within 200-300 ms, depending on the window size.

  4. Scheduling with Learning Effects and/or Time-Dependent Processing Times to Minimize the Weighted Number of Tardy Jobs on a Single Machine

    Directory of Open Access Journals (Sweden)

    Jianbo Qian

    2013-01-01

    Full Text Available We consider single machine scheduling problems with learning/deterioration effects and time-dependent processing times, with due date assignment consideration, and our objective is to minimize the weighted number of tardy jobs. By reducing all versions of the problem to an assignment problem, we solve them in O(n^4) time. For some important special cases, the time complexity can be improved to O(n^2) using dynamic programming techniques.
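
The reduction to an assignment problem mentioned above can be sketched as follows: build an n x n cost matrix whose (j, k) entry is the cost of placing job j in sequence position k and solve it with the Hungarian algorithm. The positional cost model below is a simplified stand-in, not the exact cost function of the paper.

```python
# Minimal sketch of the "reduce to an assignment problem" idea: C[j, k] is the
# cost of scheduling job j in position k, and the Hungarian algorithm picks the
# cheapest one-to-one pairing. The position-dependent cost (learning effect
# p_j * (k+1)**a) is a simplified illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

p = np.array([4.0, 2.0, 7.0, 3.0])   # basic processing times
w = np.array([1.0, 2.0, 1.5, 1.0])   # job weights
a = -0.2                              # learning index: later positions process faster

n = len(p)
C = np.empty((n, n))
for j in range(n):
    for k in range(n):
        C[j, k] = w[j] * p[j] * (k + 1) ** a

rows, cols = linear_sum_assignment(C)          # optimal job -> position pairing
order = [int(job) for _, job in sorted(zip(cols, rows))]
print("job order:", order, "total cost:", round(C[rows, cols].sum(), 3))
```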

  5. Objected constrained registration and manifold learning: A new patient setup approach in image guided radiation therapy of thoracic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Chen Ting; Jabbour, Salma K.; Haffty, Bruce G.; Yue, Ning [Radiation Oncology Department, Cancer Institute of New Jersey, University of Medicine and Dentistry of New Jersey, 195 Little Albany Street, New Brunswick, New Jersey 08901 (United States); Qin Songbing [Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou 215006 (China)

    2013-04-15

    Purpose: The management of thoracic malignancies with radiation therapy is complicated by continuous target motion. In this study, a real time motion analysis approach is proposed to improve the accuracy of patient setup. Methods: For 11 lung cancer patients a long training fluoroscopy was acquired before the first treatment, and multiple short testing fluoroscopies were acquired weekly at the pretreatment patient setup of image guided radiotherapy (IGRT). The data analysis consisted of three steps: first a 4D target motion model was constructed from 4DCT and projected to the training fluoroscopy through deformable registration. Then the manifold learning method was used to construct a 2D subspace based on the target motion (kinetic) and location (static) information in the training fluoroscopy. Thereafter the respiratory phase in the testing fluoroscopy was determined by finding its location in the subspace. Finally, the phase determined testing fluoroscopy was registered to the corresponding 4DCT to derive the pretreatment patient position adjustment for the IGRT. The method was tested on clinical image sets and numerical phantoms. Results: The registration successfully reconstructed the 4D motion model with over 98% volume similarity in 4DCT, and over 95% area similarity in the training fluoroscopy. The machine learning method derived the phase values in over 98% and 93% test images of the phantom and patient images, respectively, with less than 3% phase error. The setup approach achieved an average accumulated setup error less than 1.7 mm in the cranial-caudal direction and less than 1 mm in the transverse plane. All results were validated against the ground truth of manual delineations by an experienced radiation oncologist. The expected total time for the pretreatment setup analysis was less than 10 s. Conclusions: By combining the registration and machine learning, the proposed approach has the potential to improve the accuracy of pretreatment setup for

  6. Objected constrained registration and manifold learning: A new patient setup approach in image guided radiation therapy of thoracic cancer

    International Nuclear Information System (INIS)

    Chen Ting; Jabbour, Salma K.; Haffty, Bruce G.; Yue, Ning; Qin Songbing

    2013-01-01

    Purpose: The management of thoracic malignancies with radiation therapy is complicated by continuous target motion. In this study, a real time motion analysis approach is proposed to improve the accuracy of patient setup. Methods: For 11 lung cancer patients a long training fluoroscopy was acquired before the first treatment, and multiple short testing fluoroscopies were acquired weekly at the pretreatment patient setup of image guided radiotherapy (IGRT). The data analysis consisted of three steps: first a 4D target motion model was constructed from 4DCT and projected to the training fluoroscopy through deformable registration. Then the manifold learning method was used to construct a 2D subspace based on the target motion (kinetic) and location (static) information in the training fluoroscopy. Thereafter the respiratory phase in the testing fluoroscopy was determined by finding its location in the subspace. Finally, the phase determined testing fluoroscopy was registered to the corresponding 4DCT to derive the pretreatment patient position adjustment for the IGRT. The method was tested on clinical image sets and numerical phantoms. Results: The registration successfully reconstructed the 4D motion model with over 98% volume similarity in 4DCT, and over 95% area similarity in the training fluoroscopy. The machine learning method derived the phase values in over 98% and 93% test images of the phantom and patient images, respectively, with less than 3% phase error. The setup approach achieved an average accumulated setup error less than 1.7 mm in the cranial-caudal direction and less than 1 mm in the transverse plane. All results were validated against the ground truth of manual delineations by an experienced radiation oncologist. The expected total time for the pretreatment setup analysis was less than 10 s. Conclusions: By combining the registration and machine learning, the proposed approach has the potential to improve the accuracy of pretreatment setup for

  7. A wireless brain-machine interface for real-time speech synthesis.

    Directory of Open Access Journals (Sweden)

    Frank H Guenther

    2009-12-01

    Full Text Available Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech. Neural signals recorded by a Neurotrophic Electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome, characterized by near-total paralysis with spared cognition, were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. Accuracy of the volunteer's vowel productions with the synthesizer improved quickly with practice, with a 25% improvement in average hit rate (from 45% to 70%) and a 46% decrease in average endpoint error from the first to the last block of a three-vowel task. Our results support the feasibility of neural prostheses that may have the potential to provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also provide an initial glimpse into the functional properties of neurons in speech motor cortical areas.

  8. The research of knitting needle status monitoring setup

    Science.gov (United States)

    Liu, Lu; Liao, Xiao-qing; Zhu, Yong-kang; Yang, Wei; Zhang, Pei; Zhao, Yong-kai; Huang, Hui-jie

    2013-09-01

    In textile production, quality control and testing is the key to ensure the process and improve the efficiency. Defect of the knitting needles is the main factor affecting the quality of the appearance of textiles. Defect detection method based on machine vision and image processing technology is universal. This approach does not effectively identify the defect generated by damaged knitting needles and raise the alarm. We developed a knitting needle status monitoring setup using optical imaging, photoelectric detection and weak signal processing technology to achieve real-time monitoring of weaving needles' position. Depending on the shape of the knitting needle, we designed a kind of Glass Optical Fiber (GOF) light guides with a rectangular port used for transmission of the signal light. To be able to capture the signal of knitting needles accurately, we adopt a optical 4F system which has better imaging quality and simple structure and there is a rectangle image on the focal plane after the system. When a knitting needle passes through position of the rectangle image, the reflected light from needle surface will back to the GOF light guides along the same optical system. According to the intensity of signals, the computer control unit distinguish that the knitting needle is broken or curving. The experimental results show that this system can accurately detect the broken needles and the curving needles on the knitting machine in operating condition.

  9. Languages, compilers and run-time environments for distributed memory machines

    CERN Document Server

    Saltz, J

    1992-01-01

    Papers presented within this volume cover a wide range of topics related to programming distributed memory machines. Distributed memory architectures, although having the potential to supply the very high levels of performance required to support future computing needs, present awkward programming problems. The major issue is to design methods which enable compilers to generate efficient distributed memory programs from relatively machine independent program specifications. This book is the compilation of papers describing a wide range of research efforts aimed at easing the task of programming

  10. Sci-Fri AM: Quality, Safety, and Professional Issues 04: Predicting waiting times in Radiation Oncology using machine learning

    International Nuclear Information System (INIS)

    Joseph, Ackeem; Herrera, David; Hijal, Tarek; Hendren, Laurie; Leung, Alvin; Wainberg, Justin; Sawaf, Marya; Maxim, Gorshkov; Maglieri, Robert; Keshavarz, Mehryar; Kildea, John

    2016-01-01

    We describe a method for predicting waiting times in radiation oncology. Machine learning is a powerful predictive modelling tool that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The patient waiting experience remains one of the most vexing challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick and in pain, to worry about when they will receive the care they need. In radiation oncology, patients typically experience three types of waiting: Waiting at home for their treatment plan to be prepared Waiting in the waiting room for daily radiotherapy Waiting in the waiting room to see a physician in consultation or follow-up These waiting periods are difficult for staff to predict and only rough estimates are typically provided, based on personal experience. In the present era of electronic health records, waiting times need not be so uncertain. At our centre, we have incorporated the electronic treatment records of all previously-treated patients into our machine learning model. We found that the Random Forest Regression model provides the best predictions for daily radiotherapy treatment waiting times (type 2). Using this model, we achieved a median residual (actual minus predicted value) of 0.25 minutes and a standard deviation residual of 6.5 minutes. The main features that generated the best fit model (from most to least significant) are: Allocated time, median past duration, fraction number and the number of treatment fields.
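
A minimal sketch of the kind of model described above, assuming a tabular export of past treatment records with the four listed features; the column names and file are hypothetical placeholders.

```python
# Sketch of a random forest regression over the listed features (allocated time,
# median past duration, fraction number, number of treatment fields), evaluated
# by the residual (actual minus predicted). Data layout is an assumption.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("treatment_history.csv")             # hypothetical record export
features = ["allocated_time_min", "median_past_duration_min",
            "fraction_number", "num_treatment_fields"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["actual_duration_min"], test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=300, random_state=42)
rf.fit(X_tr, y_tr)
residuals = y_te.values - rf.predict(X_te)
print(f"median residual: {np.median(residuals):.2f} min, "
      f"std of residuals: {residuals.std():.2f} min")
```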

  11. Real-time power angle determination of salient-pole synchronous machine based on air gap measurements

    Energy Technology Data Exchange (ETDEWEB)

    Despalatovic, Marin; Jadric, Martin; Terzic, Bozo [FESB University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, R. Boskovica bb, 21000 Split (Croatia)

    2008-11-15

    This paper presents a new method for the real-time power angle determination of the salient-pole synchronous machines. This method is based on the terminal voltage and air gap measurements, which are the common features of the hydroturbine generator monitoring system. The raw signal of the air gap sensor is used to detect the rotor displacement with reference to the fundamental component of the terminal voltage. First, the algorithm developed for the real-time power angle determination is tested using the synthetic data obtained by the standard machine model simulation. Thereafter, the experimental investigation is carried out on the 26 MVA utility generator. The validity of the method is verified by comparing with another method, which is based on a tooth gear mounted on the rotor shaft. The proposed real-time algorithm has an adequate accuracy and needs a very short processing time. For applications that do not require real-time processing, such as the estimation of the synchronous machine parameters, the accuracy is additionally increased by applying an off-line data-processing algorithm. (author)

  12. Sci-Fri AM: Quality, Safety, and Professional Issues 04: Predicting waiting times in Radiation Oncology using machine learning

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Ackeem; Herrera, David; Hijal, Tarek; Hendren, Laurie; Leung, Alvin; Wainberg, Justin; Sawaf, Marya; Maxim, Gorshkov; Maglieri, Robert; Keshavarz, Mehryar; Kildea, John [McGill University Health Centre (Canada)

    2016-08-15

    We describe a method for predicting waiting times in radiation oncology. Machine learning is a powerful predictive modelling tool that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The patient waiting experience remains one of the most vexing challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick and in pain, to worry about when they will receive the care they need. In radiation oncology, patients typically experience three types of waiting: Waiting at home for their treatment plan to be prepared Waiting in the waiting room for daily radiotherapy Waiting in the waiting room to see a physician in consultation or follow-up These waiting periods are difficult for staff to predict and only rough estimates are typically provided, based on personal experience. In the present era of electronic health records, waiting times need not be so uncertain. At our centre, we have incorporated the electronic treatment records of all previously-treated patients into our machine learning model. We found that the Random Forest Regression model provides the best predictions for daily radiotherapy treatment waiting times (type 2). Using this model, we achieved a median residual (actual minus predicted value) of 0.25 minutes and a standard deviation residual of 6.5 minutes. The main features that generated the best fit model (from most to least significant) are: Allocated time, median past duration, fraction number and the number of treatment fields.

  13. A novel machine learning model to predict abnormal Runway Occupancy Times and observe related precursors

    NARCIS (Netherlands)

    Herrema, Herrema Floris; Treve, V; Desart, B; Curran, R.; Visser, H.G.

    2017-01-01

    Accidents on the runway triggered the development and implementation of mitigation strategies. Therefore, the airline industry is moving toward proactive risk management, which aims to identify and predict risk precursors and to mitigate risks before accidents occur. For certain predictions Machine

  14. Performance optimization of a CNC machine through exploration of the timed state space

    NARCIS (Netherlands)

    Mota, M.A. Mujica; Piera, Miquel Angel

    2010-01-01

    Flexible production units provide very efficient mechanisms to adapt the product type and production rate according to fluctuations in demand. Finding the optimal sequence of the different manufacturing tasks on each machine is a challenging problem that can yield important productivity benefits.

  15. Real Time Robot Soccer Game Event Detection Using Finite State Machines with Multiple Fuzzy Logic Probability Evaluators

    Directory of Open Access Journals (Sweden)

    Elmer P. Dadios

    2009-01-01

    Full Text Available This paper presents a new algorithm for real time event detection using Finite State Machines with multiple Fuzzy Logic Probability Evaluators (FLPEs. A machine referee for a robot soccer game is developed and is used as the platform to test the proposed algorithm. A novel technique to detect collisions and other events in microrobot soccer game under inaccurate and insufficient information is presented. The robots' collision is used to determine goalkeeper charging and goal score events which are crucial for the machine referee's decisions. The Main State Machine (MSM handles the schedule of event activation. The FLPE calculates the probabilities of the true occurrence of the events. Final decisions about the occurrences of events are evaluated and compared through threshold crisp probability values. The outputs of FLPEs can be combined to calculate the probability of an event composed of subevents. Using multiple fuzzy logic system, the FLPE utilizes minimal number of rules and can be tuned individually. Experimental results show the accuracy and robustness of the proposed algorithm.

  16. Experimental set-up for time resolved small angle X-ray scattering studies of nanoparticles formation using a free-jet micromixer

    Energy Technology Data Exchange (ETDEWEB)

    Marmiroli, Benedetta [Institute for Biophysics and Nanosystem Research, Austrian Academy of Science, Schmiedlstrasse 6, Graz (Austria); Grenci, Gianluca [TASC INFM/CNR, SS 14 km 163.5, Basovizza, TS (Italy); Cacho-Nerin, Fernando; Sartori, Barbara; Laggner, Peter [Institute for Biophysics and Nanosystem Research, Austrian Academy of Science, Schmiedlstrasse 6, Graz (Austria); Businaro, Luca [TASC INFM/CNR, SS 14 km 163.5, Basovizza, TS (Italy); Amenitsch, Heinz, E-mail: heinz.amenitsch@elettra.trieste.i [Institute for Biophysics and Nanosystem Research, Austrian Academy of Science, Schmiedlstrasse 6, Graz (Austria)

    2010-02-15

    Recently, we have designed, fabricated and tested a free-jet micromixer for time resolved small angle X-ray scattering (SAXS) studies of nanoparticles formation in the <100 µs time range. The microjet has a diameter of 25 µm and a time of first accessible measurement of 75 µs has been obtained. This result can still be improved. In this communication, we present a method to estimate whether a given chemical or biological reaction can be investigated with the micromixer, and to optimize the beam size for the measurement at the chosen SAXS beamline. Moreover, we describe a system based on stereoscopic imaging which allows the alignment of the jet with the X-ray beam with a precision of 20 µm. The proposed experimental procedures have been successfully employed to observe the formation of calcium carbonate (CaCO3) nanoparticles from the reaction of sodium carbonate (Na2CO3) and calcium chloride (CaCl2). The induction time has been estimated in the order of 200 µs and the determined radius of the particles is about 14 nm.

  17. A new setup for the underground study of capture reactions

    CERN Document Server

    Casella, C; Lemut, A; Limata, B; Bemmerer, D; Bonetti, R; Broggini, C; Campajola, L; Cocconi, P; Corvisiero, P; Cruz, J; D'Onofrio, A; Formicola, A; Fülöp, Z; Gervino, G; Gialanella, L; Guglielmetti, A; Gustavino, C; Gyürky, G; Loiano, A; Imbriani, G; Jesus, A P; Junker, M; Musico, P; Ordine, A; Parodi, F; Parolin, M; Pinto, J V; Prati, P; Ribeiro, J P; Roca, V; Rogalla, D; Rolfs, C; Romano, M; Rossi-Alvarez, C; Rottura, A; Schuemann, F; Somorjai, E; Strieder, F; Terrasi, F; Trautvetter, H P; Vomiero, A; Zavatarelli, S

    2002-01-01

    For the study of astrophysically relevant capture reactions in the underground laboratory LUNA a new setup of high sensitivity has been implemented. The setup includes a windowless gas target, a 4π BGO summing crystal, and beam calorimeters. The setup has recently been used to measure the d(p,γ)3He cross-section for the first time within its solar Gamow peak, i.e. down to 2.5 keV c.m. energy. The features of the optimized setup are described.

  18. Setup of a bench for short time laser flash diffusivity measurement; Mise en place d'un banc de mesure de diffusivite flash laser aux temps courts

    Energy Technology Data Exchange (ETDEWEB)

    Remy, B.; Maillet, D.; Degiovanni, A. [Centre National de la Recherche Scientifique (CNRS), 54 - Vandoeuvre-les-Nancy (France)

    1996-12-31

    In the domain of thermal engineering, new materials have been developed which are characterized by a high thermal diffusivity (5 to 10 times greater than the best usual conductors: gold, copper, silicon...) but also by a small thickness (from a few hundred microns down to a few microns). Their response time is very short (a few milliseconds to a few microseconds) and they are mainly used as heat dissipating materials. The classical thermal diffusivity measurement techniques are unable to analyze the thermal properties of these materials. Therefore, a bench for fast thermal diffusivity measurements has been developed that uses a laser system for the excitation and an infrared detector for the measurement of temperature. In this study, the measurement bench is described and the metrological problems encountered are discussed. (J.S.) 10 refs.

  19. Evaluation of cleaning and disinfection performance of automatic washer disinfectors machines in programs presenting different cycle times and temperatures

    OpenAIRE

    Bergo,Maria do Carmo Noronha Cominato

    2006-01-01

    Thermal washer-disinfectors represent a technology that brought about great advantages such as, establishment of protocols, standard operating procedures, reduction in occupational risk of a biological and environmental nature. The efficacy of the cleaning and disinfection obtained by automatic washer-disinfector machines running programs with different times and temperatures determined by the different official agencies was validated according to recommendations from ISO Standards 15883-...

  20. THE REAL-TIME SYSTEMS, ANALYSIS OF POSSIBILITIES OF THEIR APPLICATION IN THE MANAGEMENT OF BURLY CONSTRUCTION MACHINES

    Directory of Open Access Journals (Sweden)

    S. O. Yakovlev

    2008-03-01

    Full Text Available At present, computers are mostly used in batch-processing mode or for solving separate jobs loaded from the operator's station. Although the advantages of real-time systems and interactive modes of operation have become obvious, too little attention is paid to applying these progressive methods to the management of fleets of construction machines.

  1. Failure prediction using machine learning and time series in optical network.

    Science.gov (United States)

    Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo

    2017-08-07

    In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
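
A hedged sketch of the two ingredients named above, double exponential smoothing to extrapolate a monitored performance parameter and an SVM to classify the resulting features as normal or at risk; the data, window sizes and labels are illustrative assumptions, not the authors' setup.

```python
# Sketch: Holt's double exponential smoothing (DES) extrapolates a monitored
# parameter, and an SVM maps simple window statistics to a failure-risk label.
# The synthetic data and thresholds are illustrative only.
import numpy as np
from sklearn.svm import SVC

def des_forecast(x, alpha=0.5, beta=0.3, horizon=5):
    """Holt's double exponential smoothing; returns a `horizon`-step forecast."""
    s, b = x[0], x[1] - x[0]
    for value in x[1:]:
        s_prev = s
        s = alpha * value + (1 - alpha) * (s + b)
        b = beta * (s - s_prev) + (1 - beta) * b
    return s + horizon * b

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.5, size=(50, 20))                      # stable parameter windows
degrading = np.cumsum(rng.normal(0.2, 0.5, size=(50, 20)), axis=1)  # drifting parameter windows

def featurize(window):
    return [window.mean(), window.std(), des_forecast(window)]

X = np.array([featurize(w) for w in np.vstack([healthy, degrading])])
y = np.array([0] * 50 + [1] * 50)          # 0 = normal, 1 = failure risk

clf = SVC(kernel="rbf").fit(X, y)
print("predicted risk label:", clf.predict([featurize(degrading[0])])[0])
```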

  2. Online scheduling of jobs with fixed start times on related machines

    Czech Academy of Sciences Publication Activity Database

    Epstein, L.; Jeż, Łukasz; Sgall, J.; van Stee, R.

    2016-01-01

    Roč. 74, č. 1 (2016), s. 156-176 ISSN 0178-4617 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : online scheduling * online algorithms * related machines Subject RIV: BA - General Mathematics Impact factor: 0.735, year: 2016 http://link.springer.com/article/10.1007%2Fs00453-014-9940-2

  3. Sequencing games with Just-in-Time arrival, and related games

    NARCIS (Netherlands)

    Lohmann, E.R.M.A.; Borm, P.E.M.; Slikker, M.

    2014-01-01

    In this paper sequencing situations with Just-in-Time (JiT) arrival are introduced. This new type of one-machine sequencing situations assumes that a job is available to be handled by the machine as soon as its predecessor is finished. A basic predecessor dependent set-up time is incorporated in the

  4. An HTS machine laboratory prototype

    DEFF Research Database (Denmark)

    Mijatovic, Nenad; Jensen, Bogi Bech; Træholt, Chresten

    2012-01-01

    This paper describes the Superwind HTS machine laboratory setup, which is a small-scale HTS machine designed and built as part of the efforts to identify and tackle some of the challenges the HTS machine design may face. One of the challenges of HTS machines is a Torque Transfer Element (TTE) which...... conduction compared to a shaft. The HTS machine was successfully cooled to 77 K and tests have been performed. The IV curves of the HTS field winding employing 6 HTS coils indicate that two of the coils had been damaged. A maximal torque of 78 Nm was recorded during the experiments. Loaded with 33...

  5. Noninferiority, randomized, controlled trial comparing embryo development using media developed for sequential or undisturbed culture in a time-lapse setup.

    Science.gov (United States)

    Hardarson, Thorir; Bungum, Mona; Conaghan, Joe; Meintjes, Marius; Chantilis, Samuel J; Molnar, Laszlo; Gunnarsson, Kristina; Wikland, Matts

    2015-12-01

    To study whether a culture medium that allows undisturbed culture supports human embryo development to the blastocyst stage equivalently to a well-established sequential media. Randomized, double-blinded sibling trial. Independent in vitro fertilization (IVF) clinics. One hundred twenty-eight patients, with 1,356 zygotes randomized into two study arms. Embryos randomly allocated into two study arms to compare embryo development on a time-lapse system using a single-step medium or sequential media. Percentage of good-quality blastocysts on day 5. Percentage of day 5 good-quality blastocysts was 21.1% (standard deviation [SD] ± 21.6%) and 22.2% (SD ± 22.1%) in the single-step time-lapse medium (G-TL) and the sequential media (G-1/G-2) groups, respectively. The mean difference (-1.2; 95% CI, -6.0; 3.6) between the two media systems for the primary end point was less than the noninferiority margin of -8%. There was a statistically significantly lower number of good-quality embryos on day 3 in the G-TL group [50.7% (SD ± 30.6%) vs. 60.8% (SD ± 30.7%)]. Four out of the 11 measured morphokinetic parameters were statistically significantly different for the two media used. The mean levels of ammonium concentration in the media at the end of the culture period was statistically significantly lower in the G-TL group as compared with the G-2 group. We have shown that a single-step culture medium supports blastocyst development equivalently to established sequential media. The ammonium concentrations were lower in the single-step media, and the measured morphokinetic parameters were modified somewhat. NCT01939626. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
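
The noninferiority conclusion rests on comparing the lower bound of the 95% confidence interval of the between-arm difference with the -8% margin. The sketch below reproduces that comparison with the reported summary numbers; it ignores the sibling-oocyte pairing of the actual trial, so the recomputed interval is only approximate.

```python
# Sketch of the noninferiority logic: the single-step medium is noninferior if
# the lower bound of the 95% CI of the difference in day-5 good-quality
# blastocyst rate stays above the -8% margin. Summary numbers are taken from
# the abstract; the unpaired CI below is an approximation.
import math

mean_gtl, sd_gtl, n_gtl = 21.1, 21.6, 128     # single-step time-lapse medium arm (%)
mean_seq, sd_seq, n_seq = 22.2, 22.1, 128     # sequential media arm (%)
margin = -8.0

diff = mean_gtl - mean_seq
se = math.sqrt(sd_gtl**2 / n_gtl + sd_seq**2 / n_seq)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference {diff:.1f}%, approx. 95% CI ({ci_low:.1f}, {ci_high:.1f})")
print("noninferior" if ci_low > margin else "inconclusive / inferior")
```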

  6. Data Mining and Machine Learning in Time-Domain Discovery and Classification

    Science.gov (United States)

    Bloom, Joshua S.; Richards, Joseph W.

    2012-03-01

    The changing heavens have played a central role in the scientific effort of astronomers for centuries. Galileo's synoptic observations of the moons of Jupiter and the phases of Venus starting in 1610, provided strong refutation of Ptolemaic cosmology. These observations came soon after the discovery of Kepler's supernova had challenged the notion of an unchanging firmament. In more modern times, the discovery of a relationship between period and luminosity in some pulsational variable stars [41] led to the inference of the size of the Milky way, the distance scale to the nearest galaxies, and the expansion of the Universe (see Ref. [30] for review). Distant explosions of supernovae were used to uncover the existence of dark energy and provide a precise numerical account of dark matter (e.g., [3]). Repeat observations of pulsars [71] and nearby main-sequence stars revealed the presence of the first extrasolar planets [17,35,44,45]. Indeed, time-domain observations of transient events and variable stars, as a technique, influences a broad diversity of pursuits in the entire astronomy endeavor [68]. While, at a fundamental level, the nature of the scientific pursuit remains unchanged, the advent of astronomy as a data-driven discipline presents fundamental challenges to the way in which the scientific process must now be conducted. Digital images (and data cubes) are not only getting larger, there are more of them. On logistical grounds, this taxes storage and transport systems. But it also implies that the intimate connection that astronomers have always enjoyed with their data - from collection to processing to analysis to inference - necessarily must evolve. Figure 6.1 highlights some of the ways that the pathway to scientific inference is now influenced (if not driven by) modern automation processes, computing, data-mining, and machine-learning (ML). The emerging reliance on computation and ML is a general one - a central theme of this book - but the time

  7. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling.

    Science.gov (United States)

    Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian

    2017-10-16

    Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.
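
The concordance index (C-Index) used above to compare models can be computed directly as the fraction of comparable patient pairs whose predicted risk ordering agrees with the observed time-to-event ordering; a minimal censoring-aware sketch with toy numbers follows.

```python
# Minimal sketch of the concordance index for time-to-event data: a pair is
# comparable only if the earlier time corresponds to an observed event, and it
# is concordant if the earlier-failing patient has the higher predicted risk.
import numpy as np

def concordance_index(time, event, risk):
    """time: follow-up times, event: 1 if event observed, risk: predicted risk score."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

time = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
event = np.array([1, 0, 1, 1, 0])
risk = np.array([0.9, 0.4, 0.2, 0.8, 0.5])
print(f"C-Index = {concordance_index(time, event, risk):.2f}")
```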

  8. Beam Loss Monitoring for LHC Machine Protection

    Science.gov (United States)

    Holzer, Eva Barbara; Dehning, Bernd; Effnger, Ewald; Emery, Jonathan; Grishin, Viatcheslav; Hajdu, Csaba; Jackson, Stephen; Kurfuerst, Christoph; Marsili, Aurelien; Misiowiec, Marek; Nagel, Markus; Busto, Eduardo Nebot Del; Nordt, Annika; Roderick, Chris; Sapinski, Mariusz; Zamantzas, Christos

    The energy stored in the nominal LHC beams is two times 362 MJ, 100 times the energy of the Tevatron. As little as 1 mJ/cm3 deposited energy quenches a magnet at 7 TeV and 1 J/cm3 causes magnet damage. The beam dumps are the only places to safely dispose of this beam. One of the key systems for machine protection is the beam loss monitoring (BLM) system. About 3600 ionization chambers are installed at likely or critical loss locations around the LHC ring. The losses are integrated in 12 time intervals ranging from 40 μs to 84 s and compared to threshold values defined in 32 energy ranges. A beam abort is requested when potentially dangerous losses are detected or when any of the numerous internal system validation tests fails. In addition, loss data are used for machine set-up and operational verifications. The collimation system for example uses the loss data for set-up and regular performance verification. Commissioning and operational experience of the BLM are presented: The machine protection functionality of the BLM system has been fully reliable; the LHC availability has not been compromised by false beam aborts.
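
As a rough illustration of the running-sum scheme described above, the sketch below integrates a simulated loss signal over several window lengths and compares each running sum with its own abort threshold; the window lengths, thresholds and signal are placeholders, not operational LHC values.

```python
# Illustrative sketch (not CERN code): integrate a loss signal over several
# running windows and compare each integral with a per-window threshold.
import numpy as np

sample_dt = 40e-6                                 # base acquisition period, s
windows_s = [40e-6, 640e-6, 10e-3, 1.3, 84.0]     # subset of integration windows (illustrative)
thresholds = [5e-3, 2e-2, 5e-2, 1.0, 10.0]        # per-window abort thresholds (arbitrary units)

rng = np.random.default_rng(1)
signal = np.abs(rng.normal(1e-5, 1e-5, size=500_000))   # ~20 s of simulated loss readings

def max_running_sum(x, n):
    """Largest sum of any n consecutive samples (cumulative-sum trick)."""
    n = min(n, len(x))
    c = np.concatenate(([0.0], np.cumsum(x)))
    return float((c[n:] - c[:-n]).max())

for w, thr in zip(windows_s, thresholds):
    n = max(1, int(round(w / sample_dt)))
    worst = max_running_sum(signal, n)
    print(f"window {w:9.2e} s: max integrated loss {worst:.3e} -> "
          f"{'BEAM ABORT' if worst > thr else 'ok'}")
```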

  9. High-temperature metallography setup

    International Nuclear Information System (INIS)

    Blumenfeld, M.; Shmarjahu, D.; Elfassy, S.

    1979-06-01

    A high-temperature metallography setup is presented. In this setup the observation of processes such as that of copper recrystallization was made possible, and the structure of metals such as uranium could be revealed. A brief historical review of part of the research works that have been done with the help of high temperature metallographical observation technique since the beginning of this century is included. Detailed description of metallographical specimen preparation technique and theoretical criteria based on the rate of evaporation of materials present on the polished surface of the specimens are given

  10. Magnetic field-assisted electrochemical discharge machining

    International Nuclear Information System (INIS)

    Cheng, Chih-Ping; Mai, Chao-Chuang; Wu, Kun-Ling; Hsu, Yu-Shan; Yan, Biing-Hwa

    2010-01-01

    Electrochemical discharge machining (ECDM) is an effective unconventional method for micromachining in non-conducting materials, such as glass, quartz and some ceramics. However, since the spark discharge performance becomes unpredictable as the machining depth increases, it is hard to achieve precision geometry and efficient machining rate in ECDM drilling. One of the main factors for this is the lack of sufficient electrolyte flow in the narrow gap between the tool and the workpiece. In this study a magnetohydrodynamic (MHD) convection, which enhances electrolyte circulation, has been applied to the ECDM process in order to upgrade the machining accuracy and efficiency. During electrolysis in the presence of a magnetic field, the Lorentz force induces the charged ions to form an MHD convection. The MHD convection then forces the electrolyte into movement, thus enhancing circulation of electrolyte. Experimental results show that the MHD convection induced by the magnetic field can effectively enhance electrolyte circulation in the micro-hole, which contributes to higher machining efficiency. Micro-holes in glass with a depth of 450 µm are drilled in less than 20 s. At the same time, better electrolyte circulation can prevent deterioration of gas film quality with increasing machining depth, while ensuring stable electrochemical discharge. The improvement in the entrance diameter thus achieved was 23.8% while that in machining time reached 57.4%. The magnetic field-assisted approach proposed in the research does not require changes in the machining setup or electrolyte but has proved to achieve significant enhancement in both accuracy and efficiency of ECDM.

  11. Two years of experience with an automatic milking system: 1. Time on machine and successful attachments

    Directory of Open Access Journals (Sweden)

    M. Speroni

    2011-03-01

    Full Text Available The installation of an automatic milking system (AMS) is not simply the replacement of an old traditional milking parlour with a new milking machine, but it requires a deep modification of herd management (Spahr et al., 1997). Robotic milking is now considered fairly reliable and friendly and more than 1100 commercial farmers have installed at least one milking unit (De Koning et al., 2002). A lot of studies have been carried out on the interactions between milking robot, cows and farmer, but most of them refer to farms in Northern Europe. In December 2000, an AMS (Voluntary Milking System, VMSTM, De Laval) was installed at the Experimental Farm of Istituto Sperimentale per la Zootecnia in Cremona (Italy......

  12. Outsourcing and scheduling for a two-machine flow shop with release times

    Science.gov (United States)

    Ahmadizar, Fardin; Amiri, Zeinab

    2018-03-01

    This article addresses a two-machine flow shop scheduling problem where jobs are released intermittently and outsourcing is allowed. The first operations of outsourced jobs are processed by the first subcontractor, they are transported in batches to the second subcontractor for processing their second operations, and finally they are transported back to the manufacturer. The objective is to select a subset of jobs to be outsourced, to schedule both the in-house and the outsourced jobs, and to determine a transportation plan for the outsourced jobs so as to minimize the sum of the makespan and the outsourcing and transportation costs. Two mathematical models of the problem and several necessary optimality conditions are presented. A solution approach is then proposed by incorporating the dominance properties with an ant colony algorithm. Finally, computational experiments are conducted to evaluate the performance of the models and solution approach.

  13. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    Science.gov (United States)

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated system for these signals. In this work, the analysis based on high-resolution time-frequency methods, namely, Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD) are proposed to differentiate the dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely, naïve Bayes, support vector machine (SVM) of polynomial and radial basis kernel, random forest and rotation forests are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to show the nonstationary variations of sEMG signals. Most of the features exhibit statistically significant difference in the muscle fatigue and nonfatigue conditions. The maximum number of features (66%) is reduced by GA and BPSO for EMBD and BD-TFD respectively. The combination of EMBD- polynomial kernel based SVM is found to be most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. Particularly, the combination of EMBD- polynomial kernel based SVM could be used to
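
A sketch of the final classification stage described above, a polynomial-kernel SVM over a selected subset of time-frequency features; the data are synthetic and the fixed selection mask merely stands in for the GA/BPSO feature selection, which is not reproduced here.

```python
# Sketch: polynomial-kernel SVM separating fatigue from nonfatigue segments
# using a subset of time-frequency features. Data and selection mask are
# synthetic placeholders, not the study's sEMG features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X_nonfatigue = rng.normal(0.0, 1.0, size=(60, 12))   # 12 TFD features per segment
X_fatigue = rng.normal(0.8, 1.0, size=(60, 12))      # shifted spectral features under fatigue
X = np.vstack([X_nonfatigue, X_fatigue])
y = np.array([0] * 60 + [1] * 60)

selected = [0, 2, 3, 5, 7, 9, 10, 11]                 # stand-in for GA/BPSO-selected features
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
scores = cross_val_score(clf, X[:, selected], y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```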

  14. Just-in-time preemptive single machine problem with costs of earliness/tardiness, interruption and work-in-process

    Directory of Open Access Journals (Sweden)

    Mohammad Kazemi

    2012-04-01

    Full Text Available This paper considers a single machine scheduling problem with a just-in-time (JIT) approach in which preemption and idle time are allowed. It incorporates earliness/tardiness (E/T) penalties, interruption penalties and the holding cost of jobs that are waiting to be processed as work-in-process (WIP). Generally, in non-preemptive problems, E/T penalties are a function of the completion times of the jobs. Here, we introduce a non-linear preemptive scheduling model in which the earliness penalty depends on the starting time of a job. The model is linearized by an elaborately designed procedure to reach the optimum solution. To validate and verify the performance of the proposed model, computational results are presented by solving a number of numerical examples.

  15. Dependence of the mean time to failure of a hydraulic balancing machine unit on different factors for sectional pumps of the Alrosa JSC

    Science.gov (United States)

    Ovchinnikov, N. P.; Portnyagina, V. V.; Sobakina, M. P.

    2017-12-01

    This paper presents the factors that have the greatest impact on the mean time to failure of the hydraulic balancing machine unit working in the underground kimberlite mines of the Alrosa JSC, the hydraulic balancing machine unit being the least reliable structural element in terms of error-free operation. In addition, a multifactor linear dependence of the mean time to failure of the hydraulic balancing machine unit is derived for units that are parts of multistage sectional pumps in the underground kimberlite mines of the Alrosa JSC. In the future, this dependence can allow us to predict the durability of the least reliable structural element of a sectional pump.

  16. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    Science.gov (United States)

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR) are proposed to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data in this study and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
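
A GRNN is essentially Nadaraya-Watson kernel regression: the LAI estimate is a Gaussian-weighted average of training LAI values, weighted by the distance between the query reflectance vector and each training sample. The minimal sketch below uses synthetic (red, NIR, SWIR) reflectance-LAI pairs, not MODIS data.

```python
# Minimal GRNN sketch: prediction = kernel-weighted average of training targets.
import numpy as np

class GRNN:
    def __init__(self, sigma=0.03):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, Xq):
        preds = []
        for q in np.atleast_2d(Xq):
            d2 = np.sum((self.X - q) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))
            preds.append(np.dot(w, self.y) / (w.sum() + 1e-12))
        return np.array(preds)

# synthetic (red, NIR, SWIR) reflectances with an assumed dependence on LAI
rng = np.random.default_rng(3)
lai_train = rng.uniform(0.0, 6.0, size=200)
refl_train = np.column_stack([
    0.10 * np.exp(-0.4 * lai_train) + rng.normal(0, 0.005, 200),   # red decreases with LAI
    0.15 + 0.05 * lai_train + rng.normal(0, 0.01, 200),            # NIR increases with LAI
    0.20 * np.exp(-0.2 * lai_train) + rng.normal(0, 0.005, 200),   # SWIR decreases with LAI
])

grnn = GRNN(sigma=0.03).fit(refl_train, lai_train)
print("estimated LAI:", np.round(grnn.predict(refl_train[:3]), 2))
```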

  17. Time-resolved temperature measurements in a rapid compression machine using quantum cascade laser absorption in the intrapulse mode

    KAUST Repository

    Nasir, Ehson Fawad

    2016-07-16

    A temperature sensor based on the intrapulse absorption spectroscopy technique has been developed to measure in situ temperature time-histories in a rapid compression machine (RCM). Two quantum-cascade lasers (QCLs) emitting near 4.55μm and 4.89μm were operated in pulsed mode, causing a frequency "down-chirp" across two ro-vibrational transitions of carbon monoxide. The down-chirp phenomenon resulted in large spectral tuning (δν ∼2.8cm-1) within a single pulse of each laser at a high pulse repetition frequency (100kHz). The wide tuning range allowed the application of the two-line thermometry technique, thus making the sensor quantitative and calibration-free. The sensor was first tested in non-reactive CO-N2 gas mixtures in the RCM and then applied to cases of n-pentane oxidation. Experiments were carried out for end of compression (EOC) pressures and temperatures ranging 9.21-15.32bar and 745-827K, respectively. Measured EOC temperatures agreed with isentropic calculations within 5%. Temperature rise measured during the first-stage ignition of n-pentane is over-predicted by zero-dimensional kinetic simulations. This work presents, for the first time, highly time-resolved temperature measurements in reactive and non-reactive rapid compression machine experiments. © 2016 Elsevier Ltd.
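
The two-line thermometry relation underlying the sensor can be sketched as follows: the ratio of integrated absorbances of two transitions depends on temperature only through their lower-state energies, so the measured ratio can be inverted for temperature. The reference line strengths and lower-state energies below are illustrative placeholders, not the actual CO line data used by the authors.

```python
# Sketch of two-line thermometry: invert
#   R(T) = (S1(T0)/S2(T0)) * exp(-C2*(E1-E2)*(1/T - 1/T0))
# for T, where E1, E2 are the lower-state energies of the two transitions.
import math

C2 = 1.4388          # second radiation constant hc/k, cm*K
T0 = 296.0           # reference temperature, K

# placeholder reference line strengths and lower-state energies (not HITRAN values)
S1_T0, E1 = 1.0e-19, 300.0     # transition near 4.55 um: strength, E'' (cm^-1)
S2_T0, E2 = 5.0e-20, 1800.0    # transition near 4.89 um

def temperature_from_ratio(R):
    """Invert the two-line ratio for temperature."""
    inv_T = 1.0 / T0 - (math.log(R) - math.log(S1_T0 / S2_T0)) / (C2 * (E1 - E2))
    return 1.0 / inv_T

# forward-model a ratio at 800 K and recover the temperature
T_true = 800.0
R = (S1_T0 / S2_T0) * math.exp(-C2 * (E1 - E2) * (1.0 / T_true - 1.0 / T0))
print(f"recovered temperature: {temperature_from_ratio(R):.1f} K")
```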

  18. Thermal-Induced Errors Prediction and Compensation for a Coordinate Boring Machine Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2014-01-01

    Full Text Available To improve the CNC machine tools precision, a thermal error modeling for the motorized spindle was proposed based on time series analysis, considering the length of cutting tools and thermal declined angles, and the real-time error compensation was implemented. A five-point method was applied to measure radial thermal declinations and axial expansion of the spindle with eddy current sensors, solving the problem that the three-point measurement cannot obtain the radial thermal angle errors. Then the stationarity of the thermal error sequences was determined by the Augmented Dickey-Fuller Test Algorithm, and the autocorrelation/partial autocorrelation function was applied to identify the model pattern. By combining both Yule-Walker equations and information criteria, the order and parameters of the models were solved effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the axial maximum error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the X/Y-direction accuracy can reach up to 77.4% and 86%, respectively, which demonstrated that the proposed methods of measurement, modeling, and compensation were effective.
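
A sketch of the modelling steps named above: test the thermal error sequence for stationarity with the Augmented Dickey-Fuller test, difference it if necessary, and estimate AR coefficients from the Yule-Walker equations; the error series is synthetic and the model order is assumed rather than identified from the ACF/PACF as in the paper.

```python
# Sketch: ADF stationarity check + Yule-Walker AR fit for a thermal drift series.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.regression.linear_model import yule_walker

rng = np.random.default_rng(5)
t = np.arange(600)
thermal_error = 40 * (1 - np.exp(-t / 200)) + rng.normal(0, 0.8, t.size)  # um: drift + noise

series = thermal_error
if adfuller(series)[1] > 0.05:        # p-value above 0.05 -> treat as non-stationary, difference
    series = np.diff(series)

order = 2                              # assumed; normally chosen from ACF/PACF + information criteria
rho, sigma = yule_walker(series, order=order, method="mle")
print("AR coefficients:", np.round(rho, 3), " residual sigma:", round(float(sigma), 3))

# one-step-ahead prediction of the next error increment, as used for compensation
history = series[-order:][::-1]        # most recent value first
print("predicted next increment:", round(float(np.dot(rho, history)), 3))
```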

  19. Collider shot setup for Run 2 observations and suggestions

    International Nuclear Information System (INIS)

    Annala, J.; Joshel, B.

    1996-01-01

    This note is intended to provoke discussion on Collider Run II shot setup. We hope this is a start of activities that will converge on a functional description of what is needed for shot setups in Collider Run II. We will draw on observations of the present shot setup to raise questions and make suggestions for the next Collider run. It is assumed that the reader has some familiarity with the Collider operational issues. Shot setup is defined to be the time between the end of a store and the time the Main Control Room declares colliding beams. This is the time between Tevatron clock events SCE and SCB. This definition does not consider the time experiments use to turn on their detectors. This analysis was suggested by David Finley. The operational scenarios for Run II will require higher levels of reliability and speed for shot setup. See Appendix I and II. For example, we estimate that a loss of 3 pb^-1/week (with 8 hour stores) will occur if shot setups take 90 minutes instead of 30 minutes. In other words: If you do 12 shots for one week and accept an added delay of one minute in each shot, you will lose more than 60 nb^-1 for that week alone (based on a normal shot setup of 30 minutes). These demands should lead us to be much more pedantic about all the factors that affect shot setups. Shot setup will be viewed as a distinct process that is composed of several inter-dependent 'components': procedures, hardware, controls, and sociology. These components don't directly align with the different Accelerator Division departments, but are topical groupings of the needed accelerator functions. Defining these components, and categorizing our suggestions within them, are part of the goal of this document. Of course, some suggestions span several of these components

  20. The current status of the MASHA setup

    Energy Technology Data Exchange (ETDEWEB)

    Vedeneev, V. Yu., E-mail: vvedeneyev@gmail.com; Rodin, A. M.; Krupa, L.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Kamas, D. [Joint Institute for Nuclear Research, Flerov Laboratory of Nuclear Reactions (Russian Federation); Kliman, J. [Slovak Academy of Sciences, Institute of Physics (Slovakia); Komarov, A. B.; Motycak, S.; Novoselov, A. S.; Salamatin, V. S.; Stepantsov, S. V.; Podshibyakin, A. V.; Yukhimchuk, S. A. [Joint Institute for Nuclear Research, Flerov Laboratory of Nuclear Reactions (Russian Federation); Granja, C.; Pospisil, S. [Czech Technical University in Prague, Institute of Experimental and Applied Physics (Czech Republic)

    2017-11-15

    The MASHA setup, designed as a mass-separator with a resolving power of about 1700, which allows mass identification of superheavy nuclides, is described. The setup uses the solid ISOL (Isotope Separation On-Line) method. In the present article the upgrade of some parts of MASHA is described: the target box (rotating target + hot catcher), the ion source based on electron cyclotron resonance, data acquisition, beam diagnostics and control systems. The upgrade is undertaken in order to increase the total separation efficiency, reduce the separation time, improve the working stability of the installation and make possible continuous measurements at high beam currents. Ion source efficiency was measured in the autonomous regime using calibrated gas leaks of Kr and Xe injected directly into the ion source. Some results of the first experiments for the production of radon isotopes using the multi-nucleon transfer reaction 48Ca+242Pu are described in the present article. The use of a TIMEPIX detector with the MASHA setup for the identification of neutron-rich Rn isotopes is also described.

  1. The current status of the MASHA setup

    Science.gov (United States)

    Vedeneev, V. Yu.; Rodin, A. M.; Krupa, L.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Kamas, D.; Kliman, J.; Komarov, A. B.; Motycak, S.; Novoselov, A. S.; Salamatin, V. S.; Stepantsov, S. V.; Podshibyakin, A. V.; Yukhimchuk, S. A.; Granja, C.; Pospisil, S.

    2017-11-01

    The MASHA setup, designed as a mass-separator with a resolving power of about 1700, which allows mass identification of superheavy nuclides, is described. The setup uses the solid ISOL (Isotope Separation On-Line) method. In the present article the upgrade of some parts of MASHA is described: the target box (rotating target + hot catcher), the ion source based on electron cyclotron resonance, data acquisition, beam diagnostics and control systems. The upgrade is undertaken in order to increase the total separation efficiency, reduce the separation time, improve the working stability of the installation and make possible continuous measurements at high beam currents. Ion source efficiency was measured in the autonomous regime using calibrated gas leaks of Kr and Xe injected directly into the ion source. Some results of the first experiments for the production of radon isotopes using the multi-nucleon transfer reaction 48Ca+242Pu are described in the present article. The use of a TIMEPIX detector with the MASHA setup for the identification of neutron-rich Rn isotopes is also described.

  2. The current status of the MASHA setup

    International Nuclear Information System (INIS)

    Vedeneev, V. Yu.; Rodin, A. M.; Krupa, L.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Kamas, D.; Kliman, J.; Komarov, A. B.; Motycak, S.; Novoselov, A. S.; Salamatin, V. S.; Stepantsov, S. V.; Podshibyakin, A. V.; Yukhimchuk, S. A.; Granja, C.; Pospisil, S.

    2017-01-01

    The MASHA setup, designed as a mass-separator with a resolving power of about 1700, which allows mass identification of superheavy nuclides, is described. The setup uses the solid ISOL (Isotope Separation On-Line) method. In the present article the upgrade of some parts of MASHA is described: the target box (rotating target + hot catcher), the ion source based on electron cyclotron resonance, data acquisition, beam diagnostics and control systems. The upgrade is undertaken in order to increase the total separation efficiency, reduce the separation time, improve the working stability of the installation and make possible continuous measurements at high beam currents. Ion source efficiency was measured in the autonomous regime using calibrated gas leaks of Kr and Xe injected directly into the ion source. Some results of the first experiments for the production of radon isotopes using the multi-nucleon transfer reaction 48Ca+242Pu are described in the present article. The use of a TIMEPIX detector with the MASHA setup for the identification of neutron-rich Rn isotopes is also described.

  3. HVM-TP: A Time Predictable, Portable Java Virtual Machine for Hard Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Luckow, Kasper Søe; Thomsen, Bent; Korsholm, Stephan Erbs

    2014-01-01

    We present HVMTIME; a portable and time predictable JVM implementation with applications in resource-constrained hard real-time embedded systems. In addition, it implements the Safety Critical Java (SCJ) Level 1 specification. Time predictability is achieved by a combination of time predictable...... algorithms, exploiting the programming model of the SCJ specification, and harnessing static knowledge of the hosted SCJ system. This paper presents HVMTIME in terms of its design and capabilities, and demonstrates how a complete timing model of the JVM represented as a Network of Timed Automata can...... be obtained using the tool TetaSARTSJVM. Further, using the timing model, we derive Worst Case Execution Times (WCETs) and Best Case Execution Times (BCETs) of the Java Bytecodes....

  4. A note on resource allocation scheduling with group technology and learning effects on a single machine

    Science.gov (United States)

    Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu

    2017-09-01

    In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For the general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed, respectively. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.

  5. Real time PI-backstepping induction machine drive with efficiency optimization.

    Science.gov (United States)

    Farhani, Fethi; Ben Regaya, Chiheb; Zaafouri, Abderrahmen; Chaari, Abdelkader

    2017-09-01

    This paper describes a robust and efficient speed control of a three-phase induction machine (IM) subjected to load disturbances. First, a Multiple-Input Multiple-Output (MIMO) PI-Backstepping controller is proposed for robust and highly accurate tracking of the mechanical speed and rotor flux. Asymptotic stability of the control scheme is proven by Lyapunov stability theory. Second, an active online optimization algorithm is used to optimize the efficiency of the drive system. The efficiency improvement approach consists of adjusting the rotor flux with respect to the load torque in order to minimize total losses in the IM. A dSPACE DS1104 R&D board is used to implement the proposed solution. The experimental results, obtained on a 3 kW squirrel-cage IM, show that the reference speed as well as the rotor flux are reached rapidly, with a fast transient response and without overshoot. Load disturbances and IM parameter variations are handled well. The improvement of drive system efficiency reaches up to 180% at light load. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    Science.gov (United States)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction accuracy (DA). Among these three techniques, support vector regression (SVR) with the Sequential Minimal Optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
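
    As a rough illustration of the predictor-based forecasting compared in this record, the following sketch (not from the paper; the synthetic series, variable names and the scikit-learn SVR pipeline are assumptions) fits a support vector regression model to lagged price and exchange-rate features and reports RMSE, MAE and MAPE on a chronological hold-out split.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for monthly CPO price, soybean oil price and exchange rate.
      n = 360
      soy = 600 + np.cumsum(rng.normal(0, 5, n))
      fx = 3.5 + np.cumsum(rng.normal(0, 0.01, n))
      cpo = 0.8 * soy + 50 * fx + rng.normal(0, 10, n)

      # One-step-ahead prediction: next-month CPO price from the current predictors.
      X = np.column_stack([cpo[:-1], soy[:-1], fx[:-1]])
      y = cpo[1:]

      split = int(0.8 * len(y))                       # chronological train/test split
      model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
      model.fit(X[:split], y[:split])
      pred = model.predict(X[split:])

      err = y[split:] - pred
      rmse = np.sqrt(np.mean(err ** 2))
      mae = np.mean(np.abs(err))
      mape = np.mean(np.abs(err / y[split:])) * 100
      print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  MAPE={mape:.2f}%")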

  7. Comparison of random forests and support vector machine for real-time radar-derived rainfall forecasting

    Science.gov (United States)

    Yu, Pao-Shan; Yang, Tao-Chang; Chen, Szu-Yin; Kuo, Chen-Min; Tseng, Hung-Wei

    2017-09-01

    This study aims to compare two machine learning techniques, random forests (RF) and support vector machines (SVM), for real-time radar-derived rainfall forecasting. The real-time radar-derived rainfall forecasting models use the present grid-based radar-derived rainfall as the output variable and use antecedent grid-based radar-derived rainfall, grid position (longitude and latitude) and elevation as the input variables to forecast 1- to 3-h ahead rainfalls for all grids in a catchment. Grid-based radar-derived rainfalls of six typhoon events during 2012-2015 in three reservoir catchments of Taiwan were collected for model training and verification. Two kinds of forecasting models are constructed and compared: the single-mode forecasting model (SMFM) and the multiple-mode forecasting model (MMFM), based on RF and SVM. The SMFM uses the same model for 1- to 3-h ahead rainfall forecasting; the MMFM uses three different models for 1- to 3-h ahead forecasting. The forecasting performances reveal that the SMFMs give better performances than the MMFMs, and both the SVM-based and RF-based SMFMs show satisfactory performance for 1-h ahead forecasting. However, for 2- and 3-h ahead forecasting, the RF-based SMFM underestimates the observed radar-derived rainfalls in most cases, and the SVM-based SMFM gives better performance than the RF-based SMFM.

  8. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, so gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on intrinsic time-scale decomposition (ITD), singular value decomposition (SVD) and a support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the support vector machine is applied to classify the fault type of the gear. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods with EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms the K-nearest neighbors (K-NN) and back propagation (BP) classifiers. Moreover, the proposed approach can accurately diagnose and identify different fault types of gears under variable conditions.
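
    A minimal sketch of the SVD-plus-SVM part of this pipeline (illustrative only: the proper rotation components are assumed to be already available, here replaced by synthetic matrices, and scikit-learn's SVC stands in for the authors' classifier). The singular values of the component matrix serve as a compact, condition-robust feature vector.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)

      def svd_features(components):
          """components: (n_components, n_samples) matrix of PRCs for one signal."""
          return np.linalg.svd(components, compute_uv=False)   # singular value vector

      # Synthetic stand-in: 200 signals, each decomposed into 4 components of 256 samples;
      # the fault class shifts the energy distribution between components.
      X, y = [], []
      for label in (0, 1):                                     # 0 = healthy, 1 = faulty
          for _ in range(100):
              scale = [1.0, 0.5, 0.2, 0.1] if label == 0 else [0.3, 1.0, 0.6, 0.2]
              prcs = np.array(scale)[:, None] * rng.normal(size=(4, 256))
              X.append(svd_features(prcs))
              y.append(label)
      X, y = np.array(X), np.array(y)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
      print("test accuracy:", clf.score(Xte, yte))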

  9. Simple setup for gas-phase H/D exchange mass spectrometry coupled to electron transfer dissociation and ion mobility for analysis of polypeptide structure on a liquid chromatographic time scale.

    Science.gov (United States)

    Mistarz, Ulrik H; Brown, Jeffery M; Haselmann, Kim F; Rand, Kasper D

    2014-12-02

    Gas-phase hydrogen/deuterium exchange (HDX) is a fast and sensitive, yet unharnessed analytical approach for providing information on the structural properties of biomolecules, in a complementary manner to mass analysis. Here, we describe a simple setup for ND3-mediated millisecond gas-phase HDX inside a mass spectrometer immediately after ESI (gas-phase HDX-MS) and show utility for studying the primary and higher-order structure of peptides and proteins. HDX was achieved by passing N2-gas through a container filled with aqueous deuterated ammonia reagent (ND3/D2O) and admitting the saturated gas immediately upstream or downstream of the primary skimmer cone. The approach was implemented on three commercially available mass spectrometers and required no or minor fully reversible reconfiguration of gas-inlets of the ion source. Results from gas-phase HDX-MS of peptides using the aqueous ND3/D2O as HDX reagent indicate that labeling is facilitated exclusively through gaseous ND3, yielding similar results to the infusion of purified ND3-gas, while circumventing the complications associated with the use of hazardous purified gases. Comparison of the solution-phase- and gas-phase deuterium uptake of Leu-Enkephalin and Glu-Fibrinopeptide B, confirmed that this gas-phase HDX-MS approach allows for labeling of sites (heteroatom-bound non-amide hydrogens located on side-chains, N-terminus and C-terminus) not accessed by classical solution-phase HDX-MS. The simple setup is compatible with liquid chromatography and a chip-based automated nanoESI interface, allowing for online gas-phase HDX-MS analysis of peptides and proteins separated on a liquid chromatographic time scale at increased throughput. Furthermore, online gas-phase HDX-MS could be performed in tandem with ion mobility separation or electron transfer dissociation, thus enabling multiple orthogonal analyses of the structural properties of peptides and proteins in a single automated LC-MS workflow.

  10. Minimizing Total Busy Time with Application to Energy-efficient Scheduling of Virtual Machines in IaaS clouds

    OpenAIRE

    Quang-Hung, Nguyen; Thoai, Nam

    2016-01-01

    Infrastructure-as-a-Service (IaaS) clouds have become more popular, enabling users to run applications under virtual machines. Energy efficiency for IaaS clouds is still a challenge. This paper investigates the energy-efficient scheduling problems of virtual machines (VMs) onto physical machines (PMs) in IaaS clouds with the following characteristics: multiple resources, fixed intervals and non-preemption of virtual machines. The scheduling problems are NP-hard. Most existing works on VM placement reduce ...

  11. Influence of random setup error on dose distribution

    International Nuclear Information System (INIS)

    Zhai Zhenyu

    2008-01-01

    Objective: To investigate the influence of random setup error on dose distribution in radiotherapy and to determine the margin from ITV to PTV. Methods: A random sampling approach was used to simulate the field positions in the target coordinate system. The cumulative effect of random setup error was the sum of the dose distributions of all individual treatment fractions. A study of 100 cumulative effects yielded the shift sizes of the 90% dose point position. Margins from ITV to PTV caused by random setup error were chosen at the 95% probability level. Spearman's correlation was used to analyze the influence of each factor. Results: The average shift size of the 90% dose point position was 0.62, 1.84, 3.13, 4.78, 6.34 and 8.03 mm for random setup errors of 1, 2, 3, 4, 5 and 6 mm, respectively. Univariate analysis showed that the size of the margin was associated only with the size of the random setup error. Conclusions: The margin from ITV to PTV is 1.2 times the random setup error for head-and-neck cancer and 1.5 times for thoracic and abdominal cancer. Field size, energy and target depth, unlike random setup error, have no relation to the size of the margin. (authors)
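
    The margin estimate reported here can be illustrated with a toy Monte Carlo sketch (entirely hypothetical geometry: a 1-D dose profile with a sigmoid penumbra stands in for the planned distribution). Random per-fraction shifts are accumulated over a course, the inward displacement of the 90% dose point is read off, and the 95th percentile over many simulated courses approximates the required margin.

      import numpy as np

      rng = np.random.default_rng(42)

      # Toy 1-D planned dose profile: flat inside the field, sigmoid fall-off at the edge.
      x = np.linspace(-60, 60, 2401)                 # position in mm
      edge = 30.0                                    # nominal field edge in mm
      dose = 1.0 / (1.0 + np.exp((np.abs(x) - edge) / 2.0))

      def shift_of_90pct_point(setup_sd_mm, n_fractions=30):
          """Inward shift of the 90% dose point after a course with random setup shifts."""
          shifts = rng.normal(0.0, setup_sd_mm, n_fractions)
          cumulative = np.mean([np.interp(x, x + s, dose) for s in shifts], axis=0)
          right = x[x > 0]
          prof = cumulative[x > 0] / cumulative[np.argmin(np.abs(x))]
          p90 = right[np.argmin(np.abs(prof - 0.9))]
          p90_planned = right[np.argmin(np.abs(dose[x > 0] - 0.9))]
          return p90_planned - p90

      for sd in (1, 2, 3, 4, 5, 6):
          samples = [shift_of_90pct_point(sd) for _ in range(100)]
          print(f"setup SD {sd} mm -> 95th-percentile shift of the 90% point: "
                f"{np.percentile(samples, 95):.2f} mm")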

  12. Analysis of patient setup accuracy using electronic portal imaging device

    International Nuclear Information System (INIS)

    Onogi, Yuzo; Aoki, Yukimasa; Nakagawa, Keiichi

    1996-01-01

    Radiation therapy is performed in many fractions, and accurate patient setup is very important. This is even more significant nowadays because treatment planning and radiation therapy are performed more precisely. Electronic portal imaging devices and automatic image comparison algorithms allow setup deviations to be analyzed quantitatively. With this in mind, we developed a simple image comparison algorithm. Using 2459 electronic verification images (335 ports, 123 treatment sites) generated during the past three years at our institute, we evaluated the results of the algorithm and analyzed setup deviations according to the area irradiated, the use of a fixing device (shell), and arm position. Calculated setup deviations were verified visually and their fitness was classified as good, fair, bad, or incomplete; the results were 40%, 14%, 22% and 24%, respectively. Using the calculated deviations classified as good (994 images), we analyzed setup deviations. Overall setup deviations (1 SD) along the x, y and z axes were 1.9 mm, 2.5 mm and 1.7 mm, respectively. We classified these deviations into systematic and random components and found that random error was predominant at our institute. The setup deviations along the y axis (cranio-caudal direction) showed a larger distribution when treatment was performed with the shell. Deviations along y (cranio-caudal) and z (anterior-posterior) had larger distributions when treatment occurred with the patient's arm elevated. There was a significant time-trend error, with deviations becoming greater with time; 30% of all evaluated ports showed a time-trend error. Using an electronic portal imaging device and an automatic image comparison algorithm, we are able to analyze setup deviations more precisely and improve the setup method based on objective criteria. (author)

  13. Machine-learning-based classification of real-time tissue elastography for hepatic fibrosis in patients with chronic hepatitis B.

    Science.gov (United States)

    Chen, Yang; Luo, Yan; Huang, Wei; Hu, Die; Zheng, Rong-Qin; Cong, Shu-Zhen; Meng, Fan-Kun; Yang, Hong; Lin, Hong-Jun; Sun, Yan; Wang, Xiu-Yan; Wu, Tao; Ren, Jie; Pei, Shu-Fang; Zheng, Ying; He, Yun; Hu, Yu; Yang, Na; Yan, Hongmei

    2017-10-01

    Hepatic fibrosis is a common middle stage of the pathological processes of chronic liver diseases. Clinical intervention during the early stages of hepatic fibrosis can slow the development of liver cirrhosis and reduce the risk of developing liver cancer. Performing a liver biopsy, the gold standard for viral liver disease management, has drawbacks such as invasiveness and a relatively high sampling error rate. Real-time tissue elastography (RTE), one of the most recently developed technologies, might be a promising imaging technology because it is both noninvasive and provides accurate assessments of hepatic fibrosis. However, determining the stage of liver fibrosis from RTE images in a clinic is a challenging task. In this study, in contrast to the previous liver fibrosis index (LFI) method, which predicts the stage of diagnosis using RTE images and multiple regression analysis, we employed four classical classifiers (i.e., Support Vector Machine, Naïve Bayes, Random Forest and K-Nearest Neighbor) to build a decision-support system to improve the hepatitis B stage diagnosis performance. Eleven RTE image features were obtained from 513 subjects who underwent liver biopsies in this multicenter collaborative research. The experimental results showed that the adopted classifiers significantly outperformed the LFI method and that the Random Forest (RF) classifier provided the highest average accuracy among the four machine-learning algorithms. This result suggests that sophisticated machine-learning methods can be powerful tools for evaluating the stage of hepatic fibrosis and show promise for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Perspective: Web-based machine learning models for real-time screening of thermoelectric materials properties

    Science.gov (United States)

    Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce

    2016-05-01

    The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.

  15. Real-time PCR Machine System Modeling and a Systematic Approach for the Robust Design of a Real-time PCR-on-a-Chip System

    Directory of Open Access Journals (Sweden)

    Da-Sheng Lee

    2010-01-01

    Full Text Available Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design.

  16. Lot-Order Assignment Applying Priority Rules for the Single-Machine Total Tardiness Scheduling with Nonnegative Time-Dependent Processing Times

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2015-01-01

    Full Text Available Lot-order assignment is the task of assigning items in lots being processed to orders in order to fulfill the orders. It is usually performed periodically to meet the due dates of orders, especially in a manufacturing industry with a long production cycle time such as the semiconductor manufacturing industry. In this paper, we consider the lot-order assignment problem (LOAP) with the objective of minimizing the total tardiness of the orders with distinct due dates. We show that we can solve the LOAP optimally by finding an optimal sequence for the single-machine total tardiness scheduling problem with nonnegative time-dependent processing times (SMTTSP-NNTDPT). Also, we address how the priority rules for the SMTTSP can be modified to those for the SMTTSP-NNTDPT to solve the LOAP. In computational experiments, we discuss the performances of the suggested priority rules and show that the proposed approach outperforms a commercial optimization software package.
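
    A small sketch of the kind of priority rule discussed here (hypothetical: the modified-due-date rule and the linear time-dependent processing times below are illustrative assumptions, not the paper's exact formulation). Jobs are sequenced greedily, at each step picking the job with the smallest modified due date, where a job's processing time depends on its start time.

      from dataclasses import dataclass

      @dataclass
      class Job:
          name: str
          base: float      # basic processing time
          rate: float      # nonnegative growth rate: p_j(t) = base + rate * t
          due: float

      def processing_time(job, start):
          return job.base + job.rate * start

      def schedule_mdd(jobs):
          """Greedy sequencing by modified due date max(due, start + p_j(start))."""
          t, sequence, total_tardiness = 0.0, [], 0.0
          remaining = list(jobs)
          while remaining:
              nxt = min(remaining,
                        key=lambda j: max(j.due, t + processing_time(j, t)))
              remaining.remove(nxt)
              t += processing_time(nxt, t)
              total_tardiness += max(0.0, t - nxt.due)
              sequence.append(nxt.name)
          return sequence, total_tardiness

      jobs = [Job("A", 4, 0.10, 10), Job("B", 2, 0.05, 6),
              Job("C", 6, 0.00, 18), Job("D", 3, 0.20, 8)]
      print(schedule_mdd(jobs))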

  17. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery.

    Science.gov (United States)

    Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong

    2017-07-01

    Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, exceeding the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models below 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
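
    The surrogate-model idea can be sketched as follows (purely illustrative: the load features, single-node displacements and the scikit-learn regressor are assumptions standing in for the paper's pre-computed FEM database and ANN/SVR models). A regressor trained on (load parameters -> node displacement) pairs replaces the FEM solve at run time.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.multioutput import MultiOutputRegressor

      rng = np.random.default_rng(7)

      # Stand-in for a database of FEM runs: inputs are (force magnitude, position, angle),
      # outputs are the 3-D displacement of one mesh node for each run.
      n_runs = 500
      loads = rng.uniform([0.0, -20.0, 0.0], [5.0, 20.0, np.pi], size=(n_runs, 3))
      disp = np.column_stack([
          0.05 * loads[:, 0] * np.cos(loads[:, 2]),
          0.05 * loads[:, 0] * np.sin(loads[:, 2]),
          0.002 * loads[:, 0] * np.abs(loads[:, 1]),
      ]) + rng.normal(0, 1e-3, (n_runs, 3))

      model = MultiOutputRegressor(SVR(C=10.0, epsilon=1e-3)).fit(loads[:400], disp[:400])
      pred = model.predict(loads[400:])
      print("mean positional error:", np.linalg.norm(pred - disp[400:], axis=1).mean())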

  18. Time series analytics using sliding window metaheuristic optimization-based machine learning system for identifying building energy consumption patterns

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ngo, Ngoc-Tri

    2016-01-01

    Highlights:
    • This study develops a novel time-series sliding window forecast system.
    • The system integrates metaheuristics, machine learning and time-series models.
    • Site experiment of smart grid infrastructure is installed to retrieve real-time data.
    • The proposed system accurately predicts energy consumption in residential buildings.
    • The forecasting system can help users minimize their electricity usage.
    Abstract: Smart grids are a promising solution to the rapidly growing power demand because they can considerably increase building energy efficiency. This study developed a novel time-series sliding window metaheuristic optimization-based machine learning system for predicting real-time building energy consumption data collected by a smart grid. The proposed system integrates a seasonal autoregressive integrated moving average (SARIMA) model and metaheuristic firefly algorithm-based least squares support vector regression (MetaFA-LSSVR) model. Specifically, the proposed system fits the SARIMA model to linear data components in the first stage, and the MetaFA-LSSVR model captures nonlinear data components in the second stage. Real-time data retrieved from an experimental smart grid installed in a building were used to evaluate the efficacy and effectiveness of the proposed system. A k-week sliding window approach is proposed for employing historical data as input for the novel time-series forecasting system. The prediction system yielded high and reliable accuracy rates in 1-day-ahead predictions of building energy consumption, with a total error rate of 1.181% and mean absolute error of 0.026 kW h. Notably, the system demonstrates an improved accuracy rate in the range of 36.8–113.2% relative to those of the linear forecasting model (i.e., SARIMA) and nonlinear forecasting models (i.e., LSSVR and MetaFA-LSSVR). Therefore, end users can further apply the forecasted information to enhance efficiency of energy usage in their buildings, especially

  19. Design of a real-time open architecture controller for a reconfigurable machine tool

    CSIR Research Space (South Africa)

    Masekamela, I

    2008-11-01

    Full Text Available The paper presents the design and the development of a real-time, open architecture controller that is used for control of reconfigurable manufacturing tools (RMTs) in reconfigurable manufacturing systems (RMS). The controller that is presented can...

  20. Determination of Operation Time Risk of Box Spinning Components - OE Spinning Machine

    OpenAIRE

    Slobodan Stefanovic

    2013-01-01

    Based on the constructed diagrams of the dependence of reliability on the exploitation operation time of each constituent component of the analyzed frame, in the case of the selected statistical distributions, the regions of operation exploitation and the repair intervals are determined. This is done by determining the first inflection points. Based on these points, an analysis is performed to determine the safe operation time of the frame components with allowable risk, using a segmental linear function of the intensity of failur...

  1. Set-up for steam generator tube bundle washing after explosion expanding the tubes

    International Nuclear Information System (INIS)

    Osipov, S.I.; Kal'nin, A.Ya.; Mazanenko, M.F.

    1985-01-01

    A set-up for steam generator tube bundle washing after the explosion expanding of tubes is described. Washing is accomplished with distillate. Steam is added to the distillate for heating, and compressed air is added to prevent hydraulic shock. The set-up is equipped with control equipment. The performance characteristics of the set-up are presented. Washing one steam generator takes 8-12 h. High economic efficiency is achieved by the introduction of the set-up

  2. Some relations between quantum Turing machines and Turing machines

    OpenAIRE

    Sicard, Andrés; Vélez, Mario

    1999-01-01

    For quantum Turing machines we present three elements: its components, its time evolution operator and its local transition function. The components are related to the components of deterministic Turing machines, the time evolution operator is related to the evolution of reversible Turing machines, and the local transition function is related to the transition functions of probabilistic and reversible Turing machines.

  3. Using a "time machine" to test for local adaptation of aquatic microbes to temporal and spatial environmental variation.

    Science.gov (United States)

    Fox, Jeremy W; Harder, Lawrence D

    2015-01-01

    Local adaptation occurs when different environments are dominated by different specialist genotypes, each of which is relatively fit in its local conditions and relatively unfit under other conditions. Analogously, ecological species sorting occurs when different environments are dominated by different competing species, each of which is relatively fit in its local conditions. The simplest theory predicts that spatial, but not temporal, environmental variation selects for local adaptation (or generates species sorting), but this prediction is difficult to test. Although organisms can be reciprocally transplanted among sites, doing so among times seems implausible. Here, we describe a reciprocal transplant experiment testing for local adaptation or species sorting of lake bacteria in response to both temporal and spatial variation in water chemistry. The experiment used a -80°C freezer as a "time machine." Bacterial isolates and water samples were frozen for later use, allowing transplantation of older isolates "forward in time" and newer isolates "backward in time." Surprisingly, local maladaptation predominated over local adaptation in both space and time. Such local maladaptation may indicate that adaptation, or the analogous species sorting process, fails to keep pace with temporal fluctuations in water chemistry. This hypothesis could be tested with more finely resolved temporal data. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  4. Radar Waveform Recognition Based on Time-Frequency Analysis and Artificial Bee Colony-Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Lutao Liu

    2018-04-01

    Full Text Available In this paper, a system for identifying eight kinds of radar waveforms is explored. The waveforms are binary phase shift keying (BPSK), Costas codes, linear frequency modulation (LFM) and polyphase codes (including P1, P2, P3, P4 and Frank codes). The features of power spectral density (PSD), moments and cumulants, instantaneous properties and time-frequency analysis are extracted from the waveforms, and three new features are proposed. The classifier is a support vector machine (SVM), which is optimized by the artificial bee colony (ABC) algorithm. The system shows good robustness, low computational complexity and a high recognition rate under low signal-to-noise ratio (SNR) conditions. The simulation results indicate that the overall recognition rate is 92% when the SNR is −4 dB.

  5. Electric Load Forecasting Based on a Least Squares Support Vector Machine with Fuzzy Time Series and Global Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yan Hong Chen

    2016-01-01

    Full Text Available This paper proposes a new electric load forecasting model by hybridizing the fuzzy time series (FTS) and global harmony search algorithm (GHSA) with least squares support vector machines (LSSVM), namely the GHSA-FTS-LSSVM model. Firstly, the fuzzy c-means clustering (FCS) algorithm is used to calculate the clustering center of each cluster. Secondly, the LSSVM is applied to model the resultant series, which is optimized by GHSA. Finally, a real-world example is adopted to test the performance of the proposed model. In this investigation, the proposed model is verified using experimental datasets from the Guangdong Province Industrial Development Database, and results are compared against the autoregressive integrated moving average (ARIMA) model and other algorithms hybridized with LSSVM, including the genetic algorithm (GA), particle swarm optimization (PSO), harmony search, and so on. The forecasting results indicate that the proposed GHSA-FTS-LSSVM model effectively generates more accurate predictive results.

  6. Single-molecule packaging initiation in real time by a viral DNA packaging machine from bacteriophage T4.

    Science.gov (United States)

    Vafabakhsh, Reza; Kondabagil, Kiran; Earnest, Tyler; Lee, Kyung Suk; Zhang, Zhihong; Dai, Li; Dahmen, Karin A; Rao, Venigalla B; Ha, Taekjip

    2014-10-21

    Viral DNA packaging motors are among the most powerful molecular motors known. A variety of structural, biochemical, and single-molecule biophysical approaches have been used to understand their mechanochemistry. However, packaging initiation has been difficult to analyze because of its transient and highly dynamic nature. Here, we developed a single-molecule fluorescence assay that allowed visualization of packaging initiation and reinitiation in real time and quantification of motor assembly and initiation kinetics. We observed that a single bacteriophage T4 packaging machine can package multiple DNA molecules in bursts of activity separated by long pauses, suggesting that it switches between active and quiescent states. Multiple initiation pathways were discovered including, unexpectedly, direct DNA binding to the capsid portal followed by recruitment of motor subunits. Rapid succession of ATP hydrolysis was essential for efficient initiation. These observations have implications for the evolution of icosahedral viruses and regulation of virus assembly.

  7. A real-time neutron-gamma discriminator based on the support vector machine method for the time-of-flight neutron spectrometer

    Science.gov (United States)

    Wei, ZHANG; Tongyu, WU; Bowen, ZHENG; Shiping, LI; Yipo, ZHANG; Zejie, YIN

    2018-04-01

    A new neutron-gamma discriminator based on the support vector machine (SVM) method is proposed to improve the performance of the time-of-flight neutron spectrometer. The neutron detector is an EJ-299-33 plastic scintillator with pulse-shape discrimination (PSD) properties. The SVM algorithm is implemented in a field programmable gate array (FPGA) to carry out real-time sifting of neutrons in neutron-gamma mixed radiation fields. This study compares the capability of the pulse gradient analysis method and the SVM method. The results show that the SVM discriminator provides a better discrimination accuracy of 99.1%. The accuracy and performance of the FPGA-based SVM discriminator have been evaluated in experiments; it achieves a figure of merit of 1.30.
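
    A toy sketch of SVM-based pulse-shape discrimination (illustrative only: the two charge-integration features and the synthetic pulse populations are assumptions; in the FPGA setting described above, essentially only the trained decision function is evaluated per pulse).

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)

      # Synthetic PSD features: total integrated charge and tail-to-total charge ratio.
      n = 2000
      gamma_tail = rng.normal(0.12, 0.02, n)           # gammas: small tail fraction
      neutron_tail = rng.normal(0.22, 0.03, n)         # neutrons: larger tail fraction
      charge = rng.uniform(0.1, 1.0, 2 * n)
      X = np.column_stack([charge, np.concatenate([gamma_tail, neutron_tail])])
      y = np.concatenate([np.zeros(n), np.ones(n)])    # 0 = gamma, 1 = neutron

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
      print("discrimination accuracy:", clf.score(Xte, yte))
      print("decision values for 5 test pulses:", clf.decision_function(Xte[:5]))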

  8. Time machine: scientists smash matter, hoping to elicit the earliest stuff

    CERN Multimedia

    Loft, Kurt

    2007-01-01

    "Can we travel back in time, or predict the future? Will we ever know what seeds nature first planted to nurture our world? Such deep thoughts fill the minds of particle physicists who delve into the darkest dimensions of matter, and they've been busy lately." (1 page)

  9. The Effect of the Rolling Direction, Temperature, and Etching Time on the Photochemical Machining of Monel 400 Microchannels

    Directory of Open Access Journals (Sweden)

    Deepakkumar H. Patil

    2016-01-01

    Full Text Available The present paper describes the effect of the rolling direction on the quality of microchannels manufactured using photochemical machining (PCM of Monel 400. Experiments were carried out to fabricate microchannels along and across the rolling direction to investigate the effect of the grain orientation on microchannel etching. The input parameters considered were channel width and rolling direction, whereas the depth of etch was the response parameters. Different channels of widths of 60, 100, 150, 200, and 250 μm were etched. The effects of the etching time and temperature of the etchant solution on the undercut and depth of the microchannels were studied. For good quality microchannels, the effects of spinning time, spinning speed, exposure time, and photoresist film strength were also taken into consideration. Optimized values of the above were used for the experimentation. The results show that the depth of etch of the microchannel increases more along the rolling direction than across the rolling direction. The channel width and depth are significantly affected by the etching time and temperature. The proposed study reports an improvement in the quality of microchannels produced using PCM.

  10. Remaining Useful Life Estimation using Time Trajectory Tracking and Support Vector Machines

    International Nuclear Information System (INIS)

    Galar, D; Kumar, U; Lee, J; Zhao, W

    2012-01-01

    In this paper, a novel RUL prediction method inspired by feature maps and SVM classifiers is proposed. Historical instances of a system with lifetime condition data are used to build a classification boundary with SVM hyperplanes. For a test instance of the same system, whose RUL is to be estimated, the degradation speed is evaluated by computing a minimal distance defined on the degradation trajectories, i.e., the approach of the system to the hyperplane that segregates good and bad condition data at different time horizons. The final RUL of a specific component can thus be estimated, and global RUL information can then be obtained by aggregating the multiple RUL estimates using a density estimation method.
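
    A minimal sketch of the hyperplane-distance idea (hypothetical features and degradation model, not the paper's implementation): a linear SVM separates "good" from "bad" condition data, the signed distance of a degrading test trajectory to the hyperplane is tracked over time, and a linear extrapolation of that distance to zero yields an RUL estimate.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)

      # Historical condition data: two features that drift upward as health degrades.
      good = rng.normal([0.2, 0.3], 0.05, size=(300, 2))
      bad = rng.normal([0.8, 0.9], 0.05, size=(300, 2))
      X = np.vstack([good, bad])
      y = np.r_[np.zeros(300), np.ones(300)]
      clf = SVC(kernel="linear").fit(X, y)

      # Condition trajectory of the test unit observed so far (one vector per time step).
      t = np.arange(20)
      trajectory = (np.column_stack([0.2 + 0.02 * t, 0.3 + 0.02 * t])
                    + rng.normal(0, 0.01, (20, 2)))

      # Signed distance to the separating hyperplane (negative on the "good" side).
      dist = clf.decision_function(trajectory)

      # Fit the degradation speed and extrapolate to the hyperplane crossing (distance = 0).
      slope, intercept = np.polyfit(t, dist, 1)
      rul = (-intercept / slope - t[-1]) if slope > 0 else np.inf
      print(f"estimated RUL: {rul:.1f} time steps")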

  11. Setup time reduction as a strategy to increase production capacity: a case study of a plastic bottle blowing machine

    Directory of Open Access Journals (Sweden)

    Teonas Bartz

    2012-01-01

    Full Text Available The increasing demand for the introduction of new products to serve all customers from different markets causes companies to seek new concepts in production planning and programming and in the changing of tools. Single-Minute Exchange of Die (SMED) reduces the setup time of equipment, minimizing non-productive periods. Thus, it is possible to reduce the size of production batches to increase operating rates and improve the flexibility, productivity and competitiveness of organizations. This paper presents the stages necessary for the implementation of SMED in a plastic bottle blower and reports the results obtained. To achieve this, we conducted an analysis of activities, suggested improvements in the machine and in procedures, timed the stages before and after introducing the improvements and analyzed the times obtained. The results showed a significant reduction in setup time for the machine in this study.

  12. Analysis of the Financial Times ranking "master in management" with machine learning

    OpenAIRE

    Jansen, Arthur

    2017-01-01

    University rankings nowadays play a major role in the decisions of many students with regard to their future schools. Nonetheless, these rankings often remain quite opaque: not all data are made available, the methodology behind the rankings is not well defined, etc. One of the main rankings centred on business schools is the "Master in Management" ranking of the Financial Times. This work aims to study the relevance of this ranking and its possible flaws. Several techniques are applied, such as a robu...

  13. Automated business process management – in times of digital transformation using machine learning or artificial intelligence

    OpenAIRE

    Paschek Daniel; Luminosu Caius Tudor; Draghici Anca

    2017-01-01

    The continuous optimization of business processes is still a challenge for companies. In times of digital transformation, with faster-changing internal and external framework conditions, new customer expectations for the fastest delivery and best quality of goods, and much more, companies should set up their internal processes in the best possible way. But what should be done if framework conditions change unexpectedly? The purpose of the paper is to analyse how the digital transformation will impact the Business Proc...

  14. Real-time electron density measurements from Cotton-Mouton effect in JET machine

    International Nuclear Information System (INIS)

    Brombin, M.; Boboc, A.; Zabeo, L.; Murari, A.

    2008-01-01

    Real-time density profile measurements are essential for advanced fusion tokamak operation, and interferometry is a proven method for this task. Nevertheless, as a consequence of edge localized modes, pellet injections, fast density increases, or disruptions, the interferometer is subject to fringe jumps, which produce loss of the signal, preventing reliable use of the measured density in a real-time feedback controller. An alternative method to measure the density is polarimetry based on the Cotton-Mouton effect, which is proportional to the line-integrated electron density. A new analysis approach has been implemented and tested to verify the reliability of the Cotton-Mouton measurements for a wide range of plasma parameters and to compare the density evaluated from polarimetry with that from interferometry. The density measurements based on polarimetry are going to be integrated into the real-time control system of JET, since the difference from interferometry is within one fringe for more than 90% of the cases.

  15. Techniques for optimizing human-machine information transfer related to real-time interactive display systems

    Science.gov (United States)

    Granaas, Michael M.; Rhea, Donald C.

    1989-01-01

    In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information has expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity with that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.

  16. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    Science.gov (United States)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of the northeast/post-monsoon rainfall which occurs during October, November and December (OND) over the Indian peninsula is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, it is found that both statistical and empirical methods are useful for long-range climatic projections.

  17. Fully automatic time-window selection using machine learning for global adjoint tomography

    Science.gov (United States)

    Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.

    2017-12-01

    Selecting time windows from seismograms such that the synthetic measurements (from simulations) and the measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected every day around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed fully automatic. The goal of intelligent window selection is to automatically select windows based on a learnt engine that is built upon a huge number of existing windows generated through the adjoint tomography project. We have formulated the automatic window selection problem as a classification problem. All possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from each window are its cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value. More features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a non-linear optimization problem. We use backward propagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors and use the mini-batch stochastic gradient method to iteratively optimize the MPNN. Numerical tests show that, with a careful selection of the training data and a sufficient amount of training data, we are able to train a robust neural network that is capable of detecting the waveforms in arbitrary earthquake data with negligible detection error.
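
    As a rough sketch of the classification setup (assumed details: the five window features named above, synthetic "expert" labels, and scikit-learn's MLPClassifier as a stand-in for the authors' multilayer perceptron trained with mini-batch stochastic gradient descent), each candidate window is reduced to a feature vector and the network predicts select / do-not-select.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(11)

      # Window -> [cross-correlation, time lag (s), amplitude ratio, length (s), min STA/LTA]
      n = 5000
      X = np.column_stack([
          rng.uniform(0.0, 1.0, n),        # cross-correlation value
          rng.uniform(-15.0, 15.0, n),     # cross-correlation time lag
          rng.uniform(0.2, 5.0, n),        # observed/synthetic amplitude ratio
          rng.uniform(20.0, 200.0, n),     # window length
          rng.uniform(0.5, 5.0, n),        # minimum STA/LTA
      ])
      # Synthetic labels: select windows with high correlation, small lag, balanced amplitude.
      y = ((X[:, 0] > 0.7) & (np.abs(X[:, 1]) < 5.0)
           & (np.abs(np.log(X[:, 2])) < 0.7)).astype(int)

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
      net.fit(Xtr, ytr)
      print("window selection accuracy:", net.score(Xte, yte))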

  18. Dynamic time warping and machine learning for signal quality assessment of pulsatile signals

    International Nuclear Information System (INIS)

    Li, Q; Clifford, G D

    2012-01-01

    In this work, we describe a beat-by-beat method for assessing the clinical utility of pulsatile waveforms, primarily recorded from cardiovascular blood volume or pressure changes, concentrating on the photoplethysmogram (PPG). Physiological blood flow is nonstationary, with pulses changing in height, width and morphology due to changes in heart rate, cardiac output, sensor type and hardware or software pre-processing requirements. Moreover, considerable inter-individual and sensor-location variability exists. Simple template matching methods are therefore inappropriate, and a patient-specific adaptive initialization is therefore required. We introduce dynamic time warping to stretch each beat to match a running template and combine it with several other features related to signal quality, including correlation and the percentage of the beat that appeared to be clipped. The features were then presented to a multi-layer perceptron neural network to learn the relationships between the parameters in the presence of good- and bad-quality pulses. An expert-labeled database of 1055 segments of PPG, each 6 s long, recorded from 104 separate critical care admissions during both normal and verified arrhythmic events, was used to train and test our algorithms. An accuracy of 97.5% on the training set and 95.2% on test set was found. The algorithm could be deployed as a stand-alone signal quality assessment algorithm for vetting the clinical utility of PPG traces or any similar quasi-periodic signal. (paper)
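
    The core dynamic-time-warping step can be written in a few lines; the sketch below is a generic textbook DTW with an absolute-difference point cost (the beat and template arrays are placeholders, and the full method in the paper combines this distance with correlation and clipping features fed to a neural network).

      import numpy as np

      def dtw_distance(beat, template):
          """Classic O(n*m) dynamic time warping distance between two 1-D sequences."""
          n, m = len(beat), len(template)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(beat[i - 1] - template[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                       cost[i, j - 1],      # deletion
                                       cost[i - 1, j - 1])  # match
          return cost[n, m]

      # Toy example: a running pulse template and one stretched, noisy beat.
      template = np.sin(np.pi * np.linspace(0, 1, 100)) ** 2
      beat = (np.sin(np.pi * np.linspace(0, 1, 130)) ** 2
              + 0.02 * np.random.default_rng(0).normal(size=130))
      print("DTW distance:", dtw_distance(beat, template))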

  19. Dynamic time warping and machine learning for signal quality assessment of pulsatile signals.

    Science.gov (United States)

    Li, Q; Clifford, G D

    2012-09-01

    In this work, we describe a beat-by-beat method for assessing the clinical utility of pulsatile waveforms, primarily recorded from cardiovascular blood volume or pressure changes, concentrating on the photoplethysmogram (PPG). Physiological blood flow is nonstationary, with pulses changing in height, width and morphology due to changes in heart rate, cardiac output, sensor type and hardware or software pre-processing requirements. Moreover, considerable inter-individual and sensor-location variability exists. Simple template matching methods are therefore inappropriate, and a patient-specific adaptive initialization is therefore required. We introduce dynamic time warping to stretch each beat to match a running template and combine it with several other features related to signal quality, including correlation and the percentage of the beat that appeared to be clipped. The features were then presented to a multi-layer perceptron neural network to learn the relationships between the parameters in the presence of good- and bad-quality pulses. An expert-labeled database of 1055 segments of PPG, each 6 s long, recorded from 104 separate critical care admissions during both normal and verified arrhythmic events, was used to train and test our algorithms. An accuracy of 97.5% on the training set and 95.2% on test set was found. The algorithm could be deployed as a stand-alone signal quality assessment algorithm for vetting the clinical utility of PPG traces or any similar quasi-periodic signal.

  20. The Evaluation of Efficiency of the Use of Machine Working Time in the Industrial Company - Case Study

    Science.gov (United States)

    Kardas, Edyta; Brožova, Silvie; Pustějovská, Pavlína; Jursová, Simona

    2017-12-01

    In the paper, the evaluation of the efficiency of the use of machines in a selected production company is presented. The OEE (Overall Equipment Effectiveness) method was used for the analysis. The selected company deals with the production of tapered roller bearings. The analysis of effectiveness was done for 17 automatic grinding lines working in the department of grinding rollers. The low level of machine efficiency was caused by problems with the availability of machines and devices. The causes of machine downtime on these lines were also analyzed. Three basic causes of downtime were identified: no kanban card, diamonding, and no operator. Ways to improve the use of these machines were suggested. The analysis takes into account the actual results from the production process and covers a period of one calendar year.
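
    For reference, the OEE figure used in this kind of analysis is the product of the availability, performance and quality rates; a short sketch with made-up shift figures (the numbers below are hypothetical, not the company's data) is:

      def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
          """Overall Equipment Effectiveness = Availability x Performance x Quality."""
          run_time = planned_time - downtime
          availability = run_time / planned_time
          performance = (ideal_cycle_time * total_count) / run_time
          quality = good_count / total_count
          return availability * performance * quality, availability, performance, quality

      # Hypothetical one-shift figures for a grinding line.
      value, a, p, q = oee(planned_time=480,        # minutes planned
                           downtime=75,             # e.g. no kanban card, diamonding, no operator
                           ideal_cycle_time=0.05,   # minutes per roller
                           total_count=6800,
                           good_count=6650)
      print(f"Availability={a:.2%}  Performance={p:.2%}  Quality={q:.2%}  OEE={value:.2%}")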

  1. THE EVALUATION OF EFFICIENCY OF THE USE OF MACHINE WORKING TIME IN THE INDUSTRIAL COMPANY – CASE STUDY

    Directory of Open Access Journals (Sweden)

    Edyta KARDAS

    2017-10-01

    Full Text Available In the paper, the evaluation of the efficiency of the use of machines in a selected production company is presented. The OEE (Overall Equipment Effectiveness) method was used for the analysis. The selected company deals with the production of tapered roller bearings. The analysis of effectiveness was done for 17 automatic grinding lines working in the department of grinding rollers. The low level of machine efficiency was caused by problems with the availability of machines and devices. The causes of machine downtime on these lines were also analyzed. Three basic causes of downtime were identified: no kanban card, diamonding, and no operator. Ways to improve the use of these machines were suggested. The analysis takes into account the actual results from the production process and covers a period of one calendar year.

  2. Cryogenic setup for trapped ion quantum computing.

    Science.gov (United States)

    Brandl, M F; van Mourik, M W; Postler, L; Nolf, A; Lakhmanskiy, K; Paiva, R R; Möller, S; Daniilidis, N; Häffner, H; Kaushal, V; Ruster, T; Warschburger, C; Kaufmann, H; Poschinger, U G; Schmidt-Kaler, F; Schindler, P; Monz, T; Blatt, R

    2016-11-01

    We report on the design of a cryogenic setup for trapped ion quantum computing containing a segmented surface electrode trap. The heat shield of our cryostat is designed to attenuate alternating magnetic field noise, resulting in a 120 dB reduction of 50 Hz noise along the magnetic field axis. We combine this efficient magnetic shielding with the high optical access required for single ion addressing as well as for efficient state detection by placing two lenses, each with numerical aperture 0.23, inside the inner heat shield. The cryostat design incorporates vibration isolation to avoid decoherence of optical qubits due to the motion of the cryostat. We measure vibrations of the cryostat of less than ±20 nm over 2 s. In addition to the cryogenic apparatus, we describe the setup required for operation with 40Ca+ and 88Sr+ ions. The instability of the laser manipulating the optical qubits in 40Ca+ is characterized, yielding a minimum Allan deviation of 2.4 × 10^-15 at 0.33 s. To evaluate the performance of the apparatus, we trapped 40Ca+ ions, obtaining a heating rate of 2.14(16) phonons/s and a Gaussian decay of the Ramsey contrast with a 1/e time of 18.2(8) ms.

  3. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    Science.gov (United States)

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of its limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real time, and up to 100 fps if video recordings are captured to be analyzed off-line later. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.

  4. Integrating support vector machines and random forests to classify crops in time series of Worldview-2 images

    Science.gov (United States)

    Zafari, A.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.

    2017-10-01

    Crop maps are essential inputs for the agricultural planning done at various governmental and agribusiness agencies. Remote sensing offers timely and cost-efficient technologies to identify and map crop types over large areas. Among the plethora of classification methods, Support Vector Machine (SVM) and Random Forest (RF) are widely used because of their proven performance. In this work, we study the synergic use of both methods by introducing a random forest kernel (RFK) in an SVM classifier. A time series of multispectral WorldView-2 images acquired over Mali (West Africa) in 2014 was used to develop our case study. Ground truth data containing five common crop classes (cotton, maize, millet, peanut, and sorghum) were collected at 45 farms and used to train and test the classifiers. An SVM with the standard Radial Basis Function (RBF) kernel, an RF, and an SVM-RFK were trained and tested over 10 random training and test subsets generated from the ground data. Results show that the newly proposed SVM-RFK classifier can compete with both RF and SVM-RBF. The overall accuracies based on the spectral bands only are 83, 82 and 83%, respectively. Adding vegetation indices to the analysis results in classification accuracies of 82, 81 and 84% for SVM-RFK, RF, and SVM-RBF, respectively. Overall, it can be observed that the newly tested RFK can compete with the SVM-RBF and RF classifiers in terms of classification accuracy.
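
    One common way to realize a random forest kernel (a sketch under the usual definition that two samples are similar when they land in the same leaf of a tree; the synthetic data and this particular construction are assumptions, not necessarily the authors' exact formulation) is to derive a kernel matrix from a fitted forest and pass it to an SVM with a precomputed kernel:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                                 n_classes=3, n_clusters_per_class=1, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      # Fit a forest and record, for every sample, the leaf it reaches in each tree.
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
      leaves_tr = rf.apply(Xtr)                 # shape (n_train, n_trees)
      leaves_te = rf.apply(Xte)

      def rf_kernel(leaves_a, leaves_b):
          """K[i, j] = fraction of trees in which samples i and j share a leaf."""
          return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

      svm_rfk = SVC(kernel="precomputed", C=1.0).fit(rf_kernel(leaves_tr, leaves_tr), ytr)
      print("SVM-RFK accuracy:", svm_rfk.score(rf_kernel(leaves_te, leaves_tr), yte))
      print("RF accuracy:     ", rf.score(Xte, yte))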

  5. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    Science.gov (United States)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    Apple is one of the most highly consumed fruits in daily life. However, due to its high potential for damage and the massive influence of damage on taste and export value, the quality of apples has to be assessed before they reach the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system was developed to acquire images of 3 samples from each camera and display the image processing results in real time. An image processing algorithm was developed using OpenCV and C++. The software is able to control the hardware system to classify apples into two grades based on the presence or absence of surface bruises of 5 mm size. The experimental results are promising, and with further modification the system can be applied to industrial production in the near future.
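
    A simplified version of the bruise-screening step might look like the sketch below (a hypothetical OpenCV 4.x example, not the authors' algorithm: it assumes a bright diffuse background, thresholds a monochrome image and flags dark blobs whose pixel area corresponds to roughly a 5 mm bruise).

      import cv2
      import numpy as np

      def has_bruise(gray, min_area_px=400, dark_thresh=90):
          """Return True if a dark region of at least min_area_px is found in the image."""
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          # Bruised tissue appears darker than sound tissue under the LED illumination.
          _, mask = cv2.threshold(blurred, dark_thresh, 255, cv2.THRESH_BINARY_INV)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          return any(cv2.contourArea(c) >= min_area_px for c in contours)

      # Toy image: bright background, apple disc, and a darker patch standing in for a bruise.
      img = np.full((480, 640), 220, np.uint8)
      cv2.circle(img, (320, 240), 180, 180, -1)
      cv2.ellipse(img, (380, 200), (18, 12), 0, 0, 360, 70, -1)

      print("grade:", "reject" if has_bruise(img) else "accept")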

  6. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application. A fair comparison of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using an extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range of 37-52%. Meanwhile, the Dingle model showed no significant difference compared with the Dumpala model.
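
    For readers unfamiliar with the classifier used as the common evaluation platform, the following is a minimal extreme learning machine sketch: a fixed random hidden layer followed by a least-squares output layer. The peak-model feature sets of Dumpala, Acir, Liu and Dingle are not reproduced; the features and labels below are synthetic.

        # Minimal extreme learning machine (ELM) classifier sketch. The
        # "peak vs. non-peak" features (amplitude, width, slope) and labels
        # here are synthetic placeholders.
        import numpy as np

        class ELMClassifier:
            def __init__(self, n_hidden=100, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                T = np.eye(y.max() + 1)[y]                 # one-hot targets
                H = self._hidden(X)
                self.beta = np.linalg.pinv(H) @ T          # least-squares output weights
                return self

            def predict(self, X):
                return (self._hidden(X) @ self.beta).argmax(axis=1)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(400, 3))                      # amplitude, width, slope
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
        elm = ELMClassifier(n_hidden=50).fit(X[:300], y[:300])
        print("test accuracy:", (elm.predict(X[300:]) == y[300:]).mean())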

  7. Design and operation of a setup with a camera and adjustable mirror to inspect the sense-wire planes of the Time Projection Chamber inside the MicroBooNE cryostat

    International Nuclear Information System (INIS)

    Carls, B.; James, C.C.; Kubinski, R.M.; Pordes, S.; Schukraft, A.; Horton-Smith, G.; Strauss, T.

    2015-01-01

    Detectors in particle physics, particularly those including cryogenic components, are often enclosed in vessels that do not provide any physical or visual access to the detectors themselves after installation. However, it can be desirable for experiments to visually inspect the inside of the vessel. The MicroBooNE cryostat hosts a TPC with sense-wire planes, which had to be inspected for damage such as breakage or sagging. This inspection was performed after the transportation of the vessel with the enclosed detector to its final location, but before filling with liquid argon. This paper describes an approach to viewing the inside of the MicroBooNE cryostat with a setup of a camera and a mirror inserted through one of its cryogenic service nozzles. The paper describes the camera and mirror chosen for the operation, the illumination, and the mechanical structure of the setup. It explains how the system was operated and demonstrates its performance.

  8. A real-time brain-machine interface combining motor target and trajectory intent using an optimal feedback control design.

    Directory of Open Access Journals (Sweden)

    Maryam M Shanechi

    Full Text Available Real-time brain-machine interfaces (BMIs) have focused on either estimating the continuous movement trajectory or the target intent. However, natural movement often incorporates both. Additionally, a BMI can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system.

  9. The Knife Machine. Module 15.

    Science.gov (United States)

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

    This module on the knife machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers one topic: performing special operations on the knife machine (a single needle or multi-needle machine which sews and cuts at the same time). These components are provided: an introduction, directions, an objective,…

  10. A new approach for solving capacitated lot sizing and scheduling problem with sequence and period-dependent setup costs

    Directory of Open Access Journals (Sweden)

    Imen Chaieb Memmi

    2013-09-01

    Full Text Available Purpose: We aim to examine the capacitated multi-item lot sizing problem, which is a typical example of a large bucket model, where many different items can be produced on the same machine in one time period. We propose a new approach to determine the production sequence and lot sizes that minimize the sum of start-up and setup costs, inventory and production costs over all periods. Design/methodology/approach: The approach is composed of three steps. First, we compute a lower bound on the total cost. Then we apply a three-sub-step iterative procedure. We solve the lot sizing problem optimally without considering product sequencing and its cost. Then, we determine the product quantities to produce each period while minimizing the storage and variable production costs. Given the products to manufacture each period, we determine the corresponding optimal product sequencing using a Branch and Bound algorithm. Given the sequences of products within each period, we evaluate the total start-up and setup cost and compare the total cost obtained to the lower bound on the total cost. If the gap reaches a prefixed value, we stop. Otherwise, we modify the results of the lot sizing problem. Findings and Originality/value: We show, using an illustrative example, that the difference between the total cost and its lower bound is only 10%. This gap depends on the significance of the inventory and production costs and the machine’s capacity. Comparing the approach we develop with a traditional one, we show that we manage to reduce the total cost by 30%. Research limitations/implications: Our model fits better to real-world situations where production systems run continuously. The model is applied for a limited number of part types and periods. Practical implications: Our approach determines the products to manufacture in each time period, their economic amounts, and their scheduling within each period. This outcome should help decision makers bearing expensive

  11. The ALICE time machine

    Directory of Open Access Journals (Sweden)

    Ferretti Alessandro

    2013-09-01

    Full Text Available According to the Big Bang theory, the Universe was once in an extremely hot and dense state which expanded rapidly. In such a state normal nuclear matter could not exist: it is believed that a few microseconds after the Big Bang the matter underwent a phase transition from a state called Quark-Gluon Plasma (QGP) to a hadron gas. Some of the unexplained features of the Universe could be explained by the properties of the QGP. One of the aims of the CERN LHC is to recreate (on a smaller scale) a QGP state, compressing and heating ordinary nuclear matter by means of ultrarelativistic heavy-ion collisions. The ALICE experiment at CERN is dedicated to the study of the medium produced in these collisions: in particular, the study of the heavy quarkonia suppression pattern can give a measure of the temperature reached in these collisions, helping us to understand how close we are getting to the conditions of the starting point of the Universe.

  12. Development of the pressure-time method as a relative and absolute method for low-head hydraulic machines

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Pontus [Poeyry SwedPower AB, Stockholm (Sweden); Cervantes, Michel [Luleaa Univ. of Technology, Luleaa (Sweden)

    2013-02-15

    The pressure-time method is an absolute method commonly used for flow measurements in power plants. The method determines the flow rate by measuring the pressure and estimating the losses between two sections of the penstock during a closure of the guide vanes. The method has limitations according to the IEC41 standard, which makes it difficult to use at Swedish plants, where the head is generally low. This means that there is limited experience and knowledge of this method in Sweden, where the Winter-Kennedy method is usually used. For several years, Luleaa University of Technology has worked actively on the development of the pressure-time method for low-head hydraulic machines, with encouraging results. The focus has been on decreasing the distance between the two measuring sections and on the evaluation of the viscous losses. Measurements were performed on a pipe test rig (D=0.3 m) in a laboratory under well-controlled conditions; a formulation accounting for time-dependent losses allowed the error to be decreased by up to 0.4%. The present work presents pressure-time measurements (with L=5 m) performed on a 10 MW Kaplan turbine, compared to transit-time flow measurements. The new formulation taking into account the unsteady losses allows a better estimation of the flow rate, to within 0.3%. As an alternative to the Winter-Kennedy method widely used in Sweden, the pressure-time method was also tested as a relative method by measuring the pressure between the free surface and a section in the penstock without knowing the exact geometry, i.e., the pipe factor. Such measurements may be simple to perform, as most inlet spiral casings have pressure taps. Furthermore, the viscous losses do not need to be accurately determined as long as they are handled similarly between the measurements. The pressure-time method may thus become an alternative to the Winter-Kennedy method.
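
    The flow-rate evaluation behind the pressure-time (Gibson) method can be sketched numerically as the integral of the differential pressure plus a loss estimate between the two measuring sections during the guide-vane closure, scaled by the pipe factor A/(rho*L). The pressure trace, loss model and leakage term below are placeholders, not data from the study.

        # Sketch of the pressure-time flow evaluation:
        #   Q = A/(rho*L) * integral(dp + losses) dt + q_leak
        # with dp(t) the differential pressure during guide-vane closure.
        import numpy as np

        A = 0.07        # cross-section area between the sections, m^2 (assumed)
        L = 5.0         # distance between measuring sections, m
        RHO = 1000.0    # water density, kg/m^3
        Q_LEAK = 0.0    # residual (leakage) flow after closure, m^3/s (assumed)

        t = np.linspace(0.0, 8.0, 4000)                        # s
        dp = 2.0e4 * np.exp(-((t - 3.0) / 1.2) ** 2)           # synthetic dp(t), Pa
        losses = 0.02 * dp                                     # crude loss estimate, Pa

        q = A / (RHO * L) * np.trapz(dp + losses, t) + Q_LEAK
        print(f"estimated flow rate: {q:.3f} m^3/s")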

  13. Evaluation of cleaning and disinfection performance of automatic washer disinfectors machines in programs presenting different cycle times and temperatures.

    Science.gov (United States)

    Bergo, Maria do Carmo Noronha Cominato

    2006-01-01

    Thermal washer-disinfectors represent a technology that has brought about great advantages, such as the establishment of protocols and standard operating procedures and a reduction in occupational risks of a biological and environmental nature. The efficacy of the cleaning and disinfection obtained by automatic washer-disinfector machines running programs with different times and temperatures, as determined by the different official agencies, was validated according to recommendations from ISO Standard 15883-1/1999 and HTM2030 (NHS Estates, 1997) for determining the Minimum Lethality and DAL, both theoretically and with thermocouples. To determine the cleaning efficacy, the Soil Test, Biotrace Pro-tect and the Protein Test Kit were used. The procedure to verify the count of viable microorganisms (CFU) was performed before and after the thermal disinfection. This article shows that the results are in compliance with the ISO and HTM Standards. The validation steps confirmed the high efficacy level of the medical washer-disinfectors. This protocol enabled the evaluation of the procedure based on evidence supported by scientific research, with the aim of supporting the Supply Center multi-professional personnel with information and the possibility of developing further research.

  14. Formula student suspension setup and laptime simulation tool

    NARCIS (Netherlands)

    van den Heuvel, E.; Besselink, I.J.M.; Nijmeijer, H.

    2013-01-01

    In motorsports, time is usually limited. With the use of dedicated tools for measuring wheel alignment, camber, ride heights, etc., setting up the car can be done quickly and consistently. With the setup sequence and tools described in this report, progress has been made in reducing the time it takes to set up the car.

  15. A structural property of the permutation flow shop production scheduling problem with setup times

    Directory of Open Access Journals (Sweden)

    João Vitor Moccellin

    2007-01-01

    Full Text Available This paper deals with the permutation flow shop scheduling problem with machine setup times separated from the job processing times. A structural property of the problem, identified from an investigation of its characteristics, is presented. The property provides an upper bound on the idle time of a machine between its setup and the start of job processing. Using this property, the original scheduling problem with the makespan-minimization criterion can be solved heuristically through an analogy with the asymmetric traveling salesman problem.
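
    A minimal sketch of the underlying schedule evaluation may help: in a permutation flow shop with setup times separated from (and anticipatory to) the processing times, a machine may be set up for the next job while that job is still being processed upstream. The data below is illustrative; the structural property and the traveling-salesman reformulation themselves are not reproduced here.

        # Makespan of a permutation schedule in a flow shop with separated,
        # anticipatory setup times: setup on machine m can start as soon as
        # the previous job leaves machine m, and processing starts once both
        # the setup and the job's upstream operation are finished.
        import numpy as np

        def makespan(sequence, p, s):
            """p[j][m], s[j][m]: processing and setup time of job j on machine m."""
            n_machines = p.shape[1]
            done = np.zeros(n_machines)              # completion time per machine
            for j in sequence:
                ready_upstream = 0.0
                for m in range(n_machines):
                    setup_done = done[m] + s[j, m]   # setup may start immediately
                    start = max(setup_done, ready_upstream)
                    done[m] = start + p[j, m]
                    ready_upstream = done[m]
            return done[-1]

        rng = np.random.default_rng(0)
        p = rng.integers(1, 10, size=(6, 3)).astype(float)   # 6 jobs, 3 machines
        s = rng.integers(1, 4, size=(6, 3)).astype(float)
        print("makespan of natural order:", makespan(range(6), p, s))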

  16. Experimental Setup to Characterize Bentonite Hydration Processes

    International Nuclear Information System (INIS)

    Bru, A.; Casero, D.; Pastor, J. M.

    2001-01-01

    We present an experimental setup to follow the hydration process of a bentonite. Clay samples, of 2 cm x 12 cm x 12 cm, were made and introduced into a Hele-Shaw cell with two PMM windows and two steel frames. In the hydration experiments, a fluid enters through an orifice in the frame, located both at the top and at the bottom of the cell, to perform hydration in both directions. To obtain uniform hydration, a diffuser is placed near the orifice. The volume influxes into the hydration cells are registered over time. The evolution of the developed interface was recorded on videotape. The video camera was fixed to a holder so that the vertical direction on the monitor was the same as the direction of the larger extension of the cell. (Author) 6 refs

  17. PC based manual and safety logic card test setup for 235 MWe PHWRs

    International Nuclear Information System (INIS)

    Chandgadkar, G.M.; Kohli, A.K.; Agarwal, R.G.; Chandra, Rajesh

    1992-01-01

    Fuel handling controls for the 235 MWe PHWR make use of Manual and Logic Cards (MLCs) for providing safety interlocks. These cards consist of various types of logic blocks. By connecting these logic blocks, all the safety interlocks required for the fuel handling controls have been provided. Previously, troubleshooting of these cards was done by means of a logic probe. Since the method was manual, it was laborious and time consuming. The PC-based test setup has overcome this drawback and detects faults at the component level within a few seconds. It also prints out the status of faulty MLC cards. A motherboard has been designed with slots for insertion of the MLC cards. The input/output connections of these cards are brought out to two 50-pin FRC connectors. The PC communicates with the MLC card under test through a 144-line digital input/output card. The software is user friendly; it applies suitable input patterns to the card under test, reads the resulting output pattern, compares it with the expected pattern, detects the fault and displays the symptoms. This system is currently in use at the test facility for the fuelling machine for the 235 MWe PHWR reactor at the Refuelling Technology Division, Hall-7. This test setup has been proposed for use at NAPP and future reactors. (author). 4 figs., 1 annexure

  18. Optimization of line configuration and balancing for flexible machining lines

    Science.gov (United States)

    Liu, Xuemei; Li, Aiping; Chen, Zurui

    2016-05-01

    Line configuration and balancing is the selection of the type of line and the allotment of a given set of operations, as well as machines, to a sequence of workstations to realize high-efficiency production. Most of the current research on machining line configuration and balancing problems is related to dedicated transfer lines with dedicated machine workstations. With growing trends towards great product variety and fluctuations in market demand, dedicated transfer lines are being replaced by flexible machining lines composed of identical CNC machines. This paper deals with the line configuration and balancing problem for flexible machining lines. The objective is to assign operations to workstations, find the sequence of execution, and specify the number of machines in each workstation, while minimizing the line cycle time and the total number of machines. The problem is subject to precedence, clustering, accessibility and capacity constraints among the features, operations, setups and workstations. A mathematical model and a heuristic algorithm based on a feature group strategy and polychromatic sets theory are presented to find an optimal solution. The feature group strategy and polychromatic sets theory are used to establish the constraint model. A heuristic operations sequencing and assignment algorithm is given. An industrial case study is carried out, and multiple optimal solutions in different line configurations are obtained. The case study results show that the solutions with shorter cycle time and higher line balancing rate demonstrate the feasibility and effectiveness of the proposed algorithm. This research proposes a heuristic line configuration and balancing algorithm based on a feature group strategy and polychromatic sets theory which is able to provide better solutions while achieving an improvement in computing time.

  19. Development of group setup strategies for makespan minimization in PCB assembly

    DEFF Research Database (Denmark)

    Yilmaz, I.O.; Grunow, M.; Günther, H.-O.

    2007-01-01

    of the component magazine into account. We demonstrate the effectiveness of our approach in an extensive numerical investigation of a single-gantry collect-and-place machine equipped with a rotary placement head and an interchangeable feeder trolley. Compared to conventional methodologies, the proposed group setup...

  20. Inventory control with multiple setup costs

    NARCIS (Netherlands)

    Alp, O.; Huh, W.T.; Tan, T.

    2014-01-01

    We consider an infinite-horizon, periodic-review, single-item production/inventory system with random demand and backordering, where multiple setups are allowed in any period and a separate fixed cost is associated for each setup. Contrary to the majority of the literature on this topic, we do not

  1. Application of virtual machine technology to real-time mapping of Thomson scattering data to flux coordinates for the LHD

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yoshida, Masanobu; Suzuki, Chihiro; Suzuki, Yasuhiro; Ida, Katsumi; Nagayama, Yoshio; Akiyama, Tsuyoshi; Kawahata, Kazuo; Narihara, Kazumichi; Tokuzawa, Tokihiko; Yamada, Ichihiro

    2012-01-01

    Highlights: ► We have developed a system for mapping the electron temperature profile to flux coordinates. ► To increase performance, multiple virtual machines are used. ► Virtual machine technology is flexible when increasing the number of computers. - Abstract: This paper presents a system called “TSMAP” that maps electron temperature profiles to flux coordinates for the Large Helical Device (LHD). Considering that the flux surfaces are isothermal, TSMAP searches an equilibrium database for the LHD equilibrium that fits the electron temperature profile. The equilibrium database is built through many VMEC computations of helical equilibria. Because the number of equilibria is large, the most important technical issue in realizing the TSMAP system is computational performance. Therefore, we use multiple personal computers to enhance performance when building the database for TSMAP. We use virtual machines on multiple Linux computers to run the TSMAP program. Virtual machine technology is flexible, allowing the number of computers to be easily increased. This paper discusses how the use of virtual machine technology enhances the performance of TSMAP calculations when multiple CPU cores are used.

  2. A modified genetic algorithm for time and cost optimization of an additive manufacturing single-machine scheduling

    Directory of Open Access Journals (Sweden)

    M. Fera

    2018-09-01

    Full Text Available Additive Manufacturing (AM) is a process of joining materials to make objects from 3D model data, usually layer by layer, as opposed to subtractive manufacturing methodologies. Selective Laser Melting, commercially known as Direct Metal Laser Sintering (DMLS®), is the most widely used additive process in today’s manufacturing industry. The introduction of a DMLS® machine in a production department has remarkable effects not only on industrial design but also on production planning, for example on machine scheduling. Scheduling for a traditional single machine can employ consolidated models. Scheduling of an AM machine presents new issues because it must consider the capability of producing different geometries simultaneously. The aim of this paper is to provide a mathematical model for AM/SLM machine scheduling. The model is NP-hard, so solutions must be found by metaheuristic algorithms, e.g., Genetic Algorithms. Genetic Algorithms solve sequential optimization problems by handling vectors; in the present paper, we modify them to handle a matrix. The effectiveness of the proposed algorithms is tested on a test case formed by a production plan of 30 part numbers with high variability in complexity, distinct due dates and low production volumes.
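
    The paper's contribution is a matrix chromosome that also groups parts into AM build jobs; that encoding is not reproduced here. The sketch below shows only the standard permutation-GA machinery (order crossover, swap mutation, truncation selection) on a toy single-machine total-tardiness instance, to make the metaheuristic ingredients concrete.

        # Toy permutation GA: order crossover + swap mutation, minimizing
        # total tardiness on a single machine. Processing times and due
        # dates are illustrative only.
        import random

        random.seed(0)
        proc = [4, 3, 7, 2, 5, 6, 1, 8]
        due  = [8, 6, 20, 4, 15, 18, 3, 30]

        def total_tardiness(seq):
            t, tard = 0, 0
            for j in seq:
                t += proc[j]
                tard += max(0, t - due[j])
            return tard

        def order_crossover(a, b):
            i, j = sorted(random.sample(range(len(a)), 2))
            child = [None] * len(a)
            child[i:j] = a[i:j]
            fill = [g for g in b if g not in child]
            for k in range(len(a)):
                if child[k] is None:
                    child[k] = fill.pop(0)
            return child

        def mutate(seq, p=0.2):
            if random.random() < p:
                i, j = random.sample(range(len(seq)), 2)
                seq[i], seq[j] = seq[j], seq[i]
            return seq

        pop = [random.sample(range(len(proc)), len(proc)) for _ in range(30)]
        for _ in range(200):
            pop.sort(key=total_tardiness)
            parents = pop[:10]
            pop = parents + [mutate(order_crossover(*random.sample(parents, 2)))
                             for _ in range(20)]
        best = min(pop, key=total_tardiness)
        print(best, total_tardiness(best))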

  3. Human-machine analytics for closed-loop sense-making in time-dominant cyber defense problems

    Science.gov (United States)

    Henry, Matthew H.

    2017-05-01

    Many defense problems are time-dominant: attacks progress at speeds that outpace human-centric systems designed for monitoring and response. Despite this shortcoming, these well-honed and ostensibly reliable systems pervade most domains, including cyberspace. The argument that often prevails when considering the automation of defense is that while technological systems are suitable for simple, well-defined tasks, only humans possess sufficiently nuanced understanding of problems to act appropriately under complicated circumstances. While this perspective is founded in verifiable truths, it does not account for a middle ground in which human-managed technological capabilities extend well into the territory of complex reasoning, thereby automating more nuanced sense-making and dramatically increasing the speed at which it can be applied. Snort and platforms like it enable humans to build, refine, and deploy sense-making tools for network defense. Shortcomings of these platforms include a reliance on rule-based logic, which confounds analyst knowledge of how bad actors behave with the means by which bad behaviors can be detected, and a lack of feedback-informed automation of sensor deployment. We propose an approach in which human-specified computational models hypothesize bad behaviors independent of indicators and then allocate sensors to estimate and forecast the state of an intrusion. State estimates and forecasts inform the proactive deployment of additional sensors and detection logic, thereby closing the sense-making loop. All the while, humans are on the loop, rather than in it, permitting nuanced management of fast-acting automated measurement, detection, and inference engines. This paper motivates and conceptualizes analytics to facilitate this human-machine partnership.

  4. SU-E-T-373: A Motorized Stage for Fast and Accurate QA of Machine Isocenter

    International Nuclear Information System (INIS)

    Moore, J; Velarde, E; Wong, J

    2014-01-01

    Purpose: Precision delivery of radiation dose relies on accurate knowledge of the machine isocenter under a variety of machine motions. This is typically determined by performing a Winston-Lutz test, which consists of imaging a known object at multiple gantry/collimator/table angles and ensuring that the maximum offset is within the specified tolerance. The first step in the Winston-Lutz test is careful placement of a ball bearing (BB) at the machine isocenter, determined by repeated imaging and shifting until accurate placement is achieved. Conventionally this is performed by adjusting a stage manually using vernier scales, which carry the limitation that each adjustment must be done inside the treatment room, with the risks of inaccurate adjustment of the scale and physical bumping of the table. It is proposed to use a motorized system controlled from outside the room to reduce the time and improve the accuracy of these tests. Methods: The three-dimensional vernier scales are replaced by three motors with an accuracy of 1 micron and a range of 25.4 mm, connected via USB to a computer in the control room. Software was designed which automatically detects the motors, assigns them to the proper axes, and allows small shifts to be entered and performed. Input values match the calculated offsets in magnitude and sign to reduce conversion errors. Speed of setup, number of iterations to setup, and accuracy of final placement were assessed. Results: Automatic BB placement required 2.25 iterations and 13 minutes on average, while manual placement required 3.76 iterations and 37.5 minutes. The average final XYZ offsets are 0.02 cm, 0.01 cm, 0.04 cm for automatic setup and 0.04 cm, 0.02 cm, 0.04 cm for manual setup. Conclusion: Automatic placement decreased the time and repeat iterations for setup while improving placement accuracy, and greatly reduces the time required to perform QA.

  5. Eye-in-Hand Manipulation for Remote Handling: Experimental Setup

    Science.gov (United States)

    Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador

    2018-03-01

    A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER) is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture, equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. The setup benefits from a stereovision system for accurate tracking of target objects with irregular shapes. The overall setup successfully demonstrates the robustness and precision that remote handling tasks require.

  6. MATLAB simulation for an experimental setup of digital feedback control

    International Nuclear Information System (INIS)

    Zheng Lifang; Liu Songqiang

    2005-01-01

    This paper describes the digital feedback simulation using MATLAB for an experimental accelerator control setup. By analyzing the plant characteristics in the time domain and frequency domain, a guideline for the design of the digital filter and PID controller is derived. (authors)

  7. Construction of Multi-Year Time-Series Profiles of Suspended Particulate Inorganic Matter Concentrations Using Machine Learning Approach

    Directory of Open Access Journals (Sweden)

    Pannimpullath R. Renosh

    2017-12-01

    Full Text Available Hydro-sedimentary numerical models have been widely employed to derive suspended particulate matter (SPM) concentrations in coastal and estuarine waters. These hydro-sedimentary models are computationally and technically expensive in nature. Here we have used a computationally less expensive, well-established methodology of self-organizing maps (SOMs) along with a hidden Markov model (HMM) to derive profiles of suspended particulate inorganic matter (SPIM). The concept of the proposed work is to benefit from all available data sets through the use of fusion methods and machine learning approaches that are able to process a growing amount of available data. This approach is applied to two different data sets, entitled “Hidden” and “Observable”. The hidden data are composed of 15 months (27 September 2007 to 30 December 2008) of hourly SPIM profiles extracted from the Regional Ocean Modeling System (ROMS). The observable data include forcing parameter variables such as significant wave heights (Hs and Hs50, computed over 50 days) from the Wavewatch 3-HOMERE database and barotropic currents (Ubar and Vbar) from the Iberian–Biscay–Irish (IBI) reanalysis data. These observable data comprise hourly surface samples from 1 February 2002 to 31 December 2012. The time-series profiles of the SPIM have been derived for four different stations in the English Channel by considering 15 months of output hidden data from the ROMS as a statistical representation of the ocean for approximately 11 years. The derived SPIM profiles clearly show seasonal and tidal fluctuations in accordance with the parent numerical model output. The surface SPIM concentrations of the derived model have been validated with satellite remote sensing data. The time series of the modeled SPIM and the satellite-derived SPIM show similar seasonal fluctuations. The ranges of concentrations for the four stations are also in good agreement with the corresponding satellite data. The high accuracy of the

  8. Nanocomposites for Machining Tools

    Directory of Open Access Journals (Sweden)

    Daria Sidorenko

    2017-10-01

    Full Text Available Machining tools are used in many areas of production. To a considerable extent, the performance characteristics of the tools determine the quality and cost of obtained products. The main materials used for producing machining tools are steel, cemented carbides, ceramics and superhard materials. A promising way to improve the performance characteristics of these materials is to design new nanocomposites based on them. The application of micromechanical modeling during the elaboration of composite materials for machining tools can reduce the financial and time costs for development of new tools, with enhanced performance. This article reviews the main groups of nanocomposites for machining tools and their performance.

  9. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview of current sustainable machining. Its chapters cover the concept in its economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful at both undergraduate and postgraduate levels and is of interest to all those working with manufacturing and machining technology.

  10. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    Science.gov (United States)

    Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.

    2016-02-01

    In the manufacturing industry, several identical parts can be processed in batches, and a setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period, due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize the total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, the integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%.
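
    The exact learning and deterioration functions are not given in the abstract, so the following is a heavily simplified sketch of how one candidate solution (a set of integer batch sizes) could be evaluated: unit times are scaled by a positional learning factor and a start-time-dependent deterioration factor, batches are separated by a setup time, and the total actual flow time is accumulated against a common due date, assuming parts arrive when their batch starts processing.

        # Simplified evaluation of a batching decision under assumed learning
        # and deterioration effects; the functional forms and the arrival
        # convention are illustrative, not the paper's exact model.
        def total_actual_flow_time(batch_sizes, unit_time=1.0, setup=2.0,
                                   learn=0.95, deteriorate=0.002, due_date=None):
            t = 0.0
            starts = []
            for i, size in enumerate(batch_sizes):
                t += setup                                   # setup before each batch
                starts.append(t)
                # learning lowers the unit time with batch position,
                # deterioration raises it with the batch start time
                p_unit = unit_time * (learn ** i) * (1.0 + deteriorate * t)
                t += size * p_unit
            if due_date is None:
                due_date = t                                 # deliver at the makespan
            # actual flow time of a part: from the start of its batch to the due date
            return sum(size * (due_date - s) for size, s in zip(batch_sizes, starts))

        print(total_actual_flow_time([4, 3, 3]))
        print(total_actual_flow_time([10]))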

  11. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    International Nuclear Information System (INIS)

    Yusriski, R; Sukoyo; Samadhi, T M A A; Halim, A H

    2016-01-01

    In the manufacturing industry, several identical parts can be processed in batches, and a setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period, due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize the total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, the integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%. (paper)

  12. Statistical analysis of simulation-generated time series : Systolic vs. semi-systolic correlation on the Connection Machine

    NARCIS (Netherlands)

    Dontje, T.; Lippert, Th.; Petkov, N.; Schilling, K.

    1992-01-01

    Autocorrelation is becoming an increasingly important tool to verify improvements in the state of the simulational art in Lattice Gauge Theory. Semi-systolic and full-systolic algorithms are presented which are intensively used for correlation computations on the Connection Machine CM-2. The

  13. Scheduling by positional completion times: analysis of a two-stage flow shop problem with a batching machine

    NARCIS (Netherlands)

    Hoogeveen, J.A.; Velde, van de S.L.

    1998-01-01

    We consider a scheduling problem introduced by Ahmadi et al., Batching and scheduling jobs on batch and discrete processors, Operation Research 40 (1992) 750–763, in which each job has to be prepared before it can be processed. The preparation is performed by a batching machine; it can prepare at

  14. Digital setup for Doppler broadening spectroscopy

    International Nuclear Information System (INIS)

    Cizek, J; Vlcek, M; Prochazka, I

    2011-01-01

    A new digital spectrometer for measurement of the Doppler shift of annihilation photons was developed and tested in this work. The digital spectrometer uses a fast 12-bit digitizer for direct sampling of signals from HPGe detectors. Analysis of the sampled waveforms is performed off-line in software. The performance of the new digital setup was compared with that of its traditional analogue counterpart. Superior energy resolution was achieved with the digital setup. Moreover, the digital setup allows better control of the shape of the detector signals. This makes it possible to eliminate undesired signals damaged by pile-up effects or by ballistic deficit.

  15. 3D CT cerebral angiography technique using a 320-detector machine with a time–density curve and low contrast medium volume: Comparison with fixed time delay technique

    International Nuclear Information System (INIS)

    Das, K.; Biswas, S.; Roughley, S.; Bhojak, M.; Niven, S.

    2014-01-01

    Aim: To describe a cerebral computed tomography angiography (CTA) technique using a 320-detector CT machine and a small contrast medium volume (35 ml, plus 15 ml for the test bolus). Also, to compare the quality of these images with that of images acquired using a larger contrast medium volume (90 or 120 ml) and a fixed time delay (FTD) of 18 s using a 16-detector CT machine. Materials and methods: Cerebral CTA images were acquired using a 320-detector machine by synchronizing the scanning time with the time of peak enhancement as determined from the time–density curve (TDC) using a test bolus dose. The quality of CTA images acquired using this technique was compared retrospectively with that obtained using an FTD of 18 s (by 16-detector CT). Average densities in four different intracranial arteries, overall opacification of arteries, and the degree of venous contamination were graded and compared. Results: Thirty-eight patients were scanned using the TDC technique and 40 patients using the FTD technique. The arterial densities achieved by the TDC technique were higher (significant for supraclinoid and basilar arteries, p < 0.05). The proportion of images deemed to have “good” arterial opacification was 95% for TDC and 90% for FTD. The degree of venous contamination was significantly higher in images produced by the FTD technique (p < 0.001). Conclusion: Good diagnostic quality CTA images with a significant reduction of venous contamination can be achieved with a low contrast medium dose using a 320-detector machine by coupling the time of data acquisition with the time of peak enhancement.
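
    The scan-timing step can be sketched as follows: the test-bolus time-density curve is smoothed and the time of its maximum is taken as the peak-enhancement delay for the main acquisition. The sampled HU values below are illustrative only.

        # Sketch of extracting the peak-enhancement time from a test-bolus TDC.
        import numpy as np

        t = np.arange(0, 30, 2.0)                                   # s, test-bolus scans
        hu = np.array([35, 36, 40, 60, 110, 180, 230, 250, 235,
                       200, 160, 125, 100, 85, 75], dtype=float)    # arterial enhancement

        kernel = np.ones(3) / 3
        hu_smooth = np.convolve(hu, kernel, mode="same")            # light smoothing
        t_peak = t[np.argmax(hu_smooth)]
        print(f"peak enhancement at {t_peak:.0f} s; trigger the main scan at this delay")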

  16. Development of Data Acquisition Set-up for Steady-state Experiments

    Science.gov (United States)

    Srivastava, Amit K.; Gupta, Arnab D.; Sunil, S.; Khan, Ziauddin

    2017-04-01

    For short-duration experiments, digitized data is generally transferred for processing and storage after the experiment, whereas in a steady-state experiment the data is acquired, processed, displayed and stored continuously in a pipelined manner. This requires acquiring data through special techniques for storage and viewing it on the go to display the current trends of the various physical parameters. A small data acquisition set-up was developed for continuously acquiring signals from various physical parameters at different sampling rates for long-duration experiments. This includes the hardware set-up for signal digitization, a Field Programmable Gate Array (FPGA) based timing system for clock synchronization and event/trigger distribution, time slicing of data streams into chunks to enable viewing of data during acquisition, and channel profile display through down-sampling. In order to store a long data stream of indefinite duration, the data stream is divided into slices/chunks of user-defined time duration. Data chunks avoid the problem of the server data being inaccessible until the channel data file is closed at the end of a long-duration experiment. A graphical user interface has been developed in the LabVIEW application development environment for configuring the data acquisition hardware and storing data chunks on the local machine as well as on a remote data server through Python for further data access. Data plotting and analysis utilities have been developed with Python software, which provides tools for further data processing. This paper describes the development and implementation of data acquisition for steady-state experiments.
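
    A minimal sketch of the time-slicing idea described above: the acquired stream is flushed to a new file every few seconds so that already-written chunks can be read and plotted while acquisition continues. The acquisition function, chunk length and file naming are placeholders for the actual hardware interface and server transfer.

        # Chunked storage of a continuous acquisition stream (illustrative only).
        import time
        import numpy as np

        def acquire_block(n_samples=1000):
            """Stand-in for reading one block of digitized samples from hardware."""
            return np.random.normal(size=n_samples)

        def run_acquisition(duration_s=6, chunk_seconds=2, rate_hz=1000):
            chunk, chunk_start, chunk_id = [], time.time(), 0
            t_end = chunk_start + duration_s
            while time.time() < t_end:
                chunk.append(acquire_block(rate_hz))
                time.sleep(1.0)                  # emulate a blocking ~1 s hardware read
                if time.time() - chunk_start >= chunk_seconds:
                    np.save(f"channel01_chunk{chunk_id:04d}.npy", np.concatenate(chunk))
                    chunk, chunk_start, chunk_id = [], time.time(), chunk_id + 1
            if chunk:                            # flush the last partial chunk
                np.save(f"channel01_chunk{chunk_id:04d}.npy", np.concatenate(chunk))

        run_acquisition()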

  17. Sine-Bar Attachment For Machine Tools

    Science.gov (United States)

    Mann, Franklin D.

    1988-01-01

    A sine-bar attachment for collets, spindles, and chucks helps machinists set up quickly for angular cuts that require greater precision than is provided by the graduations of machine tools. The machinist uses the attachment to index the head or carriage of a milling machine or lathe relative to the table or turning axis of the tool. The attachment is accurate to 1 minute of arc, depending on the length of the sine bar and the precision of the gauge blocks in the setup. It installs quickly and easily on almost any type of lathe or mill, requires no special clamps or fixtures, and eliminates many trial-and-error measurements. It is also more stable than improvised setups and is not readily jarred out of position.
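
    The relation a machinist relies on when building such a setup is the sine-bar formula: a gauge-block stack of height h tilts a sine bar of length L to the angle theta with h = L*sin(theta). The values below are a worked example, not figures from the record.

        # Sine-bar setup arithmetic: h = L * sin(theta).
        import math

        def stack_height(angle_deg, bar_length=5.0):
            """Gauge-block stack (same units as bar_length) for a desired angle."""
            return bar_length * math.sin(math.radians(angle_deg))

        def achieved_angle(height, bar_length=5.0):
            return math.degrees(math.asin(height / bar_length))

        h = stack_height(12.5, bar_length=5.0)       # 5 in sine bar, 12.5 degree cut
        print(f"stack height: {h:.4f} in")
        print(f"angle from a 1.0823 in stack: {achieved_angle(1.0823):.3f} deg")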

  18. A setup for active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2006-01-01

    A setup for active fault diagnosis (AFD) of parametric faults in dynamic systems is formulated in this paper. It is shown that it is possible to use the same setup for both open loop systems, closed loop systems based on a nominal feedback controller as well as for closed loop systems based...... on a reconfigured feedback controller. This will make the proposed AFD approach very useful in connection with fault tolerant control (FTC). The setup will make it possible to let the fault diagnosis part of the fault tolerant controller remain unchanged after a change in the feedback controller. The setup for AFD...... is based on the YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all stabilizing feedback controllers and the dual YJBK parameterization. It is shown that the AFD is based directly on the dual YJBK transfer function matrix. This matrix will be named the fault signature matrix when...

  19. Scintillation forward spectrometer of the SPHERE setup

    International Nuclear Information System (INIS)

    Anisimov, Yu.S.; Afanas'ev, S.V.; Bondarev, V.K.

    1991-01-01

    The construction of the forward spectrometer for the 4π SPHERE setup to study multiple production of particles in nucleus-nucleus interactions is described. The measured parameters of the spectrometer detectors are presented. 7 refs.; 14 figs.; 1 tab

  20. Assessment of loaded squat jump height with a free-weight barbell and Smith machine: comparison of the take-off velocity and flight time procedures.

    Science.gov (United States)

    Pérez-Castilla, Alejandro; McMahon, John J; Comfort, Paul; García-Ramos, Amador

    2017-07-31

    The aims of this study were to compare the reliability and magnitude of jump height between the two standard procedures for analysing force platform data to estimate jump height (take-off velocity [TOV] and flight time [FT]) in the loaded squat jump (SJ) exercise performed with a free-weight barbell and in a Smith machine. Twenty-three collegiate men (age 23.1 ± 3.2 years, body mass 74.7 ± 7.3 kg, height 177.1 ± 7.0 cm) were tested twice for each SJ type (free-weight barbell and Smith machine) with 17, 30, 45, 60, and 75 kg loads. No substantial differences in reliability were observed between the TOV (coefficient of variation [CV]: 9.88%; intraclass correlation coefficient [ICC]: 0.82) and FT (CV: 8.68%; ICC: 0.88) procedures (CV ratio: 1.14), while the Smith machine SJ (CV: 7.74%; ICC: 0.87) revealed higher reliability than the free-weight SJ (CV: 9.88%; ICC: 0.81) (CV ratio: 1.28). The TOV procedure provided higher magnitudes of jump height than the FT procedure for the loaded Smith machine SJ (systematic bias: 2.64 cm; P<0.05), but not for the free-weight SJ exercise (systematic bias: 0.26 cm; P>0.05). Heteroscedasticity of the errors was observed for the Smith machine SJ (r: 0.177), with increasing differences in favour of the TOV procedure for the trials with lower jump height (i.e. higher external loads). Based on these results, the use of a Smith machine in conjunction with the FT procedure more accurately determines jump height during the loaded SJ.
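
    The two force-platform estimates compared in the study follow from elementary projectile relations: from the take-off velocity, h = v^2/(2g), and from the flight time, h = g*t^2/8. The sample values below are illustrative.

        # Jump height from take-off velocity and from flight time.
        G = 9.81  # m/s^2

        def height_from_takeoff_velocity(v):       # v in m/s
            return v ** 2 / (2 * G)

        def height_from_flight_time(t):            # t in s
            return G * t ** 2 / 8

        print(f"TOV 2.2 m/s -> {height_from_takeoff_velocity(2.2) * 100:.1f} cm")
        print(f"FT  0.45 s  -> {height_from_flight_time(0.45) * 100:.1f} cm")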

  1. Experimental investigation of time and repeated cycles in nucleate pool boiling of alumina/water nanofluid on polished and machined surfaces

    Science.gov (United States)

    Rajabzadeh Dareh, F.; Haghshenasfard, M.; Nasr Esfahany, M.; Salimi Jazi, H.

    2018-06-01

    Pool boiling heat transfer of pure water and nanofluids on a copper block has been studied experimentally. Nanofluids with concentrations of 0.0025, 0.005 and 0.01 vol.% were employed, and two simple surfaces (a polished and a machined copper surface) were used as the heating surfaces. The results indicated that the critical heat flux (CHF) in boiling of the fluids on the polished surface is 7% higher than the CHF on the machined surface. In the case of the machined surface, the heat transfer coefficient (HTC) of the 0.01 vol.% nanofluid is about 37% higher than the HTC of the base fluid, while on the polished surface the average HTC of the 0.01% nanofluid is about 19% lower than the HTC of pure water. The results also showed that the boiling time and the number of boiling cycles on the polished surface change the heat transfer performance. By increasing the boiling time from 5 to 10 min, the roughness increases by about 150%, but on increasing the boiling time to 15 min, the additional roughness enhancement is only 8%.

  2. Experimental Setups for Single Event Effect Studies

    OpenAIRE

    N. H. Medina; V. A. P. Aguiar; N. Added; F. Aguirre; E. L. A. Macchione; S. G. Alberton; M. A. G. Silveira; J. Benfica; F. Vargas; B. Porcher

    2016-01-01

    Experimental setups are being prepared to test and qualify electronic devices regarding their tolerance to Single Event Effects (SEE). A multiple test setup and a new beam line developed especially for SEE studies at the São Paulo 8 UD Pelletron accelerator were prepared. This accelerator produces proton beams and heavy-ion beams up to 107Ag. A superconducting linear accelerator, which is under construction, may fulfill all of the European Space Agency requirements to qualify electronic...

  3. Design of rotating electrical machines

    CERN Document Server

    Pyrhonen, Juha; Hrabovcova, Valeria

    2013-01-01

    In one complete volume, this essential reference presents an in-depth overview of the theoretical principles and techniques of electrical machine design. This timely new edition offers up-to-date theory and guidelines for the design of electrical machines, taking into account recent advances in permanent magnet machines as well as synchronous reluctance machines. New coverage includes: brand new material on the ecological impact of the motors, covering the eco-design principles of rotating electrical machines; an expanded section on the design of permanent magnet synchronous machines, now repo

  4. Characterization of a neutron imaging setup at the INES facility

    Energy Technology Data Exchange (ETDEWEB)

    Durisi, E.A., E-mail: elisabettaalessandra.durisi@unito.it [Università di Torino, Dipartimento di Fisica, Via Pietro Giuria 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); Visca, L. [Università di Torino, Dipartimento di Fisica, Via Pietro Giuria 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); Albertin, F.; Brancaccio, R. [Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); Corsi, J. [Università di Torino, Dipartimento di Fisica, Via Pietro Giuria 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); Dughera, G. [Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); Ferrarese, W. [Università di Torino, Dipartimento di Fisica, Via Pietro Giuria 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); Giovagnoli, A.; Grassi, N. [Fondazione Centro per la Conservazione ed il Restauro dei Beni Culturali “La Venaria Reale”, Piazza della Repubblica, 10078 Venaria Reale, Torino (Italy); Grazzi, F. [Consiglio Nazionale delle Ricerche, Istituto dei Sistemi Complessi, Via Madonna del Piano 10, 50019 Sesto Fiorentino, Firenze (Italy); Lo Giudice, A.; Mila, G. [Università di Torino, Dipartimento di Fisica, Via Pietro Giuria 1, 10125 Torino (Italy); Istituto Nazionale di Fisica Nucleare—Sezione di Torino, Via Pietro Giuria 1, 10125 Torino (Italy); and others

    2013-10-21

    The Italian Neutron Experimental Station (INES) located at the ISIS pulsed neutron source (Didcot, United Kingdom) provides a thermal neutron beam mainly used for diffraction analysis. A neutron transmission imaging system was also developed for beam monitoring and for aligning the sample under investigation. Although the time-of-flight neutron diffraction is a consolidated technique, the neutron imaging setup is not yet completely characterized and optimized. In this paper the performance for neutron radiography and tomography at INES of two scintillator screens read out by two different commercial CCD cameras is compared in terms of linearity, signal-to-noise ratio, effective dynamic range and spatial resolution. In addition, the results of neutron radiographies and a tomography of metal alloy test structures are presented to better characterize the INES imaging capabilities of metal artifacts in the cultural heritage field. -- Highlights: A full characterization of the present INES imaging set-up was carried out. Two CCD cameras and two scintillators (ZnS/6LiF) of different thicknesses were tested. Linearity, effective dynamic range and spatial resolution were determined. Radiographies of steep wedges were performed using the highest dynamic range setup. Tomography of a bronze cube was performed using the best spatial resolution setup.

  5. Characterization of a neutron imaging setup at the INES facility

    International Nuclear Information System (INIS)

    Durisi, E.A.; Visca, L.; Albertin, F.; Brancaccio, R.; Corsi, J.; Dughera, G.; Ferrarese, W.; Giovagnoli, A.; Grassi, N.; Grazzi, F.; Lo Giudice, A.; Mila, G.

    2013-01-01

    The Italian Neutron Experimental Station (INES) located at the ISIS pulsed neutron source (Didcot, United Kingdom) provides a thermal neutron beam mainly used for diffraction analysis. A neutron transmission imaging system was also developed for beam monitoring and for aligning the sample under investigation. Although the time-of-flight neutron diffraction is a consolidated technique, the neutron imaging setup is not yet completely characterized and optimized. In this paper the performance for neutron radiography and tomography at INES of two scintillator screens read out by two different commercial CCD cameras is compared in terms of linearity, signal-to-noise ratio, effective dynamic range and spatial resolution. In addition, the results of neutron radiographies and a tomography of metal alloy test structures are presented to better characterize the INES imaging capabilities of metal artifacts in the cultural heritage field. -- Highlights: A full characterization of the present INES imaging set-up was carried out. Two CCD cameras and two scintillators (ZnS/6LiF) of different thicknesses were tested. Linearity, effective dynamic range and spatial resolution were determined. Radiographies of steep wedges were performed using the highest dynamic range setup. Tomography of a bronze cube was performed using the best spatial resolution setup.

  6. Tritium calorimeter setup and operation

    CERN Document Server

    Rodgers, D E

    2002-01-01

    The LBNL tritium calorimeter is a stable instrument capable of measuring tritium with a sensitivity of 25 Ci. Measurement times range from 8 hours to 7 days depending on the thermal conductivity and mass of the material being measured. The instrument allows accurate tritium measurements without requiring that the sample be opened and subsampled, thus reducing personnel exposure and radioactive waste generation. The sensitivity limit is primarily due to response shifts caused by temperature fluctuation in the water bath. The fluctuations are most likely a combination of insufficient insulation from ambient air and precision limitations in the temperature controller. The sensitivity could probably be reduced to below 5 Ci if the following improvements were made: (1) Extend the external insulation to cover the entire bath and increase the top insulation. (2) Improve the seal between the air space above the bath and the outside air to reduce evaporation. This will limit the response drift as the water level drops. (...

  7. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    Science.gov (United States)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving the permutation flow-shop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution of the processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
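
    For context, the sketch below shows the classic NEH construction that the proposed heuristic refines: jobs are ordered by a priority index and inserted one at a time at the position minimizing the partial makespan. The paper's new priority and tie-breaking rules (using mean absolute deviation, skewness and kurtosis) are not reproduced; plain total processing time is used as the index here.

        # Classic NEH construction for a permutation flow shop (illustrative data).
        import numpy as np

        def makespan(seq, p):
            """p[j][m]: processing time of job j on machine m."""
            done = np.zeros(p.shape[1])
            for j in seq:
                for m in range(p.shape[1]):
                    done[m] = max(done[m], done[m - 1] if m else 0) + p[j, m]
            return done[-1]

        def neh(p):
            order = np.argsort(-p.sum(axis=1))       # priority: descending total time
            seq = [order[0]]
            for j in order[1:]:
                candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
                seq = min(candidates, key=lambda s: makespan(s, p))
            return seq

        rng = np.random.default_rng(0)
        p = rng.integers(1, 20, size=(8, 4)).astype(float)
        seq = neh(p)
        print("NEH sequence:", seq, "makespan:", makespan(seq, p))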

  8. Laboratory setup for incremental forming at IPL

    DEFF Research Database (Denmark)

    Young, Dave; Andreasen, Jan Lasson

    and machining laboratories. A laboratory equipment was developed during spring 2004 with our Cincinatti CNC milling machine as basis. The first obstacle to overcome was to establish a possibility for running CNC programs larger than capacity of the memory in the CNC controller of the milling machine. A solution...... is a collection of the work done by Dave in cooperation with Lars P. Holmbæck, Jan L. Andreasen and Niels Bay during his 1 month stay here. The work has been concentrated on following subjects: 1. Design and construction of a rig for SPIF for our Cincinatti CNC milling machine 2. Establishment of methods...

  9. Tritium calorimeter setup and operation

    International Nuclear Information System (INIS)

    Rodgers, David E.

    2002-01-01

    The LBNL tritium calorimeter is a stable instrument capable of measuring tritium with a sensitivity of 25 Ci. Measurement times range from 8 hours to 7 days depending on the thermal conductivity and mass of the material being measured. The instrument allows accurate tritium measurements without requiring that the sample be opened and subsampled, thus reducing personnel exposure and radioactive waste generation. The sensitivity limit is primarily due to response shifts caused by temperature fluctuation in the water bath. The fluctuations are most likely a combination of insufficient insulation from ambient air and precision limitations in the temperature controller. The sensitivity could probably be reduced to below 5 Ci if the following improvements were made: (1) Extend the external insulation to cover the entire bath and increase the top insulation. (2) Improve the seal between the air space above the bath and the outside air to reduce evaporation. This will limit the response drift as the water level drops. (3) Install an improved temperature controller, preferably with a built-in chiller, capable of temperature control to ±0.001 °C

  10. Precision machining commercialization

    International Nuclear Information System (INIS)

    1978-01-01

    To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee

  11. A Study of Resin as Master Jewellery Material, Surface Quality and Machining Time Improvement by Implementing Appropriate Cutting Strategy

    Directory of Open Access Journals (Sweden)

    Puspaputra Paryana

    2017-01-01

    Full Text Available This paper deals with research on machining art and jewellery products, focused on the selection of an appropriate material for jewellery masters machined by CNC. CNC machining is used to obtain a better surface finish for designs without undercuts and with more complex ornament. The need for production speed requires a minimum of process steps without significantly reducing the quality of detailed ornament. Problems occur when high surface quality is required: under these conditions a high-speed spindle is used with a low feed rate, and the resulting high temperature in the cutter-material contact zone melts the resin and forms a built-up edge (BUE). Once the BUE exists, the cutting tool no longer cuts the resin; the resin then melts due to friction, the melted resin sticks to the relief, the surface finish deteriorates, and rework must be done. Even when the required surface is achieved, problems also occur in the next process step, silicone mould making: because of the galvanization process for the silicone at about 170°C, the resin material may break or crack. Research was therefore conducted to select a resin type suitable for all production steps.

  12. Couch height–based patient setup for abdominal radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Ohira, Shingo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita (Japan); Ueda, Yoshihiro [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita (Japan); Nishiyama, Kinji [Department of Radiation Oncology, Yao Municipal Hospital, Yao (Japan); Miyazaki, Masayoshi; Isono, Masaru; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita (Japan); Kawanabe, Kiyoto [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Teshima, Teruki, E-mail: teshima-te@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2016-04-01

    There are 2 methods commonly used for patient positioning in the anterior-posterior (A-P) direction: one is the skin mark patient setup method (SMPS) and the other is the couch height–based patient setup method (CHPS). This study compared the setup accuracy of these 2 methods for abdominal radiation therapy. The enrollment for this study comprised 23 patients with pancreatic cancer. For treatments (539 sessions), patients were set up by using isocenter skin marks and thereafter treatment couch was shifted so that the distance between the isocenter and the upper side of the treatment couch was equal to that indicated on the computed tomographic (CT) image. Setup deviation in the A-P direction for CHPS was measured by matching the spine of the digitally reconstructed radiograph (DRR) of a lateral beam at simulation with that of the corresponding time-integrated electronic portal image. For SMPS with no correction (SMPS/NC), setup deviation was calculated based on the couch-level difference between SMPS and CHPS. SMPS/NC was corrected using 2 off-line correction protocols: no action level (SMPS/NAL) and extended NAL (SMPS/eNAL) protocols. Margins to compensate for deviations were calculated using the Stroom formula. A-P deviation > 5 mm was observed in 17% of SMPS/NC, 4% of SMPS/NAL, and 4% of SMPS/eNAL sessions but only in one CHPS session. For SMPS/NC, 7 patients (30%) showed deviations at an increasing rate of > 0.1 mm/fraction, but for CHPS, no such trend was observed. The standard deviations (SDs) of systematic error (Σ) were 2.6, 1.4, 0.6, and 0.8 mm and the root mean squares of random error (σ) were 2.1, 2.6, 2.7, and 0.9 mm for SMPS/NC, SMPS/NAL, SMPS/eNAL, and CHPS, respectively. Margins to compensate for the deviations were wide for SMPS/NC (6.7 mm), smaller for SMPS/NAL (4.6 mm) and SMPS/eNAL (3.1 mm), and smallest for CHPS (2.2 mm). Achieving better setup with smaller margins, CHPS appears to be a reproducible method for abdominal patient setup.

  13. Face machines

    Energy Technology Data Exchange (ETDEWEB)

    Hindle, D.

    1999-06-01

    The article surveys the latest equipment available from the world's manufacturers of a range of machines for tunnelling. These are grouped under the headings: excavators; impact hammers; road headers; and shields and tunnel boring machines. Products of thirty manufacturers are referred to. Addresses and fax numbers of companies are supplied. 5 tabs., 13 photos.

  14. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY; Reddy, Patel Bhageerath [Madison, WI

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  15. Machine Learning.

    Science.gov (United States)

    Kirrane, Diane E.

    1990-01-01

    As scientists seek to develop machines that can "learn," that is, solve problems by imitating the human brain, a gold mine of information on the processes of human learning is being discovered, expert systems are being improved, and human-machine interactions are being enhanced. (SK)

  16. Nonplanar machines

    International Nuclear Information System (INIS)

    Ritson, D.

    1989-05-01

    This talk examines methods available to minimize, but never entirely eliminate, degradation of machine performance caused by terrain following. Breaking of planar machine symmetry for engineering convenience and/or monetary savings must be balanced against small performance degradation, and can only be decided on a case-by-case basis. 5 refs

  17. Integer batch scheduling problems for a single-machine with simultaneous effect of learning and forgetting to minimize total actual flow time

    Directory of Open Access Journals (Sweden)

    Rinto Yusriski

    2015-09-01

    Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing times due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize total actual flow time, defined as the total interval between the arrival times of parts in all respective batches and their common due date. Two algorithms are proposed to solve the problem. The first, developed using the Integer Composition method, produces an optimal solution. Since the first algorithm solves the problem with a worst-case time complexity of O(n·2^(n-1)), this research also proposes a second, heuristic algorithm based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.
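
    As a toy illustration of the search space behind the stated worst-case complexity (not the paper's algorithm or its objective function), the fragment below enumerates every integer composition of n jobs into batches, which is exactly the 2^(n-1)-sized space an exhaustive search must cover; the cost function is a placeholder for the total-actual-flow-time expression.

```python
from itertools import combinations

def integer_compositions(n):
    """Enumerate all compositions of n (ordered batch-size vectors summing to n).
    There are 2**(n-1) of them, which is why exhaustive search is O(n * 2**(n-1))."""
    for cuts in range(n):
        for positions in combinations(range(1, n), cuts):
            bounds = (0,) + positions + (n,)
            yield [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

def best_batching(n, cost):
    """Exhaustively pick the composition minimising a caller-supplied cost(batch_sizes)
    function, e.g. the total-actual-flow-time expression from the paper."""
    return min(integer_compositions(n), key=cost)

# toy cost: a placeholder, NOT the paper's objective -- penalise many small batches
print(best_batching(5, cost=lambda b: sum(s * s for s in b) + 2 * len(b)))
```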

  18. The setup to investigate rare processes with neutron production

    International Nuclear Information System (INIS)

    Bystritskij, V.M.; Zhuravlev, N.I.; Merzlyakov, S.I.; Sidorov, V.T.; Stolupin, V.A.; Strelkov, A.V.; Shvetsov, V.N.

    1995-01-01

    An experimental setup has been created to study rare processes with neutron production. The detecting system comprises a scintillation detector in the form of a cup around which thermal neutron detectors (BF3 counters) set in paraffin are placed parallel to the common axis in two concentric circles. The detecting system and registering electronics make it possible to obtain time and amplitude information for each registered event. 8 refs., 5 figs

  19. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.

  20. Fractional-Order Control of a Nonlinear Time-Delay System: Case Study in Oxygen Regulation in the Heart-Lung Machine

    Directory of Open Access Journals (Sweden)

    S. J. Sadati

    2012-01-01

    Full Text Available A fractional-order controller is proposed to regulate the inlet oxygen into the heart-lung machine. An analytical approach is explained to satisfy several requirements together with the practical implementation of some restrictions for the first time. First, a nonlinear single-input single-output (SISO) time-delay model, obtained previously in the literature for the oxygen generation process in the heart-lung machine system, is introduced and completed by adding new states needed for control. Thereafter, the system is linearized using the state feedback linearization approach to obtain third-order time-delay dynamics. Classical PID and fractional-order controllers are then designed to assess the quality of the proposed technique. A set of optimal parameters for these controllers is obtained through a genetic algorithm optimization procedure that minimizes a cost function. The design method focuses on minimizing well-known performance criteria such as IAE, ISE, and ITSE. In the genetic algorithm, the controller parameters are initialized as a random population, and the best values are obtained by reducing the cost function. A time-domain simulation demonstrates the performance of the controller with respect to a traditionally optimized PID controller.
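
    A compressed sketch of the tuning step, under assumptions that depart from the paper: the plant below is a generic first-order-plus-dead-time model rather than the linearized heart-lung dynamics, and SciPy's differential evolution stands in for the genetic algorithm. It only illustrates how PID gains can be tuned by minimizing the ISE criterion over a simulated response.

```python
import numpy as np
from scipy.optimize import differential_evolution

DT, T_END, DELAY = 0.01, 20.0, 1.0   # step size, horizon, transport delay (s) -- illustrative
K_PLANT, TAU = 2.0, 3.0              # first-order plant gain and time constant -- illustrative

def ise(gains, setpoint=1.0):
    """Integral of squared error for a PID loop around a first-order-plus-dead-time plant."""
    kp, ki, kd = gains
    n, d = int(T_END / DT), int(DELAY / DT)
    u_hist = np.zeros(n + d)                      # delayed control-signal buffer
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(n):
        err = setpoint - y
        integ += err * DT
        deriv = (err - prev_err) / DT
        prev_err = err
        u_hist[k + d] = kp * err + ki * integ + kd * deriv
        y += DT * (-y + K_PLANT * u_hist[k]) / TAU  # Euler step of tau*dy/dt = -y + K*u(t - delay)
        cost += err * err * DT
    return cost

result = differential_evolution(ise, bounds=[(0, 5), (0, 2), (0, 2)], seed=1, maxiter=30)
print("best PID gains:", result.x, "ISE:", result.fun)
```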

  1. Optimization of machining fixture layout for tolerance requirements ...

    African Journals Online (AJOL)

    Dimensional accuracy of workpart under machining is strongly influenced by the layout of the fixturing elements like locators and clamps. Setup or geometrical errors in locators result in overall machining error of the feature under consideration. Therefore it is necessary to ensure that the layout is optimized for the desired ...

  2. Research of vibration resistance of non-rigid shafts turning with various technological set-ups

    Directory of Open Access Journals (Sweden)

    Vasilevykh Sergey L.

    2017-01-01

    Full Text Available The article determines the stability range of a dynamic system for turning non-rigid shafts with different technological set-ups: a standard one and one developed and improved as a result of this research. The study is topical because machining such parts involves significant difficulties caused by deformation of the workpiece under the cutting force and by vibration of the part during processing. These effects are so intense that in practice they force a significant reduction of the cutting regime, recourse to multiple-pass operation and premature wear of the cutter, and thus reduce the productivity of machining shafts on metal-cutting machines. The purpose of the present research is therefore to determine the boundaries of the stability regions for intensive turning of non-rigid shafts. The article justifies the basic theoretical principles for constructing a mathematical model of the dynamic machine system oriented towards non-free cutting. By means of the developed mathematical model, interrelations are established and the influence of various technological set-ups on the stability of the machine-tool-device-tool-blank dynamic system is revealed. The research allows a more objective representation of the complex processes that occur in a closed dynamic machine system.

  3. Development of a Committee of Artificial Neural Networks for the Performance Testing of Compressors for Thermal Machines in Very Reduced Times

    Directory of Open Access Journals (Sweden)

    Coral Rodrigo

    2015-03-01

    Full Text Available This paper presents a new test method able to infer, in less than 7 seconds, the refrigeration capacity of a compressor used in thermal machines, which represents a time reduction of approximately 99.95% relative to the standardized traditional methods. The method was developed with a view to its application on compressor manufacturing lines and to 100% of the units produced. Artificial neural networks (ANNs) were used to establish a model able to infer the refrigeration capacity based on data collected directly on the production line. The proposed method does not make use of refrigeration systems and also does not require using the compressor oil.
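
    The committee idea can be sketched as follows. This is a hypothetical example with synthetic data, not the published model: several small networks are trained on bootstrap resamples of the training set and their predictions are averaged.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-ins for production-line measurements (e.g. currents, pressures, temperatures)
X = rng.normal(size=(500, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# the committee: several networks trained from different resamples and initialisations
committee = []
for seed in range(5):
    idx = rng.integers(0, len(X_tr), len(X_tr))                 # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed)
    committee.append(net.fit(X_tr[idx], y_tr[idx]))

y_hat = np.mean([net.predict(X_te) for net in committee], axis=0)  # committee output = average
print("committee RMSE:", float(np.sqrt(np.mean((y_hat - y_te) ** 2))))
```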

  4. Prospects of a mathematical theory of human behavior in complex man-machine systems tasks. [time sharing computer analogy of automobile driving

    Science.gov (United States)

    Johannsen, G.; Rouse, W. B.

    1978-01-01

    A hierarchy of human activities is derived by analyzing automobile driving in general terms. A structural description leads to a block diagram and a time-sharing computer analogy. The range of applicability of existing mathematical models is considered with respect to the hierarchy of human activities in actual complex tasks. Other mathematical tools so far not often applied to man machine systems are also discussed. The mathematical descriptions at least briefly considered here include utility, estimation, control, queueing, and fuzzy set theory as well as artificial intelligence techniques. Some thoughts are given as to how these methods might be integrated and how further work might be pursued.

  5. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  6. Mobile phone sensors and supervised machine learning to identify alcohol use events in young adults: Implications for just-in-time adaptive interventions.

    Science.gov (United States)

    Bae, Sangwon; Chung, Tammy; Ferreira, Denzil; Dey, Anind K; Suffoletto, Brian

    2017-11-27

    Real-time detection of drinking could improve timely delivery of interventions aimed at reducing alcohol consumption and alcohol-related injury, but existing detection methods are burdensome or impractical. The aims were to evaluate whether phone sensor data and machine learning models are useful to detect alcohol use events, and to discuss implications of these results for just-in-time mobile interventions. 38 non-treatment-seeking young adult heavy drinkers downloaded the AWARE app (which continuously collected mobile phone sensor data) and reported alcohol consumption (number of drinks, start/end time of the prior day's drinking) for 28 days. We tested various machine learning models using the 20 most informative sensor features to classify time periods as non-drinking, low-risk (1 to 3/4 drinks per occasion for women/men), and high-risk drinking (>4/5 drinks per occasion for women/men). Among the 30 participants in the analyses, 207 non-drinking, 41 low-risk, and 45 high-risk drinking episodes were reported. A Random Forest model using 30-min windows with 1 day of historical data performed best for detecting high-risk drinking, correctly classifying high-risk drinking windows 90.9% of the time. The most informative sensor features were related to time (i.e., day of week, time of day), movement (e.g., change in activities), device usage (e.g., screen duration), and communication (e.g., call duration, typing speed). Preliminary evidence suggests that sensor data captured from mobile phones of young adults is useful in building accurate models to detect periods of high-risk drinking. Interventions using mobile phone sensor features could trigger delivery of a range of interventions to potentially improve effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
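
    A minimal sketch of the classification step with scikit-learn, using synthetic stand-ins for the windowed sensor features; the feature construction, class balance and hyperparameters here are assumptions, not the study's data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# each row stands for one 30-minute window described by 20 aggregated sensor features
# (e.g. hour of day, day of week, screen-on time, typing speed, activity changes) -- synthetic here
X = rng.normal(size=(600, 20))
y = rng.choice([0, 1, 2], size=600, p=[0.7, 0.15, 0.15])  # 0 = non-drinking, 1 = low-risk, 2 = high-risk

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
print("top features:", np.argsort(clf.feature_importances_)[::-1][:5])
```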

  7. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.

  8. Advanced Laboratory Setup for Testing Offshore Foundations

    DEFF Research Database (Denmark)

    Nielsen, Søren Dam; Ibsen, Lars Bo; Nielsen, Benjaminn Nordahl

    2016-01-01

    This paper describes a test setup for testing small-scale offshore foundations under realistic conditions of high pore-water pressure and high impact loads. The actuator used for loading has enough capacity to apply sufficient force and displacement to achieve both drained and undrained failure ...

  9. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    Science.gov (United States)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, condition monitoring and fault diagnosis for them are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from a rotary machine often contains condition information and fault features, methods based on TF analysis have been widely used to solve these two problems in the industrial community. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator in the short-time Fourier transform. This method can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The estimation of the IF of every component is accomplished by defining a spectrum concentration index (SCI). Moreover, this IF estimation process is operated iteratively until all the components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of our method.

  10. Electric field stimulation setup for photoemission electron microscopes.

    Science.gov (United States)

    Buzzi, M; Vaz, C A F; Raabe, J; Nolting, F

    2015-08-01

    Manipulating magnetisation by the application of an electric field in magnetoelectric multiferroics represents a timely issue due to the potential applications in low power electronics and the novel physics involved. Thanks to its element sensitivity and high spatial resolution, X-ray photoemission electron microscopy is a uniquely suited technique for the investigation of magnetoelectric coupling in multiferroic materials. In this work, we present a setup that allows for the application of in situ electric and magnetic fields while the sample is analysed in the microscope. As an example of the performances of the setup, we present measurements on Ni/Pb(Mg(0.66)Nb(0.33))O3-PbTiO3 and La(0.7)Sr(0.3)MnO3/PMN-PT artificial multiferroic nanostructures.

  11. Investigation into the accuracy of a proposed laser diode based multilateration machine tool calibration system

    International Nuclear Information System (INIS)

    Fletcher, S; Longstaff, A P; Myers, A

    2005-01-01

    Geometric and thermal calibration of CNC machine tools is required in modern machine shops, with volumetric accuracy assessment becoming the standard machine tool qualification in many industries. Laser interferometry is a popular method of measuring the errors, but this and other alternatives tend to be expensive, time consuming or both. This paper investigates the feasibility of using a laser diode based system that capitalises on the low cost nature of the diode to provide multiple laser sources for fast error measurement using multilateration. Laser diode module technology enables improved wavelength stability and spectral linewidth, which are important factors for laser interferometry. With more than three laser sources, the set-up process can be greatly simplified while providing flexibility in the location of the laser sources, improving the accuracy of the system.
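
    The multilateration step itself reduces to a small nonlinear least-squares problem. The sketch below assumes absolute source-to-target distances are available (real interferometers measure displacements, so a practical system also solves for dead-path offsets) and uses SciPy's Levenberg-Marquardt solver; the source positions and noise level are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# assumed geometry: four laser sources at known positions tracking a retroreflector on the tool
sources = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
true_point = np.array([0.3, 0.4, 0.2])
measured = np.linalg.norm(sources - true_point, axis=1) + np.random.normal(0, 1e-6, 4)  # measured lengths

def residuals(p):
    """Difference between modelled and measured source-to-target distances."""
    return np.linalg.norm(sources - p, axis=1) - measured

# Gauss-Newton style solve (Levenberg-Marquardt) from a rough initial guess
sol = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]), method="lm")
print("estimated point:", sol.x)
```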

  12. Machine Learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  13. Machine Translation

    Indian Academy of Sciences (India)

    Research MT System Example: The 'Janus' Translating Phone Project. The Janus ... based on laptops, and simultaneous translation of two speakers in a dialogue. For more ..... The current focus in MT research is on using machine learning.

  14. ASD FieldSpec Calibration Setup and Techniques

    Science.gov (United States)

    Olive, Dan

    2001-01-01

    This paper describes the Analytical Spectral Devices (ASD) FieldSpec Calibration Setup and Techniques. The topics include: 1) ASD FieldSpec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment list; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric setup; 8) Datasets Required; 9) Data files; and 10) Field of View Measurement. This paper is in viewgraph form.

  15. Optimization of pocket machining strategy in HSM

    OpenAIRE

    Msaddek, El Bechir; Bouaziz, Zoubeir; Dessein, Gilles; Baili, Maher

    2012-01-01

    Our two major concerns, which should be taken into consideration as soon as we start selecting the machining parameters, are the minimization of the machining time and keeping the high-speed machining (HSM) machine in good condition. The manufacturing strategy is one of the parameters that practically influences the machining time of the different geometrical forms, as well as the machine itself. In this article, we propose an optimization methodology of the ...

  16. Using machine learning for real-time estimates of snow water equivalent in the watersheds of Afghanistan

    Science.gov (United States)

    Bair, Edward H.; Abreu Calfa, Andre; Rittger, Karl; Dozier, Jeff

    2018-05-01

    In the mountains, snowmelt often provides most of the runoff. Operational estimates use imagery from optical and passive microwave sensors, but each has its limitations. An accurate approach, which we validate in Afghanistan and the Sierra Nevada USA, reconstructs spatially distributed snow water equivalent (SWE) by calculating snowmelt backward from a remotely sensed date of disappearance. However, reconstructed SWE estimates are available only retrospectively; they do not provide a forecast. To estimate SWE throughout the snowmelt season, we consider physiographic and remotely sensed information as predictors and reconstructed SWE as the target. The period of analysis matches the AMSR-E radiometer's lifetime from 2003 to 2011, for the months of April through June. The spatial resolution of the predictions is 3.125 km, to match the resolution of a microwave brightness temperature product. Two machine learning techniques - bagged regression trees and feed-forward neural networks - produced similar mean results, with 0-14 % bias and 46-48 mm RMSE on average. Nash-Sutcliffe efficiencies averaged 0.68 for all years. Daily SWE climatology and fractional snow-covered area are the most important predictors. We conclude that these methods can accurately estimate SWE during the snow season in remote mountains, and thereby provide an independent estimate to forecast runoff and validate other methods to assess the snow resource.
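
    A hedged sketch of the bagged-regression-trees variant with scikit-learn; the predictors and the synthetic relationship below are placeholders for the physiographic and remotely sensed inputs and the reconstructed-SWE target described in the abstract.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# synthetic stand-ins for the predictors named above: SWE climatology, fractional
# snow-covered area, elevation, day of year, brightness temperature, ...
X = rng.normal(size=(2000, 6))
y = 200 + 80 * X[:, 0] + 40 * X[:, 1] + 10 * X[:, 2] + rng.normal(scale=20, size=2000)  # "reconstructed SWE" (mm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingRegressor(n_estimators=100, random_state=0)  # default base estimator: a regression tree
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"RMSE = {np.sqrt(np.mean((pred - y_te) ** 2)):.1f} mm, bias = {np.mean(pred - y_te):.1f} mm")
```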

  17. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces

    Science.gov (United States)

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena

    2013-06-01

    Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
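
    The decoder that was mapped onto the SNN is, at its core, a standard Kalman filter over kinematic states observed through binned firing rates. The sketch below shows only that conventional floating-point baseline (the NEF/SNN mapping is not reproduced here); the matrices and dimensions are illustrative.

```python
import numpy as np

def kalman_decode(A, W, C, Q, z_seq, x0, P0):
    """Kalman filter as used in BMI decoding: x_t = A x_{t-1} + w (kinematic state),
    z_t = C x_t + q (binned firing rates). Returns the filtered state estimates."""
    x, P = x0, P0
    estimates = []
    for z in z_seq:
        # predict
        x = A @ x
        P = A @ P @ A.T + W
        # update with the neural observation
        S = C @ P @ C.T + Q
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x)
    return np.array(estimates)

# toy dimensions: 4-D state (pos_x, pos_y, vel_x, vel_y), 20 "neurons"
rng = np.random.default_rng(0)
A = np.array([[1, 0, 0.05, 0], [0, 1, 0, 0.05], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
W = 1e-3 * np.eye(4)
C = rng.normal(size=(20, 4))
Q = 0.1 * np.eye(20)
z_seq = rng.normal(size=(50, 20))                    # stand-in for binned spike counts
xs = kalman_decode(A, W, C, Q, z_seq, x0=np.zeros(4), P0=np.eye(4))
print(xs.shape)
```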

  18. Optical sensor system for time-resolved quantification of methane concentrations: Validation measurements in a rapid compression machine

    Science.gov (United States)

    Bauke, Stephan; Golibrzuch, Kai; Wackerbarth, Hainer; Fendt, Peter; Zigan, Lars; Seefeldt, Stefan; Thiele, Olaf; Berg, Thomas

    2018-05-01

    Lowering greenhouse gas emissions is one of the most challenging demands of today's society. Especially, the automotive industry struggles with the development of more efficient internal combustion (IC) engines. As an alternative to conventional fuels, methane has the potential for a significant emission reduction. In methane fuelled engines, the process of mixture formation, which determines the properties of combustion after ignition, differs significantly from gasoline and diesel engines and needs to be understood and controlled in order to develop engines with high efficiency. This work demonstrates the development of a gas sensing system that can serve as a diagnostic tool for measuring crank-angle resolved relative air-fuel ratios in methane-fuelled near-production IC engines. By application of non-dispersive infrared absorption spectroscopy at two distinct spectral regions in the ν3 absorption band of methane around 3.3 μm, the system is able to determine fuel density and temperature simultaneously. A modified spark plug probe allows for straightforward application at engine test stations. Here, the application of the detection system in a rapid compression machine is presented, which enables validation and characterization of the system on well-defined gas mixtures under engine-like dynamic conditions. In extension to a recent proof-of-principle study, a refined data analysis procedure is introduced that allows the correction of artefacts originating from mechanical distortions of the sensor probe. In addition, the measured temperatures are compared to data obtained with a commercially available system based on the spectrally resolved detection of water absorption in the near infrared.

  19. Handbook of machine soldering SMT and TH

    CERN Document Server

    Woodgate, Ralph W

    1996-01-01

    A shop-floor guide to the machine soldering of electronics Sound electrical connections are the operational backbone of every piece of electronic equipment-and the key to success in electronics manufacturing. The Handbook of Machine Soldering is dedicated to excellence in the machine soldering of electrical connections. Self-contained, comprehensive, and down-to-earth, it cuts through jargon, peels away outdated notions, and presents all the information needed to select, install, and operate machine soldering equipment. This fully updated and revised volume covers all of the new technologies and processes that have emerged in recent years, most notably the use of surface mount technology (SMT). Supplemented with 200 illustrations, this thoroughly accessible text Describes reflow and wave soldering in detail, including reflow soldering of SMT boards and the use of nitrogen blankets * Explains the setup, operation, and maintenance of a variety of soldering machines * Discusses theory, selection, and control met...

  20. OptiCentric lathe centering machine

    Science.gov (United States)

    Buß, C.; Heinisch, J.

    2013-09-01

    High precision optics depend on precisely aligned lenses. The shift and tilt of individual lenses as well as the air gap between elements require accuracies in the single micron regime. These accuracies are hard to meet with traditional assembly methods. Instead, lathe centering can be used to machine the mount with respect to the optical axis. Using a diamond turning process, all relevant errors of single mounted lenses can be corrected in one post-machining step. Building on the OptiCentric® and OptiSurf® measurement systems, Trioptics has developed their first lathe centering machines. The machine and specific design elements of the setup will be shown. For example, the machine can be used to turn optics for i-line steppers with highest precision.

  1. Quick setup of test unit for accelerator control system

    International Nuclear Information System (INIS)

    Fu, W.; D'Ottavio, T.; Gassner, D.; Nemesure, S.; Morris, J.

    2011-01-01

    Testing a single hardware unit of an accelerator control system often requires the setup of a program with a graphical user interface. Developing a dedicated application for a specific hardware unit test can be time consuming, and the application may become obsolete after the unit tests. This paper documents a methodology for quick design and setup of an interface focused on performing unit tests of accelerator equipment with minimum programming work. The method has three components. The first is a generic accelerator device object (ADO) manager which can be used to set up, store, and log testing control parameters for any unit testing system. The second involves the design of a TAPE (Tool for Automated Procedure Execution) sequence file that specifies and implements all the testing and control logic. The third is the design of a PET (parameter editing tool) page that provides the unit tester with all the necessary control parameters required for testing. This approach has been used for testing the horizontal plane of the Stochastic Cooling Motion Control System at RHIC.

  2. High-resolution continuous-flow analysis setup for water isotopic measurement from ice cores using laser spectroscopy

    Science.gov (United States)

    Emanuelsson, B. D.; Baisden, W. T.; Bertler, N. A. N.; Keller, E. D.; Gkinis, V.

    2015-07-01

    Here we present an experimental setup for water stable isotope (δ18O and δD) continuous-flow measurements and provide metrics defining the performance of the setup during a major ice core measurement campaign (Roosevelt Island Climate Evolution; RICE). We also use the metrics to compare alternate systems. Our setup is the first continuous-flow laser spectroscopy system that uses off-axis integrated cavity output spectroscopy (OA-ICOS; analyzer manufactured by Los Gatos Research, LGR) in combination with an evaporation unit to continuously analyze water samples from an ice core. A Water Vapor Isotope Standard Source (WVISS) calibration unit, manufactured by LGR, was modified to (1) enable measurements on several water standards, (2) increase the temporal resolution by reducing the response time and (3) reduce the influence from memory effects. While this setup was designed for the continuous-flow analysis (CFA) of ice cores, it can also continuously analyze other liquid or vapor sources. The custom setups provide a shorter response time (~ 54 and 18 s for the 2013 and 2014 setup, respectively) compared to the original WVISS unit (~ 62 s), which is an improvement in measurement resolution. Another improvement compared to the original WVISS is that the custom setups have a reduced memory effect. Stability tests comparing the custom and WVISS setups were performed and Allan deviations (σAllan) were calculated to determine precision at different averaging times. For the custom 2013 setup the precision after integration times of 10³ s is 0.060 and 0.070 ‰ for δ18O and δD, respectively. The corresponding σAllan values for the custom 2014 setup are 0.030, 0.060 and 0.043 ‰ for δ18O, δD and δ17O, respectively. For the WVISS setup the precision is 0.035, 0.070 and 0.042 ‰ after 10³ s for δ18O, δD and δ17O, respectively. Both the custom setups and WVISS setup are influenced by instrumental drift with δ18O being more drift sensitive than δD. The
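
    The Allan deviation used to quote these precisions can be computed directly from a calibration time series. A minimal non-overlapping implementation follows; the data below are synthetic stand-ins for a logged δ value, not measurements from this setup.

```python
import numpy as np

def allan_deviation(x, dt, taus):
    """Non-overlapping Allan deviation of a sampled signal x (sampling interval dt)
    at the requested averaging times taus."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))              # samples per averaging block
        n_blocks = len(x) // m
        if n_blocks < 2:
            out.append(np.nan)
            continue
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        out.append(np.sqrt(avar))
    return np.array(out)

rng = np.random.default_rng(0)
d18O = rng.normal(0.0, 0.2, 20000) + 1e-5 * np.arange(20000)   # white noise plus slow drift
print(allan_deviation(d18O, dt=1.0, taus=[1, 10, 100, 1000]))
```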

  3. A simple Lissajous curves experimental setup

    Science.gov (United States)

    Şahin Kızılcık, Hasan; Damlı, Volkan

    2018-05-01

    The aim of this study is to develop an experimental setup to produce Lissajous curves. The setup was made using a smartphone, a powered speaker (computer speaker), a balloon, a laser pointer and a piece of mirror. Lissajous curves are formed as follows: a piece of mirror is attached to a balloon. The balloon is vibrated by the sound signal provided by the speaker connected to a smartphone. The laser beam is reflected off the mirror and the reflection takes the shape of a Lissajous curve. The curves are formed by the combination of two frequencies (the frequency of the sound signal and the natural vibration frequency of the balloon). They can be used to measure the ratio of the frequencies.
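
    The curves themselves are easy to reproduce numerically, which is useful for comparing the projected laser figure with the expected frequency ratio. A short matplotlib sketch; the frequencies and phase below are arbitrary examples.

```python
import numpy as np
import matplotlib.pyplot as plt

def lissajous(f1, f2, phase=np.pi / 2, n=5000):
    """Curve traced when two perpendicular oscillations of frequency f1 and f2 are combined,
    as happens for the laser spot reflected off the vibrating balloon."""
    t = np.linspace(0, 2, n)
    return np.sin(2 * np.pi * f1 * t + phase), np.sin(2 * np.pi * f2 * t)

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, (f1, f2) in zip(axes, [(1, 1), (3, 2), (5, 4)]):
    x, y = lissajous(f1, f2)
    ax.plot(x, y, lw=0.8)
    ax.set_title(f"f1:f2 = {f1}:{f2}")
    ax.set_aspect("equal")
plt.tight_layout()
plt.show()
```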

  4. Narrow Artificial Intelligence with Machine Learning for Real-Time Estimation of a Mobile Agent’s Location Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cédric Beaulac

    2017-01-01

    Full Text Available We propose to use a supervised machine learning technique to track the location of a mobile agent in real time. Hidden Markov Models are used to build artificial intelligence that estimates the unknown position of a mobile target moving in a defined environment. This narrow artificial intelligence performs two distinct tasks. First, it provides real-time estimation of the mobile agent’s position using the forward algorithm. Second, it uses the Baum–Welch algorithm as a statistical learning tool to gain knowledge of the mobile target. Finally, an experimental environment is proposed, namely, a video game that we use to test our artificial intelligence. We present statistical and graphical results to illustrate the efficiency of our method.
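
    The real-time estimation task reduces to the forward algorithm. A compact sketch follows; the corridor world, transition and emission matrices below are toy assumptions, not the paper's environment, and in practice the matrices would first be learned with Baum-Welch (e.g. via a library such as hmmlearn).

```python
import numpy as np

def forward_filter(pi, A, B, observations):
    """Forward algorithm: returns P(state_t | obs_1..t) for each t, i.e. the real-time
    belief about the agent's position. pi: initial distribution, A[i, j]: transition
    probability i -> j, B[i, k]: probability of emitting observation k in state i."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()                 # normalise to keep a proper distribution
        beliefs.append(alpha)
    return np.array(beliefs)

# toy 3-cell corridor: the agent mostly moves right, observations are noisy cell readings
pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.2, 0.8],
              [0.0, 0.0, 1.0]])
B = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
print(forward_filter(pi, A, B, observations=[0, 1, 1, 2]))
```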

  5. Man-machine interface design of real-time hardware-in-loop simulation system for power regulation of nuclear heating reactor

    International Nuclear Information System (INIS)

    Ni Xiaoli; Huang Xiaojin; Dong Zhe

    2009-01-01

    It is necessary to set up a real-time hardware-in-the-loop simulation system for power regulation of the nuclear heating reactor (NHR) because the reactor is used in load-following applications such as seawater desalination and energy supply. Because the experiment data are too large to be computed, saved and displayed in real time on a single computer, a distributed configuration was adopted. The resulting system has a clear and intuitive man-machine interface, speeds up the model-calculation computer and meets the requirements of NHR power regulation. A clear and concise screen layout, easy command input and convenient results output make the system easier to verify. (authors)

  6. The spectral imaging facility: Setup characterization

    Energy Technology Data Exchange (ETDEWEB)

    De Angelis, Simone, E-mail: simone.deangelis@iaps.inaf.it; De Sanctis, Maria Cristina; Manzari, Paola Olga [Institute for Space Astrophysics and Planetology, INAF-IAPS, Via Fosso del Cavaliere, 100, 00133 Rome (Italy); Ammannito, Eleonora [Institute for Space Astrophysics and Planetology, INAF-IAPS, Via Fosso del Cavaliere, 100, 00133 Rome (Italy); Department of Earth, Planetary and Space Sciences, University of California, Los Angeles, Los Angeles, California 90095-1567 (United States); Di Iorio, Tatiana [ENEA, UTMEA-TER, Rome (Italy); Liberati, Fabrizio [Opto Service SrL, Campagnano di Roma (RM) (Italy); Tarchi, Fabio; Dami, Michele; Olivieri, Monica; Pompei, Carlo [Selex ES, Campi Bisenzio (Italy); Mugnuolo, Raffaele [Italian Space Agency, ASI, Spatial Geodesy Center, Matera (Italy)

    2015-09-15

    The SPectral IMager (SPIM) facility is a laboratory visible infrared spectrometer developed to support space borne observations of rocky bodies of the solar system. Currently, this laboratory setup is used to support the DAWN mission, which is in its journey towards the asteroid 1-Ceres, and to support the 2018 Exo-Mars mission in the spectral investigation of the Martian subsurface. The main part of this setup is an imaging spectrometer that is a spare of the DAWN visible infrared spectrometer. The spectrometer has been assembled and calibrated at Selex ES and then installed in the facility developed at the INAF-IAPS laboratory in Rome. The goal of SPIM is to collect data to build spectral libraries for the interpretation of the space borne and in situ hyperspectral measurements of planetary materials. Given its very high spatial resolution combined with the imaging capability, this instrument can also help in the detailed study of minerals and rocks. In this paper, the instrument setup is first described, and then a series of test measurements, aimed to the characterization of the main subsystems, are reported. In particular, laboratory tests have been performed concerning (i) the radiation sources, (ii) the reference targets, and (iii) linearity of detector response; the instrumental imaging artifacts have also been investigated.

  7. Measurements of operator performance - an experimental setup

    International Nuclear Information System (INIS)

    Netland, K.

    1980-01-01

    The human has to be considered an important element in a process control system, even if the degree of automation is extremely high. Other elements, e.g. computers, displays, etc., can to a large extent be described and quantified. The human (operator) is difficult to describe in a precise way, and it is just as difficult to predict his thinking and acting in a control room environment. Many factors influence his performance, such as experience, motivation, level of knowledge, training, control environment and job organization. These factors have to be described to a certain degree before guidelines for the design of the man-process interfaces and the control room layout can be developed. For decades, psychological science has accumulated knowledge of the human mind and behaviour. This knowledge has the potential to contribute positively to the effort to describe the factors influencing operator performance. Even if the human is complex, a better understanding of his thinking and acting, and a more precise description of the factors influencing his performance, can be obtained. At the OECD Halden Reactor Project an experimental set-up for such studies has been developed and implemented in the computer laboratory. The present set-up includes elements such as a computer- and display-based control room, a simulator representing a nuclear power plant, a training programme for the subjects, and methods for the experiments. Set-up modules allow reconfiguration of experiments. (orig./HP)

  8. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adopted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrate the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment has further validated the proposed estimator.

  9. Determination of efficiencies, loss mechanisms, and performance degradation factors in chopper controlled dc vehical motors. Section 2: The time dependent finite element modeling of the electromagnetic field in electrical machines: Methods and applications. Ph.D. Thesis

    Science.gov (United States)

    Hamilton, H. B.; Strangas, E.

    1980-01-01

    The time-dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters when predicting and analyzing the performance of electrical machines. The time-dependent finite element method was used in combination with a likewise time-dependent construction of a grid for the air gap region. The Maxwell stress tensor was used to calculate the airgap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper-controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars.
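
    For orientation, the airgap torque obtained from the Maxwell stress tensor in such 2-D field solutions is commonly written as a contour integral over a circle of radius r in the air gap, with the flux densities derived from the vector potential A_z; this generic form (L is the stack length) is given for reference and is not quoted from the thesis.

```latex
T \;=\; \frac{L\,r^{2}}{\mu_{0}} \int_{0}^{2\pi} B_{r}\,B_{\theta}\,\mathrm{d}\theta,
\qquad
B_{r} \;=\; \frac{1}{r}\,\frac{\partial A_{z}}{\partial \theta},
\qquad
B_{\theta} \;=\; -\,\frac{\partial A_{z}}{\partial r}.
```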

  10. Setup and evaluation of a sensor tilting system for dimensional micro- and nanometrology

    International Nuclear Information System (INIS)

    Schuler, Alexander; Hausotte, Tino; Weckenmann, Albert

    2014-01-01

    Sensors in micro- and nanometrology reach their limits when the measurement objects and surfaces feature high aspect ratios, high curvature and steep surface angles. The measurable surface angle is limited, and exceeding it leads to measurement deviations and undetectable surface points. We demonstrate a principle for adapting the sensor's working angle during the measurement, keeping the sensor at its optimal working angle. After simulation of the principle, a hardware prototype was realized. It is based on a rotary kinematic chain with two rotary degrees of freedom, which extends the measurable surface angle to ±90° and is combined with a nanopositioning and nanomeasuring machine. By applying a calibration procedure with a quasi-tactile 3D sensor based on electrical near-field interaction, the systematic position deviation of the kinematic chain is reduced. The paper shows for the first time the completed setup and integration of the prototype, the performance results of the calibration, the measurements with the prototype and the tilting principle, and finishes with the interpretation and feedback of the practical results. (paper)

  11. Machine Protection

    International Nuclear Information System (INIS)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012

  12. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.

  13. Machine Protection

    CERN Document Server

    Zerlauth, Markus; Wenninger, Jörg

    2012-01-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  14. Machine Protection

    Energy Technology Data Exchange (ETDEWEB)

    Zerlauth, Markus; Schmidt, Rüdiger; Wenninger, Jörg [European Organization for Nuclear Research, Geneva (Switzerland)

    2012-07-01

    The present architecture of the machine protection system is being recalled and the performance of the associated systems during the 2011 run will be briefly summarized. An analysis of the causes of beam dumps as well as an assessment of the dependability of the machine protection systems (MPS) itself is being presented. Emphasis will be given to events that risked exposing parts of the machine to damage. Further improvements and mitigations of potential holes in the protection systems will be evaluated along with their impact on the 2012 run. The role of rMPP during the various operational phases (commissioning, intensity ramp up, MDs...) will be discussed along with a proposal for the intensity ramp up for the start of beam operation in 2012.

  15. Vehicle speed prediction via a sliding-window time series analysis and an evolutionary least learning machine: A case study on San Francisco urban roads

    Directory of Open Access Journals (Sweden)

    Ladan Mozaffari

    2015-06-01

    Full Text Available The main goal of the current study is to take advantage of advanced numerical and intelligent tools to predict the speed of a vehicle using time series. It is clear that the uncertainty caused by the temporal behavior of the driver as well as various external disturbances on the road will affect the vehicle speed, and thus, the vehicle power demands. The prediction of upcoming power demands can be employed by the vehicle powertrain control systems to significantly improve fuel economy and emission performance. Therefore, it is important for systems design engineers and automotive industrialists to develop efficient numerical tools to overcome the risk of unpredictability associated with the vehicle speed profile on roads. In this study, the authors propose an intelligent tool called evolutionary least learning machine (E-LLM) to forecast the vehicle speed sequence. To have a practical evaluation regarding the efficacy of E-LLM, the authors use driving data collected on the San Francisco urban roads by a private Honda Insight vehicle. The concept of sliding window time series (SWTS) analysis is used to prepare the database for the speed forecasting process. To evaluate the performance of the proposed technique, a number of well-known approaches, such as the auto regressive (AR) method, back-propagation neural network (BPNN), evolutionary extreme learning machine (E-ELM), extreme learning machine (ELM), and radial basis function neural network (RBFNN), are considered. The performances of the rival methods are then compared in terms of the mean square error (MSE), root mean square error (RMSE), mean absolute percentage error (MAPE), median absolute percentage error (MDAPE), and absolute fraction of variances (R2) metrics. Through an exhaustive comparative study, the authors observed that E-LLM is a powerful tool for predicting the vehicle speed profiles. The outcomes of the current study can be of use for the engineers of automotive industry who have been
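
    The sliding-window preparation is simple to state in code. The sketch below builds (window, target) pairs from a synthetic speed trace and fits an ordinary linear predictor as a baseline; the E-LLM itself is not reproduced here, and the window length and trace are assumptions.

```python
import numpy as np

def sliding_windows(speed, window, horizon=1):
    """Turn a speed sequence into supervised pairs: the last `window` samples are the
    features, the value `horizon` steps ahead is the target (SWTS-style preparation)."""
    X, y = [], []
    for t in range(window, len(speed) - horizon + 1):
        X.append(speed[t - window:t])
        y.append(speed[t + horizon - 1])
    return np.array(X), np.array(y)

# synthetic stand-in for a recorded urban speed trace (m/s)
rng = np.random.default_rng(3)
speed = 10 + 3 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 0.5, 600)

X, y = sliding_windows(speed, window=10)
# simple baseline predictor fitted on the windows (a linear AR-style model)
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
pred = np.c_[X, np.ones(len(X))] @ coef
print("baseline RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```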

  16. Teletherapy machine

    International Nuclear Information System (INIS)

    Panyam, Vinatha S.; Rakshit, Sougata; Kulkarni, M.S.; Pradeepkumar, K.S.

    2017-01-01

    Radiation Standards Section (RSS), RSSD, BARC is the national metrology institute for ionizing radiation. RSS develops and maintains radiation standards for X-ray, beta, gamma and neutron radiations. In radiation dosimetry, traceability, accuracy and consistency of radiation measurements is very important especially in radiotherapy where the success of patient treatment is dependent on the accuracy of the dose delivered to the tumour. Cobalt teletherapy machines have been used in the treatment of cancer since the early 1950s and India had its first cobalt teletherapy machine installed at the Cancer Institute, Chennai in 1956

  17. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

    Science.gov (United States)

    Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

    2016-06-01

    Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, or (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
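
    The weighting scheme can be sketched in a few lines. In the fragment below the censoring survival function is assumed to be a known exponential for brevity (in practice it would be estimated, e.g. with a Kaplan-Meier fit on the censoring times), and the downstream learner is an arbitrary scikit-learn classifier accepting sample weights; the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ipcw_weights(follow_up, event, horizon, censor_survival):
    """Inverse-probability-of-censoring weights for a fixed prediction horizon.
    Subjects censored before the horizon without an event get weight 0 (their status is
    unknown); everyone else is weighted by 1 / P(uncensored up to the time their status
    became known), supplied here by `censor_survival(t)`."""
    w = np.zeros(len(follow_up))
    known = (event == 1) | (follow_up >= horizon)        # outcome by `horizon` is known
    t_known = np.minimum(follow_up, horizon)
    w[known] = 1.0 / censor_survival(t_known[known])
    return w

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
event_time = rng.exponential(8.0, n) * np.exp(-0.5 * X[:, 0])
censor_time = rng.exponential(10.0, n)
follow_up = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)
horizon = 5.0
label = ((event == 1) & (follow_up <= horizon)).astype(int)   # adverse event within the horizon

censor_survival = lambda t: np.exp(-t / 10.0)   # assumed known here; estimate from data in practice
w = ipcw_weights(follow_up, event, horizon, censor_survival)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[w > 0], label[w > 0], sample_weight=w[w > 0])
print("weighted training subjects:", int((w > 0).sum()))
```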

  18. Developing Probabilistic Operating Rules for Real-time Conjunctive Use of Surface and Groundwater Resources:Application of Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Bazargan-Lari

    2011-01-01

    Full Text Available Developing optimal operating policies for conjunctive use of surface and groundwater resources when different decision makers and stakeholders with conflicting objectives are involved is usually a challenging task. This problem would be more complex when objectives related to surface and groundwater quality are taken into account. In this paper, a new methodology is developed for real time conjunctive use of surface and groundwater resources. In the proposed methodology, a well-known multi-objective genetic algorithm, namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II), is employed to develop a Pareto front among the objectives. The Young conflict resolution theory is also used for resolving the conflict of interests among decision makers. To develop the real time conjunctive use operating rules, Probabilistic Support Vector Machines (PSVMs), which are capable of providing probability distribution functions of decision variables, are utilized. The proposed methodology is applied to the Tehran Aquifer in the Tehran metropolitan area, Iran. Stakeholders in the study area have some conflicting interests including supplying water with acceptable quality, reducing pumping costs, improving groundwater quality and controlling the groundwater table fluctuations. In the proposed methodology, the MODFLOW and MT3D groundwater quantity and quality simulation models are linked with the NSGA-II optimization model to develop Pareto fronts among the objectives. The best solutions on the Pareto fronts are then selected using the Young conflict resolution theory. The selected solution (optimal monthly operating policies) is used to train and verify a PSVM. The results show the significance of applying an integrated conflict resolution approach and the capability of support vector machines for the real time conjunctive use of surface and groundwater resources in the study area. It is also shown that the validation accuracy of the proposed operating rules is higher than 80

  19. Patient setup aid with wireless CCTV system in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yang Kyun; Cho, Woong; Park, Jong Min [Seoul National University Graduate School, Seoul (Korea, Republic of); Ha, Sung Whan; Ye, Sung Joon [Seoul National University College of Medicine, Seoul (Korea, Republic of); Park, Suk Won [Chung-Ang University Cellege of Medicine, Seoul (Korea, Republic of); Huh, Soon Nyung [Seoul National University Hospital, Seoul (Korea, Republic of)

    2006-12-15

    To develop a wireless CCTV system in semi-beam's eye view (BEV) to monitor daily patient setup in radiation therapy. In order to get patient images in semi-BEV, CCTV cameras are installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via radio frequency signal (~2.4 GHz and 10 mW RF output). An expected problem with this system is radio frequency interference, which is solved utilizing RF shielding with Cu foils and median filtering software. The images are analyzed by our custom-made software. In the software, three anatomical landmarks on the patient surface are indicated by a user, then the 3-dimensional structures are automatically obtained and registered by utilizing a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization. This algorithm is applied to phantom images to investigate the setup accuracy. A respiratory gating system is also researched with real-time image processing. A line-laser marker projected on a patient's surface is extracted by binary image processing and the breath pattern is calculated and displayed in real-time. More than 80% of the camera noises from the linear accelerator are eliminated by wrapping the camera with copper foils. The accuracy of the localization procedure is found to be on the order of 1.5 ± 0.7 mm with a point phantom and sub-millimeters and degrees with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring is possible with a delay time of ~0.7 sec. The wireless CCTV camera system is a novel tool that can monitor daily patient setups. The feasibility of a respiratory gating system with the wireless CCTV is promising.

  20. Patient setup aid with wireless CCTV system in radiation therapy

    International Nuclear Information System (INIS)

    Park, Yang Kyun; Cho, Woong; Park, Jong Min; Ha, Sung Whan; Ye, Sung Joon; Park, Suk Won; Huh, Soon Nyung

    2006-01-01

    To develop a wireless CCTV system in semi-beam's eye view (BEV) to monitor daily patient setup in radiation therapy. In order to get patient images in semi-BEV, CCTV cameras are installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via radio frequency signal (∼2.4 GHz and 10 mW RF output). An expected problem with this system is radio frequency interference, which is solved utilizing RF shielding with Cu foils and median filtering software. The images are analyzed by our custom-made software. In the software, three anatomical landmarks on the patient surface are indicated by a user, then the 3-dimensional structures are automatically obtained and registered by utilizing a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization. This algorithm is applied to phantom images to investigate the setup accuracy. A respiratory gating system is also researched with real-time image processing. A line-laser marker projected on a patient's surface is extracted by binary image processing and the breath pattern is calculated and displayed in real-time. More than 80% of the camera noises from the linear accelerator are eliminated by wrapping the camera with copper foils. The accuracy of the localization procedure is found to be on the order of 1.5 ± 0.7 mm with a point phantom and sub-millimeters and degrees with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring is possible with a delay time of ∼0.7 sec. The wireless CCTV camera system is a novel tool that can monitor daily patient setups. The feasibility of a respiratory gating system with the wireless CCTV is promising.

  1. Prediction and Real-Time Compensation of Qubit Decoherence Via Machine Learning (Open Access, Publisher’s Version)

    Science.gov (United States)

    2017-01-16

    [Only fragments of this record survive extraction, mixed with figure-panel labels and acknowledgements. The recoverable text states that prediction accuracy increases with n as the algorithm learns more about the temporal correlations in the noise; the figure panels concern noise injection, stabilisation up to tk, and prediction forward by tk (Δt) using the qubit memory. The record acknowledges support from the ARC Centre of Excellence for Engineered Quantum Systems CE110001013, ARC Discovery Project DP130103823, and the Intelligence Advanced...]

  2. Street ball, swim team and the sour cream machine: a cluster analysis of out of school time participation portfolios.

    Science.gov (United States)

    Nelson, Ingrid Ann; Gastic, Billie

    2009-10-01

    Adolescents spend only a fraction of their waking hours in school and what they do with the rest of their time varies dramatically. Despite this, research on out-of-school time has largely focused on structured programming. The authors analyzed data from the Educational Longitudinal Study of 2002 (ELS:2002) to examine the out-of-school time activity portfolios of 6,338 high school sophomores, accounting for time spent in school clubs and sports as well as 17 other activities. The analytical sample was balanced with respect to sex and racially and ethnically diverse: 49% female, 67% White, 10% Latino, 10% African American, and 6% Asian and Pacific Islander. Approximately 76% of the sample attended public schools, 30% were in the highest socioeconomic quartile, and 20% were in the lowest socioeconomic quartile. The authors identified five distinct out-of-school time activity portfolios based on a cluster analysis. The demographic profiles of students by portfolio type differed significantly with respect to sex, race/ethnicity, socioeconomic status, school type and location. Students by portfolio type also differed significantly in terms of measures of academic success, school behavior, victimization and perceptions of school climate, controlling for covariates. These findings underscore the importance of more complex considerations of adolescents' out-of-school time.

  3. Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    The dynamic mechanical response of resistance welding machine is very important to the weld quality in resistance welding especially in projection welding when collapse or deformation of work piece occurs. It is mainly governed by the mechanical parameters of machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of machine and a special test set-up called breaking test set-up are developed. Based on the model and the test results, the mechanical parameters of machine are determined, including the equivalent mass, damping coefficient, and stiffness for both upper and lower electrode systems. This has laid a foundation for modeling the welding process and selecting the welding parameters considering the machine factors. The method is straightforward and easy to be applied in industry since the whole procedure is based on tests with no requirements...

  4. Detection of correct and incorrect measurements in real-time continuous glucose monitoring systems by applying a postprocessing support vector machine.

    Science.gov (United States)

    Leal, Yenny; Gonzalez-Abril, Luis; Lorencio, Carol; Bondia, Jorge; Vehi, Josep

    2013-07-01

    Support vector machines (SVMs) are an attractive option for detecting correct and incorrect measurements in real-time continuous glucose monitoring systems (RTCGMSs), because their learning mechanism can introduce a postprocessing strategy for imbalanced datasets. The proposed SVM considers the geometric mean to obtain a more balanced performance between sensitivity and specificity. To test this approach, 23 critically ill patients receiving insulin therapy were monitored over 72 h using an RTCGMS, and a dataset of 537 samples, classified according to International Standards Organization (ISO) criteria (372 correct and 165 incorrect measurements), was obtained. The results obtained were promising for patients with septic shock or with sepsis, for which the proposed system can be considered as reliable. However, this approach cannot be considered suitable for patients without sepsis.
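
    A rough sketch of the evaluation idea mentioned above: a class-weighted SVM scored by the geometric mean of sensitivity and specificity. The random features and the 372/165 class split are placeholders; the study's actual features and postprocessing strategy are not reproduced.

      # Minimal sketch, assuming a class-weighted SVM and the geometric-mean metric.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import confusion_matrix
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(537, 5))                        # placeholder features
      y = np.r_[np.ones(372), np.zeros(165)].astype(int)   # 372 correct / 165 incorrect, as in the study
      rng.shuffle(y)

      Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
      clf = SVC(kernel="rbf", class_weight="balanced").fit(Xtr, ytr)

      tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
      sens, spec = tp / (tp + fn), tn / (tn + fp)
      print("geometric mean:", np.sqrt(sens * spec))       # balanced sensitivity/specificity score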

  5. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours' duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document...

  6. Machine rates for selected forest harvesting machines

    Science.gov (United States)

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...

  7. Lot sizing and scheduling of a single machine for multiple products with setup and shortages

    Directory of Open Access Journals (Sweden)

    Horacio Ocampo Azocar

    2014-08-01

    Full Text Available This work develops a procedure to solve an extension of the classical economic lot scheduling problem (ELSP) with multiple products, sequence-dependent setup times and backlogged demand in a single-machine production environment. The procedure uses a heuristic from the literature to generate lot-production sequences, which are evaluated by a nonlinear optimization model developed by the authors that incorporates backlogging and the fulfilment of service levels, minimizing setup, inventory and backlogging costs over a time horizon. The fundamental problem is to determine the production sequence and the lot size for each production run in order to meet customer demand in a given planning horizon. An incremental heuristic procedure that interchanges two and three lots is applied to obtain the lot production sequence, and these lot sequences are evaluated with the nonlinear optimization model to determine the size of each lot in the sequence. The method is illustrated by solving a small instance of the problem.
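
    For illustration only, the sketch below evaluates the total sequence-dependent setup time of a candidate lot sequence and improves it by two-lot interchanges, in the spirit of the interchange heuristic mentioned above; the setup matrix is random, and the lot-sizing, inventory and backlogging parts of the model are omitted.

      # Illustrative sketch: sequence-dependent setup time plus pairwise-interchange improvement.
      import itertools
      import numpy as np

      rng = np.random.default_rng(2)
      n = 6                                    # number of product lots in the cycle
      setup = rng.uniform(0.5, 3.0, (n, n))    # setup[i, j]: changeover time from lot i to lot j
      np.fill_diagonal(setup, 0.0)

      def total_setup(seq):
          return sum(setup[a, b] for a, b in zip(seq, seq[1:]))

      seq = list(range(n))
      improved = True
      while improved:                          # keep swapping while any 2-lot interchange helps
          improved = False
          for i, j in itertools.combinations(range(n), 2):
              cand = seq.copy()
              cand[i], cand[j] = cand[j], cand[i]
              if total_setup(cand) < total_setup(seq):
                  seq, improved = cand, True
      print(seq, round(total_setup(seq), 2))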

  8. Laboratory setup for temperature and humidity measurements

    CERN Document Server

    Eimre, Kristjan

    2015-01-01

    In active particle detectors, the temperature and humidity conditions must be under constant monitoring and control, as even small deviations from the norm cause changes to detector characteristics and result in a loss of precision. To monitor the temperature and humidity, different kinds of sensors are used, which must be calibrated beforehand to ensure their accuracy. To calibrate the large number of sensors that are needed for the particle detectors and other laboratory work, a calibration system is needed. The purpose of the current work was to develop a laboratory setup for temperature and humidity sensor measurements and calibration.

  9. A simple setup for neutron tomography at the Portuguese nuclear research reactor

    International Nuclear Information System (INIS)

    Pereira, M.A. Stanojev; Marques, J.G.; Pugliesi, R.

    2012-01-01

    A simple setup for neutron radiography and tomography was recently installed at the Portuguese Research Reactor. The objective of this work was to determine the operational characteristics of the installed setup, namely the irradiation time to obtain the best dynamic range for individual images and the spatial resolution. The performance of the equipment was demonstrated by imaging a fragment of a seventeenth-century decorative tile. (author)

  10. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    Science.gov (United States)

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
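
    A simplified sketch of the local-modelling idea: axis-orthogonal splits partition the input space and a kernel regressor (scikit-learn's SVR as a stand-in for an LSSVM) is fitted in each region. The split rule, depth, and toy series are assumptions; the actual HBT algorithm and its validity functions are not reproduced.

      # Rough sketch of local models over axis-orthogonal partitions of the input space.
      import numpy as np
      from sklearn.svm import SVR

      def fit_local_models(X, y, depth=2):
          if depth == 0 or len(y) < 20:
              return ("leaf", SVR(kernel="rbf").fit(X, y))
          dim = np.argmax(X.var(axis=0))            # split the widest input dimension
          thr = np.median(X[:, dim])
          left, right = X[:, dim] <= thr, X[:, dim] > thr
          return ("node", dim, thr,
                  fit_local_models(X[left], y[left], depth - 1),
                  fit_local_models(X[right], y[right], depth - 1))

      def predict(tree, x):
          if tree[0] == "leaf":
              return tree[1].predict(x.reshape(1, -1))[0]
          _, dim, thr, lo, hi = tree
          return predict(lo, x) if x[dim] <= thr else predict(hi, x)

      # toy series embedded as (x_t, x_{t-1}) -> x_{t+1}
      t = np.arange(1000) * 0.05
      s = np.sin(t) + 0.3 * np.sin(3.1 * t)
      X = np.c_[s[1:-1], s[:-2]]; y = s[2:]
      tree = fit_local_models(X, y)
      print(predict(tree, X[-1]))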

  11. Machine protection for FLASH and the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Froehlich, Lars

    2009-05-15

    The Free-Electron Laser in Hamburg (FLASH) and the future European X-Ray Free-Electron Laser (XFEL) are sources of brilliant extreme-ultraviolet and X-ray radiation pulses. Both facilities are based on superconducting linear accelerators (linacs) that can produce and transport electron beams of high average power. With up to 90 kW or up to 600 kW of power, respectively, these beams hold a serious potential to damage accelerator components. This thesis discusses several passive and active machine protection measures needed to ensure safe operation. At FLASH, dark current from the rf gun electron source has activated several accelerator components to unacceptable radiation levels. Its transport through the linac is investigated with detailed tracking simulations using a parallelized and enhanced version of the tracking code Astra; possible remedies are evaluated. Beam losses can lead to the demagnetization of permanent magnet insertion devices. A number of beam loss scenarios typical for FLASH are investigated with shower simulations. A shielding setup is designed and its efficiency is evaluated. For the design parameters of FLASH, it is concluded that the average relative beam loss in the undulators must be controlled to a level of about 10^-8. FLASH is equipped with an active machine protection system (MPS) comprising more than 80 photomultiplier-based beam loss monitors and several subsystems. The maximum response time to beam losses is less than 4 μs. Setup procedures and calibration algorithms for MPS subsystems and components are introduced and operational problems are addressed. Finally, an architecture for a fully programmable machine protection system for the XFEL is presented. Several options for the topology of this system are reviewed, with the result that an availability goal of at least 0.999 for the MPS is achievable with moderate hardware requirements. (orig.)

  12. Machine protection for FLASH and the European XFEL

    International Nuclear Information System (INIS)

    Froehlich, Lars

    2009-05-01

    The Free-Electron Laser in Hamburg (FLASH) and the future European X-Ray Free-Electron Laser (XFEL) are sources of brilliant extreme-ultraviolet and X-ray radiation pulses. Both facilities are based on superconducting linear accelerators (linacs) that can produce and transport electron beams of high average power. With up to 90 kW or up to 600 kW of power, respectively, these beams hold a serious potential to damage accelerator components. This thesis discusses several passive and active machine protection measures needed to ensure safe operation. At FLASH, dark current from the rf gun electron source has activated several accelerator components to unacceptable radiation levels. Its transport through the linac is investigated with detailed tracking simulations using a parallelized and enhanced version of the tracking code Astra; possible remedies are evaluated. Beam losses can lead to the demagnetization of permanent magnet insertion devices. A number of beam loss scenarios typical for FLASH are investigated with shower simulations. A shielding setup is designed and its efficiency is evaluated. For the design parameters of FLASH, it is concluded that the average relative beam loss in the undulators must be controlled to a level of about 10^-8. FLASH is equipped with an active machine protection system (MPS) comprising more than 80 photomultiplier-based beam loss monitors and several subsystems. The maximum response time to beam losses is less than 4 μs. Setup procedures and calibration algorithms for MPS subsystems and components are introduced and operational problems are addressed. Finally, an architecture for a fully programmable machine protection system for the XFEL is presented. Several options for the topology of this system are reviewed, with the result that an availability goal of at least 0.999 for the MPS is achievable with moderate hardware requirements. (orig.)

  13. A new experimental setup established for low-energy nuclear astrophysics studies

    International Nuclear Information System (INIS)

    Chen, S.Z.; Xu, S.W.; He, J.J.; Hu, J.; Rolfs, C.E.; Zhang, N.T.; Ma, S.B.; Zhang, L.Y.; Hou, S.Q.; Yu, X.Q.; Ma, X.W.

    2014-01-01

    An experimental setup for low-energy nuclear astrophysics studies has recently been established at the Institute of Modern Physics (IMP), Lanzhou, China. The driver machine is a 320 kV high voltage platform, which can provide intense currents of proton, alpha and many heavy ion beams. The energy of a proton beam was calibrated against the nominal platform high voltage by using the well-known resonant reaction 11B(p,γ)12C and the non-resonant reaction 12C(p,γ)13N. An accuracy of better than ±0.5 keV was achieved. The detection system consists of a Clover-type high-purity germanium detector, a silicon detector and a plastic scintillator. The performance of the detectors was tested by several experiments. The astrophysical S-factors of the 7Li(p,γ)8Be and 7Li(p,α)3He reactions were measured with this new setup, and our data agree with the values found in the literature. In addition, the upgrade of our driver machine and experimental setup is discussed. As a future goal, the National Deep Underground Laboratory in China, the deepest underground laboratory in the world, is envisaged

  14. EPQ model for imperfect production processes with rework and random preventive machine time for deteriorating items and trended demand

    Directory of Open Access Journals (Sweden)

    Shah Nita H.

    2015-01-01

    Full Text Available An economic production quantity (EPQ) model has been analyzed for trended demand, where units in inventory are subject to deterioration at a constant rate. The system allows rework of imperfect units, and the preventive maintenance time is random. A search method is used to study the model. The proposed methodology is validated by a numerical example. A sensitivity analysis is carried out to determine the critical model parameters. It is observed that the rate of change of demand and the deterioration rate have a significant impact on the decision variables and the total cost of the inventory system. The model is highly sensitive to the production and demand rates.

  15. A simple experimental setup for magneto-dielectric measurements

    Energy Technology Data Exchange (ETDEWEB)

    Manimuthu, P.; Shanker, N. Praveen; Kumar, K. Saravana; Venkateswaran, C., E-mail: cvunom@hotmail.com

    2014-09-01

    The increasing demand for the multiferroic materials calls for the need of an experimental setup that will facilitate magneto-dielectric coupling measurements. A connector setup designed makes it possible to measure and analyze the dielectric properties of the material under the influence of a magnetic field. The salient feature of this setup is in its incorporation with the already existing experimental facilities.

  16. A simple experimental setup for magneto-dielectric measurements

    International Nuclear Information System (INIS)

    Manimuthu, P.; Shanker, N. Praveen; Kumar, K. Saravana; Venkateswaran, C.

    2014-01-01

    The increasing demand for the multiferroic materials calls for the need of an experimental setup that will facilitate magneto-dielectric coupling measurements. A connector setup designed makes it possible to measure and analyze the dielectric properties of the material under the influence of a magnetic field. The salient feature of this setup is in its incorporation with the already existing experimental facilities

  17. Electric machines

    CERN Document Server

    Gross, Charles A

    2006-01-01

    BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...

  18. Charging machine

    International Nuclear Information System (INIS)

    Medlin, J.B.

    1976-01-01

    A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine. 3 claims, 11 drawing figures

  19. Minimizing the Makespan for a Two-Stage Three-Machine Assembly Flow Shop Problem with the Sum-of-Processing-Time Based Learning Effect

    Directory of Open Access Journals (Sweden)

    Win-Chin Lin

    2018-01-01

    Full Text Available Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, the learning effect accrued from repetitive work experiences, which leads to the reduction of actual job processing times, indeed exists in many production environments. However, the issue of learning effects is rarely addressed in solving a two-stage assembly scheduling problem. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already processed jobs to minimize the makespan criterion. Because this problem is proved to be NP-hard, a branch-and-bound method embedded with some developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are subsequently compared through computational experiments and nonparametric statistical analyses, including the Kruskal–Wallis test and a multiple comparison procedure.
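
    The makespan evaluation at the core of such a branch-and-bound or metaheuristic search can be sketched as below. The learning model assumed here, actual time = p * (1 + sum of already-processed normal times)^a with a < 0, is one common sum-of-processing-time form and may differ in detail from the paper's; the job data are made up.

      # Hedged sketch: makespan of a job sequence in a two-stage assembly flow shop
      # (two parallel fabrication machines, one assembly machine) under an assumed
      # sum-of-processing-time learning effect.
      def makespan(seq, p1, p2, p3, a=-0.2):
          c1 = c2 = c3 = 0.0          # machine-ready times
          s1 = s2 = s3 = 0.0          # sums of already-processed normal times
          for j in seq:
              t1 = p1[j] * (1.0 + s1) ** a
              t2 = p2[j] * (1.0 + s2) ** a
              t3 = p3[j] * (1.0 + s3) ** a
              c1, c2 = c1 + t1, c2 + t2            # stage-1 machines work in parallel
              c3 = max(c3, c1, c2) + t3            # assembly starts when both parts are ready
              s1, s2, s3 = s1 + p1[j], s2 + p2[j], s3 + p3[j]
          return c3

      p1, p2, p3 = [4, 2, 6, 3], [3, 5, 2, 4], [5, 3, 4, 2]
      print(makespan([0, 1, 2, 3], p1, p2, p3))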

  20. Characterization and classification of seven citrus herbs by liquid chromatography-quadrupole time-of-flight mass spectrometry and genetic algorithm optimized support vector machines.

    Science.gov (United States)

    Duan, Li; Guo, Long; Liu, Ke; Liu, E-Hu; Li, Ping

    2014-04-25

    Citrus herbs have been widely used in traditional medicine and cuisine in China and other countries since ancient times. However, the authentication and quality control of Citrus herbs have always been a challenging task due to their similar morphological characteristics and the diversity of the multiple components present in the complicated matrix. In the present investigation, we developed a novel strategy to characterize and classify seven Citrus herbs based on chromatographic analysis and chemometric methods. Firstly, the chemical constituents in seven Citrus herbs were globally characterized by liquid chromatography combined with quadrupole time-of-flight mass spectrometry (LC-QTOF-MS). Based on their retention time, UV spectra and MS fragmentation behavior, a total of 75 compounds were identified or tentatively characterized in these herbal medicines. Secondly, a segmental monitoring method based on LC-variable wavelength detection was developed for simultaneous quantification of ten marker compounds in these Citrus herbs. Thirdly, based on the contents of the ten analytes, genetic algorithm optimized support vector machines (GA-SVM) was employed to differentiate and classify the 64 samples covering these seven herbs. The obtained classifier showed good prediction performance and the overall prediction accuracy reached 96.88%. The proposed strategy is expected to provide new insight for authentication and quality control of traditional herbs. Copyright © 2014 Elsevier B.V. All rights reserved.
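
    A hedged sketch of the GA-SVM idea: a small genetic search over the SVM hyperparameters (C, gamma) scored by cross-validated accuracy. The placeholder feature matrix stands in for the ten quantified marker contents across the 64 samples; the population size, operators, and number of generations are illustrative choices, not the study's settings.

      # Illustrative genetic-algorithm search over SVM hyperparameters.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      X = rng.normal(size=(64, 10))             # 64 samples x 10 marker contents (placeholder)
      y = np.arange(64) % 7                     # placeholder labels for the 7 herbs

      def fitness(ind):
          C, gamma = 10 ** ind[0], 10 ** ind[1]
          return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=4).mean()

      pop = rng.uniform(-3, 3, size=(12, 2))    # genes: log10(C), log10(gamma)
      for _ in range(10):                       # a few GA generations
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[-6:]]                    # selection
          children = parents[rng.integers(0, 6, (6, 2)), [0, 1]]    # crossover by gene mixing
          children += rng.normal(0, 0.3, children.shape)            # mutation
          pop = np.vstack([parents, children])
      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("best log10(C), log10(gamma):", best)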

  1. A comparison of ambient casino sound and music: effects on dissociation and on perceptions of elapsed time while playing slot machines.

    Science.gov (United States)

    Noseworthy, Theodore J; Finlay, Karen

    2009-09-01

    This research examined the effects of a casino's auditory character on estimates of elapsed time while gambling. More specifically, this study varied whether the sound heard while gambling was ambient casino sound alone or ambient casino sound accompanied by music. The tempo and volume of both the music and ambient sound were varied to manipulate temporal engagement and introspection. One hundred and sixty (males = 91) individuals played slot machines in groups of 5-8, after which they provided estimates of elapsed time. The findings showed that the typical ambient casino auditive environment, which characterizes the majority of gaming venues, promotes understated estimates of elapsed duration of play. In contrast, when music is introduced into the ambient casino environment, it appears to provide a cue of interval from which players can more accurately reconstruct elapsed duration of play. This is particularly the case when the tempo of the music is slow and the volume is high. Moreover, the confidence with which time estimates are held (as reflected by latency of response) is higher in an auditive environment with music than in an environment that is comprised of ambient casino sounds alone. Implications for casino management are discussed.

  2. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  3. Joint Training of Deep Boltzmann Machines

    OpenAIRE

    Goodfellow, Ian; Courville, Aaron; Bengio, Yoshua

    2012-01-01

    We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.

  4. Machine Directional Register System Modeling for Shaft-Less Drive Gravure Printing Machines

    Directory of Open Access Journals (Sweden)

    Shanhui Liu

    2013-01-01

    Full Text Available In the latest type of gravure printing machines referred to as the shaft-less drive system, each gravure printing roller is driven by an individual servo motor, and all motors are electrically synchronized. The register error is regulated by a speed difference between the adjacent printing rollers. In order to improve the control accuracy of register system, an accurate mathematical model of the register system should be investigated for the latest machines. Therefore, the mathematical model of the machine directional register (MDR system is studied for the multicolor gravure printing machines in this paper. According to the definition of the MDR error, the model is derived, and then it is validated by the numerical simulation and experiments carried out in the experimental setup of the four-color gravure printing machines. The results show that the established MDR system model is accurate and reliable.

  5. Stochastic scheduling on unrelated machines

    NARCIS (Netherlands)

    Skutella, Martin; Sviridenko, Maxim; Uetz, Marc Jochen

    2013-01-01

    Two important characteristics encountered in many real-world scheduling problems are heterogeneous machines/processors and a certain degree of uncertainty about the actual sizes of jobs. The first characteristic entails machine dependent processing times of jobs and is captured by the classical

  6. Man and Machines: Three Criticisms.

    Science.gov (United States)

    Schneider, Edward F.

    As machines have become a more common part of daily life through the passage of time, the idea that the line separating man and machine is slowly fading has become more popular as well. This paper examines three critics of change through their most famous works. One of the most popular views of Mary Shelley's "Frankenstein" is that it is a…

  7. A setup for measurement of beam stability and position using position sensitive detector for Indus-1

    International Nuclear Information System (INIS)

    Nathwani, R.K.; Joshi, D.K.; Tyagi, Y.; Soni, R.S.; Puntambekar, T.A.; Pithawa, C.K.

    2009-01-01

    The 450 MeV electron synchrotron radiation source Indus-1 is operational at RRCAT. A set-up has been developed to measure the relative transverse positional stability of the electron beam and its position with microns resolution using position sensitive photodiodes. The set-up has been installed at the diagnostics beam line of Indus-1. Synchrotron light from photo physics beamline was reflected out by inserting a Ni coated mirror and was focused onto a duo-lateral position sensitive photodiode by using two mirrors of 1.25 meter focal length to obtain unity magnification. The set-up consists of a duo-lateral position sensitive detector (PSD), precision processing electronics and a PC based data acquisition system. A computer program captures the processed signals on to a PC using GPIB interface and displays vertical position of the beam in real time. The paper describes the salient features of the setup developed for measurement of beam stability. (author)

  8. Machine learning systems

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, R

    1984-05-01

    With the dramatic rise of expert systems has come a renewed interest in the fuel that drives them: knowledge. For it is specialist knowledge which gives expert systems their power. But extracting knowledge from human experts in symbolic form has proved arduous and labour-intensive. So the idea of machine learning is enjoying a renaissance. Machine learning is any automatic improvement in the performance of a computer system over time, as a result of experience. Thus a learning algorithm seeks to do one or more of the following: cover a wider range of problems, deliver more accurate solutions, obtain answers more cheaply, and simplify codified knowledge. 6 references.

  9. Evaluating the Generalization Value of Process-based Models in a Deep-in-time Machine Learning framework

    Science.gov (United States)

    Shen, C.; Fang, K.

    2017-12-01

    Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
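
    One speculative way to "integrate PBM solutions" into a deep-in-time network is sketched below: the process-based model's simulated soil moisture is concatenated with the meteorological forcings as an extra input channel to an LSTM. The shapes, layer sizes, and the integration scheme itself are assumptions for illustration, not the authors' published architecture.

      # Speculative sketch: PBM output as an additional LSTM input feature (PyTorch).
      import torch
      import torch.nn as nn

      batch, seq_len, n_forcings = 8, 30, 5
      forcings = torch.randn(batch, seq_len, n_forcings)   # e.g. precipitation, temperature, ...
      pbm_sm   = torch.randn(batch, seq_len, 1)            # PBM-simulated soil moisture
      inputs   = torch.cat([forcings, pbm_sm], dim=-1)     # PBM output appended as a feature

      class SoilMoistureLSTM(nn.Module):
          def __init__(self, n_in, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(n_in, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)
          def forward(self, x):
              out, _ = self.lstm(x)
              return self.head(out)                        # one soil-moisture value per time step

      model = SoilMoistureLSTM(n_in=n_forcings + 1)
      pred = model(inputs)                                 # shape: (batch, seq_len, 1)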

  10. Characterisation of Dynamic Mechanical Properties of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2005-01-01

    The dynamic mechanical properties of a resistance welding machine have significant influence on weld quality, which must be considered when simulating the welding process numerically. However, due to the complexity of the machine structure and the mutual coupling of components of the machine system, it is very difficult to measure or calculate the basic, independent machine parameters required in a mathematical model of the machine dynamics, and no test method has so far been presented in literature, which can be applied directly in an industrial environment. In this paper, a mathematical model characterizing the dynamic mechanical characteristics of resistance welding machines is suggested, and a test set-up is designed determining the basic, independent machine parameters required in the model. The model is verified by performing a series of mechanical tests as well as real projection welds.

  11. High precision neutron interferometer setup S18b

    International Nuclear Information System (INIS)

    Hasegawa, Y.; Lemmel, H.

    2011-01-01

    The present setup at S18 is a multi-purpose instrument. It is used both for interferometry and as a Bonse-Hart camera for USANS (Ultra Small Angle Neutron Scattering) spectroscopy with wide-range wavelength tunability. Some recent measurements demand higher stability of the instrument, which led us to propose a new setup dedicated particularly to neutron interferometer experiments requiring high phase stability. To keep both options available, we suggest building the new setup in addition to the old one. By extending the space of the present setup by 1.5 m upstream, both setups can be accommodated side by side. (authors)

  12. Mathematical determination of setup parameters for carcinoma breast cases

    International Nuclear Information System (INIS)

    Prasad, P.B.L.D.; Suresh, P.; Sridhar, A.

    2008-01-01

    Determining proper patient setup parameters such as IFD, gantry angles and field width in carcinoma breast cases is of prime importance for achieving precise treatment. In a center where a 3D treatment planning system (TPS) and a simulator are not available to determine the setup parameters, contouring of the target region is essential, which is time consuming. The mathematical formula described here provides instant patient setup parameters using machine parameters. (author)

  13. Simulations of Quantum Turing Machines by Quantum Multi-Stack Machines

    OpenAIRE

    Qiu, Daowen

    2005-01-01

    As is well known, in classical computation, Turing machines, circuits, multi-stack machines, and multi-counter machines are equivalent, that is, they can simulate each other in polynomial time. In quantum computation, Yao [11] first proved that for any quantum Turing machine $M$, there exists a quantum Boolean circuit $(n,t)$-simulating $M$, where $n$ denotes the length of input strings, and $t$ is the number of move steps before the machine halts. However, the simulations of quantum Turing ma...

  14. Setup uncertainties in linear accelerator based stereotactic radiosurgery and a derivation of the corresponding setup margin for treatment planning.

    Science.gov (United States)

    Zhang, Mutian; Zhang, Qinghui; Gan, Hua; Li, Sicong; Zhou, Su-min

    2016-02-01

    In the present study, clinical stereotactic radiosurgery (SRS) setup uncertainties from image-guidance data are analyzed, and the corresponding setup margin is estimated for treatment planning purposes. Patients undergoing single-fraction SRS at our institution were localized using invasive head ring or non-invasive thermoplastic masks. Setup discrepancies were obtained from an in-room x-ray patient position monitoring system. Post treatment re-planning using the measured setup errors was performed in order to estimate the individual target margins sufficient to compensate for the actual setup errors. The formula of setup margin for a general SRS patient population was derived by proposing a correlation between the three-dimensional setup error and the required minimal margin. Setup errors of 104 brain lesions were analyzed, in which 81 lesions were treated using an invasive head ring, and 23 were treated using non-invasive masks. In the mask cases with image guidance, the translational setup uncertainties achieved the same level as those in the head ring cases. Re-planning results showed that the margins for individual patients could be smaller than the clinical three-dimensional setup errors. The derivation of setup margin adequate to address the patient setup errors was demonstrated by using the arbitrary planning goal of treating 95% of the lesions with sufficient doses. With image guidance, the patient setup accuracy of mask cases can be comparable to that of invasive head rings. The SRS setup margin can be derived for a patient population with the proposed margin formula to compensate for the institution-specific setup errors. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. Hybrid genetic algorithm for minimizing non productive machining ...

    African Journals Online (AJOL)

    Minimization of the non-productive time of the tool during 2.5D milling significantly reduces the machining cost. The tool is retracted and repositioned several times in multi-pocket jobs during rough machining, which consumes 15 to 30% of the total machining time depending on the complexity of the job. The automatic ...

  16. Representational Machines

    DEFF Research Database (Denmark)

    Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social...... to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological...... possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research....

  17. Shear machines

    International Nuclear Information System (INIS)

    Astill, M.; Sunderland, A.; Waine, M.G.

    1980-01-01

    A shear machine for irradiated nuclear fuel elements has a replaceable shear assembly comprising a fuel element support block, a shear blade support and a clamp assembly which hold the fuel element to be sheared in contact with the support block. A first clamp member contacts the fuel element remote from the shear blade and a second clamp member contacts the fuel element adjacent the shear blade and is advanced towards the support block during shearing to compensate for any compression of the fuel element caused by the shear blade (U.K.)

  18. Intrafractional Target Motions and Uncertainties of Treatment Setup Reference Systems in Accelerated Partial Breast Irradiation

    International Nuclear Information System (INIS)

    Yue, Ning J.; Goyal, Sharad; Zhou Jinghao; Khan, Atif J.; Haffty, Bruce G.

    2011-01-01

    Purpose: This study investigated the magnitude of intrafractional motion and level of accuracy of various setup strategies in accelerated partial breast irradiation (APBI) using three-dimensional conformal external beam radiotherapy. Methods and Materials: At lumpectomy, gold fiducial markers were strategically sutured to the surrounding walls of the cavity. Weekly fluoroscopy imaging was conducted at treatment to investigate the respiration-induced target motions. Daily pre- and post-RT kV imaging was performed, and images were matched to digitally reconstructed radiographs based on bony anatomy and fiducial markers, respectively, to determine the intrafractional motion magnitudes over the course of treatment. The positioning differences of the laser tattoo- and the bony anatomy-based setups compared with those of the marker-based setup (benchmark) were also determined. The study included 21 patients. Results: Although lung exhibited significant motion, the average marker motion amplitude on the fluoroscopic image was about 1 mm. Over a typical treatment time period, average intrafractional motion magnitude was 4.2 mm and 2.6 mm based on the marker and bony anatomy matching, respectively. The bony anatomy- and laser tattoo-based interfractional setup errors, with respect to the fiducial marker-based setup, were 7.1 and 9.0 mm, respectively. Conclusions: Respiration has limited effects on the target motion during APBI. Bony anatomy-based treatment setup improves the accuracy relative to that of the laser tattoo-based setup approach. Since fiducial markers are sutured directly to the surgical cavity, the marker-based approach can further improve the interfractional setup accuracy. On average, a seroma cavity exhibits intrafractional motion of more than 4 mm, a magnitude that is larger than that which is otherwise derived based on bony anatomy matching. A seroma-specific marker-based approach has the potential to improve treatment accuracy by taking the true inter

  19. Electricity of machine tool

    International Nuclear Information System (INIS)

    Gijeon media editorial department

    1977-10-01

    This book is divided into three parts. The first part deals with electrical machines, covering topics from generators to motors, the motor as a power source of the machine tool, and electrical devices for machine tools such as main-circuit switches, automatic switches, knife switches and push buttons, snap switches, protection devices, timers, solenoids, and rectifiers. The second part handles wiring diagrams, including the basic electrical circuits of machine tools and the wiring diagrams of machines such as milling machines, planers and grinding machines. The third part introduces fault diagnosis of machines, giving practical solutions according to the fault diagnosis and the diagnostic method with voltage and resistance measurements using a tester.

  20. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  1. Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration

    Directory of Open Access Journals (Sweden)

    Benjamin Montavon

    2018-02-01

    Full Text Available Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary methods of compensation to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows for fast and automated online measurements of the axes’ motion errors and thermal conditions with comparable accuracy, lower cost, and smaller dimensions as compared to state-of-the-art optical measuring instruments for offline machine tool calibration. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.

  2. RISMA: A Rule-based Interval State Machine Algorithm for Alerts Generation, Performance Analysis and Monitoring Real-Time Data Processing

    Science.gov (United States)

    Laban, Shaban; El-Desouky, Aly

    2013-04-01

    The monitoring of real-time systems is a challenging and complicated process. So, there is a continuous need to improve the monitoring process through the use of new intelligent techniques and algorithms for detecting exceptions and anomalous behaviours and generating the necessary alerts during the workflow monitoring of such systems. Interval-based or period-based theorems have been discussed, analysed, and used by many researchers in Artificial Intelligence (AI), philosophy, and linguistics. As explained by Allen, there are 13 relations between any two intervals. There have also been many studies of interval-based temporal reasoning and logics over the past decades. Interval-based theorems can be used for monitoring real-time interval-based data processing. However, increasing the number of processed intervals makes the implementation of such theorems a complex and time-consuming process, as the relationships between such intervals increase exponentially. To overcome this problem, this paper presents a Rule-based Interval State Machine Algorithm (RISMA) for processing, monitoring, and analysing the behaviour of interval-based data received from real-time sensors. The proposed intelligent algorithm uses the Interval State Machine (ISM) approach to model any number of interval-based data into well-defined states as well as to infer over them. An interval-based state transition model and methodology are presented to identify the relationships between the different states of the proposed algorithm. By using such a model, the unlimited number of relationships between similarly large numbers of intervals can be reduced to only 18 direct relationships using the proposed well-defined states. For testing the proposed algorithm, necessary inference rules and code have been designed and applied to the continuous data received in near real-time from the stations of the International Monitoring System (IMS) by the International Data Centre (IDC) of the Preparatory
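
    The 13 Allen relations the algorithm builds on can be computed directly from interval endpoints, as in the sketch below; the ISM state handling and the reduction to 18 direct state relations described in the record are not shown.

      # Minimal sketch: Allen relation between two closed intervals a = (a1, a2), b = (b1, b2).
      def allen_relation(a, b):
          (a1, a2), (b1, b2) = a, b
          if a2 < b1:  return "before"
          if b2 < a1:  return "after"
          if a2 == b1: return "meets"
          if b2 == a1: return "met-by"
          if a1 == b1 and a2 == b2: return "equal"
          if a1 == b1: return "starts" if a2 < b2 else "started-by"
          if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
          if b1 < a1 and a2 < b2: return "during"
          if a1 < b1 and b2 < a2: return "contains"
          return "overlaps" if a1 < b1 else "overlapped-by"

      print(allen_relation((1, 3), (2, 5)))   # overlaps
      print(allen_relation((2, 4), (2, 6)))   # starts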

  3. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  4. Design and development of multiple sample counting setup

    International Nuclear Information System (INIS)

    Rath, D.P.; Murali, S.; Babu, D.A.R.

    2010-01-01

    Full text: The analysis of active samples on a regular basis for ambient air activity and floor contamination from the radiochemical lab accounts for a major chunk of the operational activity under the Health Physicist's responsibility. The requirement for daily air sample analysis with immediate and delayed counting from various labs, in addition to smear swipe samples from lab checks, led to the need for a system that could cater to multiple sample analysis in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was designed and fabricated. It has arrangements for loading 10 samples in slots in order, which get counted in a time-programmed manner with the results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has resulted in a reduction of the man-hours consumed in counting and recording the results

  5. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

    Purpose: Adaptive Radiation Therapy (ART) is a closed-loop feedback process where each patient's treatment is adaptively optimized according to the individual variation information measured during the course of treatment. The process aims to maximize the benefits of treatment for the individual patient. A prospective study is currently being conducted to test the feasibility and effectiveness of ART for clinical use. The present study is limited to compensating the effects of systematic setup error. Methods and Materials: The study includes 20 patients treated on a linear accelerator equipped with a computer controlled multileaf collimator (MLC) and an electronic portal imaging device (EPID). Alpha cradles are used to immobilize those patients treated for disease in the thoracic and abdominal regions, and thermal plastic masks for the head and neck. Portal images are acquired daily. Setup error of each treatment field is quantified off-line every day. As determined from an earlier retrospective study of different clinical sites, the measured setup variations from the first 4 to 9 days are used to estimate the systematic setup error and the standard deviation of the random setup error for each field. Setup adjustment is made if the estimated systematic setup error of the treatment field is larger than or equal to 2 mm. Instead of the conventional approach of repositioning the patient, setup correction is implemented by reshaping the MLC to compensate for the estimated systematic error. The entire process from analysis of portal images to the implementation of the modified MLC field is performed via computer network. Systematic and random setup errors of the treatment after adjustment are compared with those prior to adjustment. Finally, the frequency distributions of block overlap accumulated throughout the treatment course are evaluated. Results: Sixty-seven percent of all treatment fields were reshaped to compensate for the estimated systematic errors. At the time of this writing
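
    The off-line decision rule described above reduces to simple statistics over the first few daily measurements, as sketched below; the sample shifts, the number of initial fractions, and the single-axis view are illustrative placeholders.

      # Sketch: estimate systematic (mean) and random (SD) setup error from early fractions
      # and flag an MLC reshaping when the systematic component reaches 2 mm.
      import numpy as np

      daily_shift_mm = np.array([2.4, 1.8, 3.1, 2.2, 2.7, 1.9])   # e.g. one-axis shifts from portal images

      def assess_setup(shifts, threshold_mm=2.0):
          systematic = shifts.mean()           # estimate of the systematic error
          random_sd = shifts.std(ddof=1)       # estimate of the random error
          adjust = abs(systematic) >= threshold_mm
          return systematic, random_sd, adjust

      sys_err, rand_sd, adjust = assess_setup(daily_shift_mm)
      print(f"systematic={sys_err:.1f} mm, random SD={rand_sd:.1f} mm, reshape MLC: {adjust}")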

  6. High-throughput, label-free, single-cell, microalgal lipid screening by machine-learning-equipped optofluidic time-stretch quantitative phase microscopy.

    Science.gov (United States)

    Guo, Baoshan; Lei, Cheng; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Jiang, Yiyue; Tanaka, Yo; Ozeki, Yasuyuki; Goda, Keisuke

    2017-05-01

    The development of reliable, sustainable, and economical sources of alternative fuels to petroleum is required to tackle the global energy crisis. One such alternative is microalgal biofuel, which is expected to play a key role in reducing the detrimental effects of global warming as microalgae absorb atmospheric CO2 via photosynthesis. Unfortunately, conventional analytical methods only provide population-averaged lipid amounts and fail to characterize a diverse population of microalgal cells with single-cell resolution in a non-invasive and interference-free manner. Here high-throughput label-free single-cell screening of lipid-producing microalgal cells with optofluidic time-stretch quantitative phase microscopy was demonstrated. In particular, Euglena gracilis, an attractive microalgal species that produces wax esters (suitable for biodiesel and aviation fuel after refinement) within lipid droplets, was investigated. The optofluidic time-stretch quantitative phase microscope is based on an integration of a hydrodynamic-focusing microfluidic chip, an optical time-stretch quantitative phase microscope, and a digital image processor equipped with machine learning. As a result, it provides both the opacity and phase maps of every single cell at a high throughput of 10,000 cells/s, enabling accurate cell classification without the need for fluorescent staining. Specifically, the dataset was used to characterize heterogeneous populations of E. gracilis cells under two different culture conditions (nitrogen-sufficient and nitrogen-deficient) and achieve the cell classification with an error rate of only 2.15%. The method holds promise as an effective analytical tool for microalgae-based biofuel production. © 2017 International Society for Advancement of Cytometry.

  7. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
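
    The abstract above describes a standard supervised-classification workflow: per-light-curve features (period, amplitude, color, autocorrelation) are extracted and a Support Vector Machine separates QSOs from other variables, validated by cross-validation before scoring the survey. The sketch below illustrates that kind of pipeline only; the synthetic feature table, class sizes, and kernel settings are assumptions, not the authors' code.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Stand-in feature table: one row per light curve, columns = period, amplitude,
      # color, autocorrelation (synthetic numbers; a real run would use MACHO features).
      n_qso, n_other = 58, 500
      X = np.vstack([
          rng.normal([2.5, 0.3, 0.8, 0.7], 0.2, size=(n_qso, 4)),     # "QSO-like" features
          rng.normal([1.0, 0.5, 0.4, 0.2], 0.3, size=(n_other, 4)),   # other variables
      ])
      y = np.r_[np.ones(n_qso), np.zeros(n_other)]

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

      # Cross-validation on the labelled set, analogous to the efficiency test in the abstract
      print("recall on the QSO class:", cross_val_score(model, X, y, cv=5, scoring="recall").mean())

      # Fit on all labelled objects, then flag candidates among unlabelled light curves
      model.fit(X, y)
      X_survey = rng.normal([1.2, 0.45, 0.5, 0.3], 0.4, size=(1000, 4))  # stand-in survey features
      candidates = np.flatnonzero(model.predict(X_survey) == 1)
      print(len(candidates), "QSO candidates flagged")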

  8. Yield and quality of milk and udder health in Martina Franca ass: effects of daily interval and time of machine milking

    Directory of Open Access Journals (Sweden)

    Giovanni Martemucci

    2010-01-01

    Full Text Available Twenty asses of the Martina Franca breed, machine milked twice a day, were used to assess the influence of milking interval (3-h, 5-h, and 8-h; N=5) and time (700, 1200 and 1900) on milk yield and udder health. Individual milk samples were taken to determine fat, protein and lactose content. A sensory analysis profile was also assessed. The milk's total bacterial count (TBC), somatic cell content (SCC) and the udder's skin temperature were considered to assess udder health. Milk yield increases by 28.4% (P<0.01) with a milking interval from 3-h to 8-h and is higher (P<0.01) at morning milking. The maximum milk yield per milking corresponds to the 700 milking (1416.9 mL), thus indicating a circadian rhythm in milk secretion processes. Milking intervals of 5 and 8 hours cause a decrease (P<0.01) in milk fat and lactose content. The 8-h interval leads to an increase (P<0.01) in SCC, but without any significance for udder health. No alterations in TBC, clinical evaluation or udder temperature were observed. Milk organoleptic characteristics were better with the 3-h milking interval.

  9. Machinability of nickel based alloys using electrical discharge machining process

    Science.gov (United States)

    Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.

    2018-04-01

    High temperature materials such as nickel based alloys and austenitic steels are frequently used for manufacturing critical aero engine turbine components. Literature on the conventional and unconventional machining of steel materials has been abundant over the past three decades. However, machining studies on superalloys are still a challenging task due to their inherent properties and quality; these materials are difficult to cut by conventional processes. This research therefore focuses on an unconventional machining process for nickel alloys. Inconel 718 and Monel 400 are the two candidate materials used for the electrical discharge machining (EDM) process. The investigation is to prepare a blind hole using a copper electrode of 6 mm diameter. Electrical parameters are varied to produce the plasma spark for the diffusion process, and the machining time is kept constant so that the experimental results of both materials can be compared. The influence of the process parameters on the tool wear mechanism and material removal is considered in the proposed experimental design. During machining, the tool tends to discharge more material due to the production of a high energy plasma spark and the eddy current effect. The surface morphology of the machined surface was observed with a high resolution FE-SEM. Fused electrode material was found as spherical clumps over the machined surface. Surface roughness was also measured from the surface profile using a profilometer. It is confirmed that there is no deviation and the precise roundness of the drilled hole is maintained.

  10. Research on mechanical and sensoric set-up for high strain rate testing of high performance fibers

    Science.gov (United States)

    Unger, R.; Schegner, P.; Nocke, A.; Cherif, C.

    2017-10-01

    Within this research project, the tensile behavior of high performance fibers, such as carbon fibers, is investigated under high velocity loads. This contribution focuses on the clamp set-up of two testing machines. Based on a kinematic model, weight-optimized clamps are designed and evaluated. By analyzing the complex dynamic behavior of conventional high velocity testing machines, it has been shown that the impact typically exhibits an elastic characteristic. This leads to barely predictable breaking speeds and will not work at higher speeds, where the acceleration force exceeds material specifications. Therefore, a plastic impact behavior has to be achieved, even at lower testing speeds. This type of impact behavior at lower speeds can be realized by means of some minor test set-up adaptations.

  11. Developing and implementing a high precision setup system

    Science.gov (United States)

    Peng, Lee-Cheng

    High-precision radiotherapy (HPRT) was first implemented in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed alongside a growing interest in sophisticated treatment with tight margins and high-dose gradients. This dissertation establishes the complete management for HPRT in the process of frameless SRT, including image-guided localization, immobilization, and dose evaluation. An ideal, precise positioning system allows ease of relocation, real-time assessment of patient movement, high accuracy, and no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the Align RT3C 3D surface camera system (ART, VisionRT), which combines 3D surface images and uses a real-time tracking technique, was developed to ensure accurate positioning in the first place. Uncertainties of the current optical tracking system, which causes patient discomfort owing to the additional bite plates of the dental impression technique and the external markers, are identified. The accuracy and feasibility of ART are validated by comparisons with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and the multiple IGSPSs is the most important factor in ensuring system performance in daily use. Systematic errors arising from the variety of phantoms, and long measurement times caused by switching phantoms, were identified. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness. A reasonable action level has been established by considering dosimetric relevance and clinic flow. As for intricate treatments, the effect of dose deviations caused by setup errors on tumor coverage and on toxicity to OARs remains uncertain. The lack of adequate dosimetric simulations based on the true treatment coordinates from

  12. Toroidal helical quartz forming machine

    International Nuclear Information System (INIS)

    Hanks, K.W.; Cole, T.R.

    1977-01-01

    The Scyllac fusion experimental machine used 10 cm diameter smooth bore discharge tubes formed into a simple toroidal shape prior to 1974. At about that time, it was discovered that a discharge tube was required to follow the convoluted shape of the load coil. A machine was designed and built to form a fused quartz tube with a toroidal shape. The machine accommodates quartz tubes from 5 cm to 20 cm in diameter, forming them into a 4 m toroidal radius with a 1 to 5 cm helical displacement. The machine will also generate a helical shape on a linear tube. Two sets of tubes with different helical radii and wavelengths have been successfully fabricated. The problems encountered in the design and fabrication of this machine are discussed

  13. Machine intelligence and signal processing

    CERN Document Server

    Vatsa, Mayank; Majumdar, Angshul; Kumar, Ajay

    2016-01-01

    This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are a result of a 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered to be separate areas of research. However, in recent times the two communities have been getting closer. In a very abstract fashion, signal processing is the study of operator design. The contribution of signal processing has been to devise operators for restoration, compression, etc. Applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning – instead of designing operators based on heuristics (for example wavelets), the trend is to learn these operators (for example dictionary learning). And thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intel...

  14. Aplicación de un algoritmo ACO al problema de taller de flujo de permutación con tiempos de preparación dependientes de la secuencia y minimización de makespan An ant colony algorithm for the permutation flowshop with sequence dependent setup times and makespan minimization

    Directory of Open Access Journals (Sweden)

    Eduardo Salazar Hornig

    2011-08-01

    Full Text Available This work studied the permutation flowshop scheduling problem with sequence-dependent setup times and makespan minimization. An ant colony optimization (ACO) algorithm is proposed that maps the original problem onto a structure similar to the asymmetric Traveling Salesman Problem (TSP). The algorithm is evaluated on problems proposed in the literature and compared with an adaptation of the NEH (Nawaz-Enscore-Ham) heuristic. A neighborhood search is subsequently applied to the solutions obtained by both the ACO algorithm and the NEH heuristic.
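
    The record above describes casting the setup-dependent flowshop as an asymmetric TSP so that ant colony optimization can build job sequences from pheromone stored on job-to-job transitions. The following is a minimal illustrative sketch of that idea, not the authors' algorithm: the anticipatory-setup makespan model, parameter values, and the tiny random instance are all assumptions.

      import random

      def makespan(seq, proc, setup):
          # proc[m][j]    : processing time of job j on machine m
          # setup[m][i][j]: anticipatory setup time on machine m when job j follows job i
          #                 (the diagonal setup[m][j][j] is used as the initial setup)
          n_machines = len(proc)
          finish = [0.0] * n_machines
          prev = None
          for j in seq:
              for m in range(n_machines):
                  s = setup[m][prev][j] if prev is not None else setup[m][j][j]
                  arrive = finish[m - 1] if m > 0 else 0.0
                  finish[m] = max(finish[m] + s, arrive) + proc[m][j]
              prev = j
          return finish[-1]

      def aco_flowshop(proc, setup, n_ants=20, n_iter=200, rho=0.1, alpha=1.0, beta=2.0, q0=0.9):
          n_jobs = len(proc[0])
          tau = [[1.0] * n_jobs for _ in range(n_jobs)]   # pheromone on job transition i -> j
          eta = [[1.0 / (1.0 + setup[0][i][j]) for j in range(n_jobs)] for i in range(n_jobs)]
          best_seq, best_cmax = None, float("inf")
          for _ in range(n_iter):
              for _ in range(n_ants):
                  current = random.randrange(n_jobs)
                  seq, unvisited = [current], set(range(n_jobs)) - {current}
                  while unvisited:
                      cand = list(unvisited)
                      w = [(tau[current][j] ** alpha) * (eta[current][j] ** beta) for j in cand]
                      if random.random() < q0:                      # exploit the best transition
                          nxt = cand[max(range(len(cand)), key=w.__getitem__)]
                      else:                                         # explore proportionally to weight
                          nxt = random.choices(cand, weights=w, k=1)[0]
                      seq.append(nxt)
                      unvisited.remove(nxt)
                      current = nxt
                  cmax = makespan(seq, proc, setup)
                  if cmax < best_cmax:
                      best_seq, best_cmax = seq[:], cmax
              for i in range(n_jobs):                               # pheromone evaporation
                  for j in range(n_jobs):
                      tau[i][j] *= 1.0 - rho
              for i, j in zip(best_seq, best_seq[1:]):              # reinforce best-so-far sequence
                  tau[i][j] += 1.0 / best_cmax
          return best_seq, best_cmax

      # Tiny random instance: 2 machines, 6 jobs (illustrative data only)
      random.seed(1)
      M, N = 2, 6
      proc = [[random.randint(2, 9) for _ in range(N)] for _ in range(M)]
      setup = [[[random.randint(1, 4) for _ in range(N)] for _ in range(N)] for _ in range(M)]
      print(aco_flowshop(proc, setup))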

  15. Current status of GALS setup in JINR

    Energy Technology Data Exchange (ETDEWEB)

    Zemlyanoy, S., E-mail: zemlya@jinr.ru; Avvakumov, K., E-mail: kavvakumov@jinr.ru [Joint Institute for Nuclear Research, Flerov Laboratory of Nuclear Reactions (Russian Federation); Fedosseev, V. [CERN (Switzerland); Bark, R. [Nat. Research Foundation, iThemba LABS (South Africa); Blazczak, Z. [A. Mickiewicz University, Faculty of Physics (Poland); Janas, Z. [University of Warsaw, Faculty of Physics (Poland)

    2017-11-15

    This is a brief report on the current status of the new GAs cell based Laser ionization Setup (GALS) at the Flerov Laboratory of Nuclear Reactions (FLNR) of the Joint Institute for Nuclear Research (JINR) in Dubna. GALS will exploit available beams from the U-400M cyclotron in low energy multi-nucleon transfer reactions to study exotic neutron-rich nuclei located in the “north-east” region of the nuclear map. Products from 4.5 to 9 MeV/nucleon heavy-ion collisions, such as ¹³⁶Xe on ²⁰⁸Pb, are thermalized and neutralized in a high pressure gas cell and subsequently selectively laser re-ionized. In order to choose the best scheme of ion extraction, the results of computer simulations of two different systems are presented. The first off- and online experiments will be performed on osmium, which is regarded as the most convenient element for producing isotopes with neutron numbers in the vicinity of the magic N = 126.

  16. Status On Multi-microsecond Prepulse Technique On Sphinx Machine Going From Nested To Single Wire Array For 800 ns Implosion Time Z-pinch

    Science.gov (United States)

    Maury, P.; Calamy, H.; Grunenwald, J.; Lassalle, F.; Zucchini, F.; Loyen, A.; Georges, A.; Morell, A.; Bedoch, J. P.

    2009-01-01

    The Sphinx machine [1] is a 6 MA, 1 μs driver based on LTD technology, used for Z-pinch experiments. Important improvements of the Sphinx radiation output were recently obtained using a multi-microsecond current prepulse [2]. The total power per unit length is multiplied by a factor of 6 and the FWHM divided by a factor of 2.5. Early breakdown of the wires during the prepulse phase dramatically changes the ablation phase, leading to an improvement of the axial homogeneity of both the implosion and the final radiating column. As a consequence, the cathode bubble observed on classical shots is definitively removed. The implosion is then centered and the zippering effect is reduced, leading to simultaneous x-ray emission over the whole length. Great reproducibility is obtained. Nested arrays were used before to mitigate the Rayleigh-Taylor instabilities during the implosion phase. Further experiments with the prepulse technique, in which the inner array was removed, are described here. The goal of these experiments was to see whether a long prepulse could give a sufficiently stable implosion with a single array and at the same time increase the η parameter by reducing the mass of the load. Experimental results are given for single wire array loads of typical dimension 5 cm in height, with implosion times between 700 and 900 ns and diameters varying between 80 and 140 mm. The parameters of the loads were varied in terms of radius and number of wires. Comparisons with nested wire array loads are made and trends are proposed. Characteristics of both the implosion and the final radiating column are shown. 2D MHD numerical simulations of the single wire array become easier as there is no longer any interaction between outer and inner arrays. A systematic study was done using an injection mass model to benchmark simulations against experiments.

  17. Status On Multi-microsecond Prepulse Technique On Sphinx Machine Going From Nested To Single Wire Array For 800 ns Implosion Time Z-pinch

    International Nuclear Information System (INIS)

    Maury, P.; Calamy, H.; Grunenwald, J.; Lassalle, F.; Zucchini, F.; Loyen, A.; Georges, A.; Morell, A.; Bedoch, J. P.

    2009-01-01

    The Sphinx machine [1] is a 6 MA, 1 μs driver based on LTD technology, used for Z-pinch experiments. Important improvements of the Sphinx radiation output were recently obtained using a multi-microsecond current prepulse [2]. The total power per unit length is multiplied by a factor of 6 and the FWHM divided by a factor of 2.5. Early breakdown of the wires during the prepulse phase dramatically changes the ablation phase, leading to an improvement of the axial homogeneity of both the implosion and the final radiating column. As a consequence, the cathode bubble observed on classical shots is definitively removed. The implosion is then centered and the zippering effect is reduced, leading to simultaneous x-ray emission over the whole length. Great reproducibility is obtained. Nested arrays were used before to mitigate the Rayleigh-Taylor instabilities during the implosion phase. Further experiments with the prepulse technique, in which the inner array was removed, are described here. The goal of these experiments was to see whether a long prepulse could give a sufficiently stable implosion with a single array and at the same time increase the η parameter by reducing the mass of the load. Experimental results are given for single wire array loads of typical dimension 5 cm in height, with implosion times between 700 and 900 ns and diameters varying between 80 and 140 mm. The parameters of the loads were varied in terms of radius and number of wires. Comparisons with nested wire array loads are made and trends are proposed. Characteristics of both the implosion and the final radiating column are shown. 2D MHD numerical simulations of the single wire array become easier as there is no longer any interaction between outer and inner arrays. A systematic study was done using an injection mass model to benchmark simulations against experiments.

  18. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics, such as: single and multiple point cutting processes; grinding; components accuracy and metrology; shear stress in cutting; cutting temperature and analysis; chatter. They also address non-traditional machining, such as: electrical discharge machining; electrochemical machining; laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  19. Machine Protection

    International Nuclear Information System (INIS)

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an interlock system providing the glue between these systems. The most recent accelerator, the LHC, will operate with about 3 × 10¹⁴ protons per beam, corresponding to an energy stored in each beam of 360 MJ. This energy can cause massive damage to accelerator equipment in case of uncontrolled beam loss, and a single accident damaging vital parts of the accelerator could interrupt operation for years. This article provides an overview of the requirements for protection of accelerator equipment and introduces the various protection systems. Examples are mainly from LHC, SNS and ESS

  20. A Modernized UDM-600 Dynamometer-Based Setup for the Cutting Force Measurement

    Directory of Open Access Journals (Sweden)

    Ya. I. Shuliak

    2016-01-01

    Full Text Available The article considers the development of a modernized UDM-600 dynamometer-based setup for measuring the cutting force components. Modernizing the existing equipment to improve the method of recording the cutting force components in an automated mode is a relevant task. The measuring setup allows recording of the cutting force components in turning and milling, as well as of the axial force and the torque in drilling and milling operations. The article presents a block diagram and a schematic diagram of the setup for measuring the cutting force components, and describes the basic principle of the measuring units within the modernized setup. The developed setup uses a half-bridge strain gauge measuring circuit to record the cutting forces. To enhance the output voltage of the measuring circuit, a 16-channel LA-UN16 amplifier with a discretely adjustable gain is used. To record and process the electrical signals, an NI USB-6009 data acquisition device is used, which transmits the received data to a PC via a USB interface. The data acquisition device has a built-in stabilized DC power supply that is used to power the strain gauge bridges. The developed schematic diagram of the measuring setup makes it possible to realize this measuring device and implement its modernization. Final processing of the recorded data is provided through software developed in the visual programming environment LabVIEW 9.0. The program shows the measured values of the cutting force components graphically in real time and records the acquired data to a text file. The modernization of the measuring setup has increased measurement accuracy and reduced the time for processing and analysis of the experimental data obtained when measuring the cutting force components. The MT2 Department of BMSTU uses the setup in education and research activities, in experimental work and in laboratory classes.
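
    The record above describes converting strain-gauge bridge voltages, amplified and digitized by a DAQ device, into cutting force components. The short sketch below illustrates only the final conversion step under an assumed linear calibration; the sensitivity coefficients, gain value, and synthetic data are hypothetical, and the original setup performs this processing in LabVIEW.

      import numpy as np

      # Assumed linear calibration: force [N] = (logged voltage [V] / amplifier gain) * sensitivity [N/V]
      SENSITIVITY_N_PER_V = {"Fx": 520.0, "Fy": 515.0, "Fz": 980.0}   # hypothetical coefficients
      AMPLIFIER_GAIN = 100.0                                          # hypothetical gain setting

      def voltages_to_forces(volts, channels=("Fx", "Fy", "Fz")):
          """volts: array of shape (n_samples, 3) as logged by the data acquisition device."""
          forces = np.empty_like(volts, dtype=float)
          for k, ch in enumerate(channels):
              forces[:, k] = volts[:, k] / AMPLIFIER_GAIN * SENSITIVITY_N_PER_V[ch]
          return forces

      # Stand-in for a logged cutting test (the real setup writes a text file from LabVIEW)
      volts = np.random.default_rng(0).normal([40.0, 25.0, 60.0], 2.0, size=(5000, 3))
      forces = voltages_to_forces(volts)
      print("mean cutting force components [N]:", forces.mean(axis=0).round(1))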

  1. Simple optical setup implementation for digital Fourier transform holography

    Energy Technology Data Exchange (ETDEWEB)

    De Oliveira, G N [Pos-graduacao em Engenharia Mecanica, TEM/PGMEC, Universidade Federal Fluminense, Rua Passo da Patria, 156, Niteroi, R.J., Cep.: 24.210-240 (Brazil); Rodrigues, D M C; Dos Santos, P A M, E-mail: pams@if.uff.br [Instituto de Fisica, Laboratorio de Optica Nao-linear e Aplicada, Universidade Federal Fluminense, Av. Gal. Nilton Tavares de Souza, s/n, Gragoata, Niteroi, R.J., Cep.:24.210-346 (Brazil)

    2011-01-01

    In the present work a simple implementation of a Digital Fourier Transform Holography (DFTH) setup is discussed. This is obtained by making a very simple modification to the classical Fourier transform holography setup architecture. The easy and practical viability of the setup is also demonstrated in an interferometric application for the determination of mechanical parameters. The work is also proposed as an interesting advanced introductory training in digital holography for graduate students.

  2. Design of a 10 MJ fast discharging homopolar machine

    International Nuclear Information System (INIS)

    Stillwagon, R.E.; Thullen, P.

    1977-01-01

    The design of a fast discharging homopolar machine is described. The machine capacity is 10 MJ with a 30 ms energy delivery time. The salient features of the machine are relatively high terminal voltage, fast discharge time, high power density and high efficiency. The machine integrates several new technologies including high surface speeds, large superconducting magnets and current collection at high density

  3. Machine terms dictionary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1979-04-15

    This book gives descriptions of machine terms, covering machine design, drawing, machining methods, machine tools, machine materials, automobiles, measurement and control, electricity, basics of electronics, information technology, quality assurance, AutoCAD and FA terms, and important formulas of mechanical engineering.

  4. The compact and inexpensive arrowhead setup for holographic interferometry

    Energy Technology Data Exchange (ETDEWEB)

    Ladera, Celso L; Donoso, Guillermo, E-mail: clladera@usb.v [Departamento de Fisica, Universidad Simon BolIvar, Apdo. 89000, Caracas 1086 (Venezuela, Bolivarian Republic of)

    2011-07-15

    Hologram recording and holographic interferometry are intrinsically sensitive to phase changes, and therefore both are easily perturbed by minuscule optical path perturbations. It is therefore very convenient to bank on holographic setups with a reduced number of optical components. Here we present a compact off-axis holographic setup that requires neither a collimator nor a beam-splitter, and whose layout is reminiscent of an arrowhead. We show that this inexpensive setup is a good alternative for the study and applications of scientific holography by measuring small displacements and deformations of a body. The arrowhead setup will be found particularly useful for holography and holographic interferometry experiments and projects in teaching laboratories.

  5. Report on the set-up of a holographic interferometer

    International Nuclear Information System (INIS)

    Koster, J.N.

    1977-10-01

    Holographic interferometry is well suited for visualizing temperature, density, pressure and concentration fields in transparent fluids. The holographic real-time interferometer allows continuous observation of stationary and unsteady flow processes. After the explanation of the measuring technique, the problems arising during the interferometer set-up as well as the necessary adjusting operations are described. For heat transfer problems new possibilities for the application of holographic interferometry are revealed. Convection in boxes, temperature fields around heated or cooled bodies, concentration and diffusion processes in two-phase flows, mixtures and solutions, as well as melting and freezing processes may be investigated. On the basis of particular examples some applications are presented. (orig.) [de]

  6. The live cell irradiation and observation setup at SNAKE

    Energy Technology Data Exchange (ETDEWEB)

    Hable, V. [Angewandte Physik und Messtechnik LRT2, UniBw-Muenchen, 85577 Neubiberg (Germany)], E-mail: volker.hable@unibw.de; Greubel, C.; Bergmaier, A.; Reichart, P. [Angewandte Physik und Messtechnik LRT2, UniBw-Muenchen, 85577 Neubiberg (Germany); Hauptner, A.; Kruecken, R. [Physik Department E12, TU-Muenchen, 85748 Garching (Germany); Strickfaden, H.; Dietzel, S.; Cremer, T. [Department Biologie II, LMU-Muenchen, 82152 Martinsried (Germany); Drexler, G.A.; Friedl, A.A. [Strahlenbiologisches Institut, LMU-Muenchen, 80336 Muenchen (Germany); Dollinger, G. [Angewandte Physik und Messtechnik LRT2, UniBw-Muenchen, 85577 Neubiberg (Germany)

    2009-06-15

    We describe a new setup at the ion microprobe SNAKE (Superconducting Nanoscope for Applied nuclear (Kern-) physics Experiments) at the Munich 14 MV Tandem accelerator that facilitates both living cell irradiation with sub-micrometer resolution and online optical imaging of the cells before and after irradiation by state-of-the-art phase contrast and fluorescence microscopy. The cells are kept at standard cell growth conditions at 37 °C in cell culture medium. After irradiation it is possible to switch from single ion irradiation conditions to cell observation within 0.5 s. First experiments were performed in which substructures of a cell nucleus, tagged by TexasRed-labeled nucleotides incorporated in the cellular DNA, were targeted by single 55 MeV carbon ions. In addition we show first online sequences of the short-time kinetics of Mdc1 protein accumulation in the vicinity of double-strand breaks after carbon ion irradiation.

  7. The new cold neutron tomography set-up at SINQ

    CERN Document Server

    Baechler, S; Cauwels, P; Dierick, M; Jolie, J; Materna, T; Mondelaers, W

    2002-01-01

    A new cold neutron tomography set-up is operational at the neutron spallation source SINQ of the Paul Scherrer Institute (PSI) in Villigen, Switzerland. The detection system is based on a ⁶LiF/ZnS:Ag conversion screen and a CCD camera. Several tests have been carried out to characterize the quality of the tomography system, such as homogeneity, reproducibility, L/D-ratio and spatial resolution. The high flux and the good efficiency of the detector lead to very short exposure times. Thus, a typical set of tomography scans can be performed in only 20 min. Then, 3D computed tomography objects were calculated using the filtered back-projection reconstruction method. Initial results of various samples show that cold neutron tomography can be a useful tool for industry, geology and dentistry. Furthermore, suitable applications can be found in the field of archaeology.

  8. Magnetic spectrometer of the DEUTERON-2 set-up

    International Nuclear Information System (INIS)

    Ajvazyan, R.V.; Alanakyan, K.V.; Amaryan, M.J.

    1989-01-01

    A magnetic spectrometer of the two-arm DEUTERON-2 set-up of the Erevan Physical Institute is described. It is shown that the rejection factor for electrons and pions is 10⁻²–10⁻³. The positively charged particles in the momentum range up to 1.5 GeV/c are identified by momentum and time-of-flight measurements. The main characteristics of the spectrometer are: momentum and angular acceptance δp/p = 46%, Δθ = 4 deg, solid angle ΔΩ = 2.75 msr, momentum resolution δp/p = 1.5%, angular resolutions δθ = 0.6 deg, δφ = 2 deg. The intervals of measured momentum and the polar scattering angle are 0.5-3 GeV/c and 10-30 deg, 68-90 deg respectively. 7 refs.; 11 figs

  9. Characterization of a neutron imaging setup at the INES facility

    Science.gov (United States)

    Durisi, E. A.; Visca, L.; Albertin, F.; Brancaccio, R.; Corsi, J.; Dughera, G.; Ferrarese, W.; Giovagnoli, A.; Grassi, N.; Grazzi, F.; Lo Giudice, A.; Mila, G.; Nervo, M.; Pastrone, N.; Prino, F.; Ramello, L.; Re, A.; Romero, A.; Sacchi, R.; Salvemini, F.; Scherillo, A.; Staiano, A.

    2013-10-01

    The Italian Neutron Experimental Station (INES) located at the ISIS pulsed neutron source (Didcot, United Kingdom) provides a thermal neutron beam mainly used for diffraction analysis. A neutron transmission imaging system was also developed for beam monitoring and for aligning the sample under investigation. Although the time-of-flight neutron diffraction is a consolidated technique, the neutron imaging setup is not yet completely characterized and optimized. In this paper the performance for neutron radiography and tomography at INES of two scintillator screens read out by two different commercial CCD cameras is compared in terms of linearity, signal-to-noise ratio, effective dynamic range and spatial resolution. In addition, the results of neutron radiographies and a tomography of metal alloy test structures are presented to better characterize the INES imaging capabilities of metal artifacts in the cultural heritage field.
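
    The comparison above rests on figures of merit such as signal-to-noise ratio and effective dynamic range estimated from calibration images. The sketch below illustrates one common way such quantities are computed from flat-field and dark-frame stacks; it is not the authors' procedure, and the synthetic image values and the chosen definitions are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      shape = (512, 512)

      # Stand-in image stacks: 10 flat-field and 10 dark frames from a 16-bit CCD
      flats = rng.normal(30000.0, 180.0, size=(10, *shape))
      darks = rng.normal(400.0, 20.0, size=(10, *shape))

      signal = flats.mean(axis=0) - darks.mean(axis=0)   # dark-corrected mean signal per pixel
      noise = flats.std(axis=0, ddof=1)                  # per-pixel temporal noise

      snr = np.divide(signal, noise, out=np.zeros_like(signal), where=noise > 0)
      print("median SNR:", round(float(np.median(snr)), 1))

      # One common definition of effective dynamic range: usable signal span over the noise floor
      full_scale = 2 ** 16 - 1
      dyn_range = (full_scale - darks.mean()) / noise.mean()
      print("effective dynamic range (assumed definition):", round(float(dyn_range), 1))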

  10. Operation and machine studies

    International Nuclear Information System (INIS)

    1992-01-01

    This annual report describes the GANIL (Grand accelerateur national d'ions lourds, Caen, France) operation and the machine studies carried out in 1992. Metallic ions have been accelerated during 36% of the time; some were produced for the first time at GANIL: ¹²⁵Te, ⁵²Cr with ECR3, ¹⁸¹Ta with ECR4. The various machine studies are: comparison of lifetimes of carbon sheets, charge exchange of very heavy ions in carbon foils and in the residual gas of the Ganil cyclotrons, commissioning of the new high intensity axial injection system for Ganil, tantalum acceleration with the new injector, a cyclotron as a mass spectrometer; other studies concerned: implementing the new control system, gettering flux measurement, energy deposited by neutrons and gamma rays in the cryogenic system of SISSI; latest developments on multicharged ECR ion sources, and an on-line isotopic separator test bench at Ganil

  11. The achievements of the Z-machine

    International Nuclear Information System (INIS)

    Larousserie, D.

    2008-01-01

    The ZR-machine, which represents the latest generation of Z-pinch machines, has recently begun preliminary testing before its full commissioning in Albuquerque (USA). During its tests the machine has operated well with electrical currents of 26 million amperes, already twice the operating current of the previous Z-machine. In 2006 the Z-machine reached temperatures of 2 billion kelvin, while 100 million kelvin would be sufficient to ignite thermonuclear fusion. In fact, the concept of Z-pinch machines was conceived in the fifties, but the technological breakthrough that allowed this recent success and the rebirth of the Z-machine was the replacement of the gas by an array of metal wires through which the electrical current flows, vaporizing it and creating an imploding plasma. It is not well understood why Z-pinch machines generate far more radiation than theoretically expected. (A.C.)

  12. High-resolution continuous flow analysis setup for water isotopic measurement from ice cores using laser spectroscopy

    Science.gov (United States)

    Emanuelsson, B. D.; Baisden, W. T.; Bertler, N. A. N.; Keller, E. D.; Gkinis, V.

    2014-12-01

    Here we present an experimental setup for continuous flow measurements of water stable isotopes (δ18O and δD). It is the first continuous flow laser spectroscopy system that uses Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS; analyzer manufactured by Los Gatos Research - LGR) in combination with an evaporation unit to continuously analyze sample from an ice core. A Water Vapor Isotopic Standard Source (WVISS) calibration unit, manufactured by LGR, was modified to: (1) increase the temporal resolution by reducing the response time, (2) enable measurements on several water standards, and (3) reduce the influence from memory effects. While this setup was designed for the Continuous Flow Analysis (CFA) of ice cores, it can also continuously analyze other liquid or vapor sources. The modified setup provides a shorter response time (~54 and 18 s for the 2013 and 2014 setups, respectively) compared to the original WVISS unit (~62 s), which is an improvement in measurement resolution. Another improvement compared to the original WVISS is that the modified setup has a reduced memory effect. Stability tests comparing the modified WVISS and WVISS setups were performed and Allan deviations (σAllan) were calculated to determine precision at different averaging times. For the 2013 modified setup the precision after integration times of 10³ s is 0.060 and 0.070‰ for δ18O and δD, respectively. For the WVISS setup the corresponding σAllan values are 0.030, 0.060 and 0.043‰ for δ18O, δD and δ17O, respectively. For the WVISS setup the precision is 0.035, 0.070 and 0.042‰ after 10³ s for δ18O, δD and δ17O, respectively. Both the modified setups and the WVISS setup are influenced by instrumental drift, with δ18O being more drift sensitive than δD. The σAllan values for δ18O are 0.30 and 0.18‰ for the modified (2013) and WVISS setups, respectively, after averaging times of 10⁴ s (2.78 h). The Isotopic Water Analyzer (IWA)-modified WVISS setup used during the
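
    The precision figures quoted above come from Allan deviations computed on long measurements of a constant standard. As a rough illustration of that calculation, the sketch below implements the standard non-overlapping Allan deviation estimator; the synthetic 1 Hz record and its noise/drift levels are assumptions, not the published data.

      import numpy as np

      def allan_deviation(x, m):
          """Non-overlapping Allan deviation of series x for a block size of m samples."""
          n_blocks = len(x) // m
          if n_blocks < 2:
              raise ValueError("series too short for this averaging time")
          block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
          return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

      # Stand-in record: 6 hours of 1 Hz delta values on a constant standard,
      # white noise plus a slow drift (roughly mimicking the behaviour described).
      rng = np.random.default_rng(0)
      t = np.arange(6 * 3600)
      d18O = rng.normal(0.0, 0.2, size=t.size) + 1e-5 * t

      for tau in (10, 100, 1000, 10000):        # averaging times in seconds at 1 Hz sampling
          print(f"sigma_Allan({tau} s) = {allan_deviation(d18O, tau):.3f} permil")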

  13. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)
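
    Studies of this kind typically condense daily shift data into a population systematic error (spread of per-patient means) and a random error (RMS of per-patient spreads), which can then feed a margin recipe. The sketch below illustrates that bookkeeping with the widely used van Herk 2.5Σ + 0.7σ formula; the abstract does not state which recipe the authors applied, and the input array here is hypothetical.

      import numpy as np

      # displacements_mm[p][f] = daily MVCT shift (one axis) for patient p, fraction f (synthetic)
      displacements_mm = np.random.default_rng(0).normal(1.0, 3.0, size=(30, 25))

      patient_means = displacements_mm.mean(axis=1)
      patient_sds = displacements_mm.std(axis=1, ddof=1)

      group_mean = patient_means.mean()              # overall systematic offset M
      Sigma = patient_means.std(ddof=1)              # population systematic error
      sigma = np.sqrt(np.mean(patient_sds ** 2))     # population random error

      margin = 2.5 * Sigma + 0.7 * sigma             # van Herk CTV-to-PTV recipe (assumed choice)
      print(f"M={group_mean:.1f} mm  Sigma={Sigma:.1f} mm  sigma={sigma:.1f} mm  margin={margin:.1f} mm")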

  14. The experimental setup for studying the molecular composition of nanoscale films and coatings

    International Nuclear Information System (INIS)

    Turiev A M; Butkhuzi T G; Ramonova A G; Magkoev T T; Tsidaeva N I

    2011-01-01

    A measurement method and the design of an experimental setup are presented that allow the flow of particles from the surface of organic films to be monitored during annealing by pulsed laser radiation. The method is based on the TOF (time-of-flight) principle of detecting particles desorbed from the surface by the laser pulses used for annealing. The principle of registration and the structure (block scheme) of the experimental setup and its constituent parts are detailed in the work. The setup consists of the analytical part, the laser irradiation system and a computer measurement system. The basis of the analytical part of the installation is a TOF mass spectrometer of original construction.

  15. Why are Some Games More Addictive than Others: The Effects of Timing and Payoff on Perseverance in a Slot Machine Game.

    Science.gov (United States)

    James, Richard J E; O'Malley, Claire; Tunney, Richard J

    2016-01-01

    Manipulating different behavioral characteristics of gambling games can potentially affect the extent to which individuals persevere at gambling, and their transition to problematic behaviors. This has potential impact for mobile gambling technologies and responsible gambling interventions. Two laboratory models pertinent to this are the partial reinforcement extinction effect (PREE) and the trial spacing effect. Both of these might speed up or delay the acquisition and extinction of conditioned behavior. We report an experiment that manipulated the rate of reinforcement and inter trial interval (ITI) on a simulated slot machine where participants were given the choice between gambling and skipping on each trial, before perseverative gambling was measured in extinction, followed by measurements of the illusion of control, depression and impulsivity. We hypothesized that longer ITI's in conjunction with the low rates of reinforcement observed in gambling would lead to greater perseverance. We further hypothesized, given that timing is known to be important in displaying illusory control and potentially in persevering in gambling, that prior exposure to longer intervals might affect illusions of control. An interaction between ITI and rate of reinforcement was observed, as low reinforced gamblers with a long ITI gambled for longer. Respondents also displayed extinction and a PREE. Gamblers exposed to a higher rate of reinforcement gambled for longer in acquisition. Impulsivity was associated with extended perseverance in extinction, and more depressed gamblers in the high reinforcement short ITI group persevered for longer. Performance in the contingency judgment failed to support the second hypothesis: the only significant contrast observed was that participants became better calibrated as the task progressed.

  16. Why are some games more addictive than others: The effects of timing and payoff on perseverance in a slot machine game.

    Directory of Open Access Journals (Sweden)

    Richard J. E. eJames

    2016-02-01

    Full Text Available Manipulating different behavioral characteristics of gambling games can potentially affect the extent to which individuals persevere at gambling, and their transition to problematic behaviors. This has potential impact for mobile gambling technologies and responsible gambling interventions. Two laboratory models pertinent to this are the partial reinforcement extinction effect and the trial spacing effect. Both of these might speed up or delay the acquisition and extinction of conditioned behaviour. We report an experiment that manipulated the rate of reinforcement and inter trial interval (ITI on a simulated slot machine where participants were given the choice between gambling and skipping on each trial, before perseverative gambling was measured in extinction, followed by measurements of the illusion of control, depression and impulsivity. We hypothesized that longer ITI’s in conjunction with the low rates of reinforcement observed in gambling would lead to greater perseverance. We further hypothesized, given that timing is known to be important in displaying illusory control and potentially in persevering in gambling, that prior exposure to longer intervals might affect illusions of control. An interaction between ITI and rate of reinforcement was observed, as low reinforced gamblers with a long ITI gambled for longer. Respondents also displayed extinction and a partial reinforcement extinction effect. Gamblers exposed to a higher rate of reinforcement gambled for longer in acquisition. Impulsivity was associated with extended perseverance in extinction, and more depressed gamblers in the high reinforcement short ITI group persevered for longer. Performance in the contingency judgement failed to support the second hypothesis: the only significant contrast observed was that participants became better calibrated as the task progressed.

  17. Analysis of Daily Setup Variation With Tomotherapy Megavoltage Computed Tomography

    International Nuclear Information System (INIS)

    Zhou Jining; Uhl, Barry; Dewit, Kelly; Young, Mark; Taylor, Brian; Fei Dingyu; Lo, Y-C

    2010-01-01

    The purpose of this study was to evaluate different setup uncertainties for various anatomic sites with TomoTherapy® pretreatment megavoltage computed tomography (MVCT) and to provide optimal margin guidelines for these anatomic sites. Ninety-two patients with tumors in head and neck (HN), brain, lung, abdominal, or prostate regions were included in the study. MVCT was used to verify patient position and tumor target localization before each treatment. With the anatomy registration tool, MVCT provided real-time tumor shift coordinates relative to the positions where the simulation CT was performed. Thermoplastic facemasks were used for HN and brain treatments. Vac-Lok™ cushions were used to immobilize the lower extremities up to the thighs for prostate patients. No respiration suppression was administered for lung and abdomen patients. The interfractional setup variations were recorded and corrected before treatment. The mean interfractional setup error was the smallest for HN among the 5 sites analyzed. The average 3D displacement in lateral, longitudinal, and vertical directions for the 5 sites ranged from 2.2-7.7 mm for HN and lung, respectively. The largest movement in the lung was 2.0 cm in the longitudinal direction, with a mean error of 6.0 mm and standard deviation of 4.8 mm. The mean interfractional rotation variation was small and ranged from 0.2-0.5 deg., with the standard deviation ranging from 0.7-0.9 deg. Internal organ displacement was also investigated with a posttreatment MVCT scan for HN, lung, abdomen, and prostate patients. The maximum 3D intrafractional displacement across all sites was less than 4.5 mm. The interfractional systematic errors and random errors were analyzed and the suggested margins for HN, brain, prostate, abdomen, and lung in the lateral, longitudinal, and vertical directions were between 4.2 and 8.2 mm, 5.0 mm and 12.0 mm, and 1.5 mm and 6.8 mm, respectively. We suggest that TomoTherapy® pretreatment

  18. Analysis of daily setup variation with tomotherapy megavoltage computed tomography.

    Science.gov (United States)

    Zhou, Jining; Uhl, Barry; Dewit, Kelly; Young, Mark; Taylor, Brian; Fei, Ding-Yu; Lo, Yeh-Chi

    2010-01-01

    The purpose of this study was to evaluate different setup uncertainties for various anatomic sites with TomoTherapy pretreatment megavoltage computed tomography (MVCT) and to provide optimal margin guidelines for these anatomic sites. Ninety-two patients with tumors in head and neck (HN), brain, lung, abdominal, or prostate regions were included in the study. MVCT was used to verify patient position and tumor target localization before each treatment. With the anatomy registration tool, MVCT provided real-time tumor shift coordinates relative to the positions where the simulation CT was performed. Thermoplastic facemasks were used for HN and brain treatments. Vac-Lok cushions were used to immobilize the lower extremities up to the thighs for prostate patients. No respiration suppression was administered for lung and abdomen patients. The interfractional setup variations were recorded and corrected before treatment. The mean interfractional setup error was the smallest for HN among the 5 sites analyzed. The average 3D displacement in lateral, longitudinal, and vertical directions for the 5 sites ranged from 2.2-7.7 mm for HN and lung, respectively. The largest movement in the lung was 2.0 cm in the longitudinal direction, with a mean error of 6.0 mm and standard deviation of 4.8 mm. The mean interfractional rotation variation was small and ranged from 0.2-0.5 degrees, with the standard deviation ranging from 0.7-0.9 degrees. Internal organ displacement was also investigated with a posttreatment MVCT scan for HN, lung, abdomen, and prostate patients. The maximum 3D intrafractional displacement across all sites was less than 4.5 mm. The interfractional systematic errors and random errors were analyzed and the suggested margins for HN, brain, prostate, abdomen, and lung in the lateral, longitudinal, and vertical directions were between 4.2 and 8.2 mm, 5.0 mm and 12.0 mm, and 1.5 mm and 6.8 mm, respectively. We suggest that TomoTherapy pretreatment MVCT can be used to

  19. Addiction Machines

    Directory of Open Access Journals (Sweden)

    James Godley

    2011-10-01

    Full Text Available Entry into the crypt William Burroughs shared with his mother opened and shut around a failed re-enactment of William Tell’s shot through the prop placed upon a loved one’s head. The accidental killing of his wife Joan completed the installation of the addictation machine that spun melancholia as manic dissemination. An early encryptment to which was added the audio portion of abuse deposited an undeliverable message in WB. William could never tell, although his corpus bears the inscription of this impossibility as another form of possibility. James Godley is currently a doctoral candidate in English at SUNY Buffalo, where he studies psychoanalysis, Continental philosophy, and nineteenth-century literature and poetry (British and American). His work on the concept of mourning and “the dead” in Freudian and Lacanian approaches to psychoanalytic thought and in Gothic literature has also spawned an essay on zombie porn. Since entering the Academy of Fine Arts Karlsruhe in 2007, Valentin Hennig has studied in the classes of Silvia Bächli, Claudio Moser, and Corinne Wasmuht. In 2010 he spent a semester at the Dresden Academy of Fine Arts. His work has been shown in group exhibitions in Freiburg and Karlsruhe.

  20. Preliminary Test of Upgraded Conventional Milling Machine into PC Based CNC Milling Machine

    International Nuclear Information System (INIS)

    Abdul Hafid

    2008-01-01

    CNC (Computerized Numerical Control) milling machines pose a challenge for innovation in the field of machining. To obtain machining quality equivalent to that of a CNC milling machine, a conventional milling machine was upgraded into a PC-based CNC milling machine. Mechanical and instrumentation changes were made: servo drives and proximity sensors were used to replace the original control. A computer program was constructed to give instructions to the milling machine. The program structure consists of a GUI model and a ladder diagram. The program was implemented in a programming system called RTX software. The result of the upgrade is the computer program and CNC instruction capability; this is a first step, and the work will be continued. With the upgraded milling machine, the user can work more optimally and safely with respect to accident risk. (author)

  1. An experimental set-up to test heat-moisture exchangers

    NARCIS (Netherlands)

    N. Ünal (N.); J.C. Pompe (Jan); W.P. Holland (Wim); I. Gultuna; P.E.M. Huygen; K. Jabaaij (K.); C. Ince (Can); B. Saygin (B.); H.A. Bruining (Hajo)

    1995-01-01

    Objectives: The purpose of this study was to build an experimental set-up to assess continuously the humidification, heating and resistance properties of heat-moisture exchangers (HMEs) under clinical conditions. Design: The experimental set-up consists of a patient model, measurement

  2. Set-Up and Punchline as Figure and Ground

    DEFF Research Database (Denmark)

    Keisalo, Marianna Päivikki

    the two that cannot be resolved by appeal to either set-up or punchline, but traps thought between them in an ‘epistemological problem’ as comedian Louis CK put it. For comedians, set-ups and punchlines are basic tools, practical and concrete ways to create and organize material. They are also familiar...

  3. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result also in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid format camera make a professional setup with various calibration and misalignment operations necessary. An important part is to have a proper camera calibration. Using aerial images over a well designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Beside the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy. Important are the lever arms from the GPS antenna to the IMU's calibrated centre and also the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. For that reason, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is floating. In fact we have to deal with an additional data stream, the values of the movement of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and camera can be applied

  4. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result also in the photogrammetric workflow, awareness on the sensitive parts is important. The use of direct referencing systems (GPS/IMU, the mounting on a stabilizing camera platform and the specific values of the mid format camera make a professional setup with various calibration and misalignment operations necessary. An important part is to have a proper camera calibration. Using aerial images over a well designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Beside the mechanical work especially in mounting the camera beside the IMU, 2 lever arms have to be measured in mm accuracy. Important are the lever arms from the GPS Antenna to the IMU's calibrated centre and also the lever arm from the IMU centre to the Camera projection centre. In fact, the measurement with a total station is not a difficult task but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that also smaller aircrafts can be used. Like that, a gyro bases stabilized platform is recommended. This causes, that the IMU must be mounted beside the camera on the stabilizer. The advantage is, that the IMU can be used to control the platform, the problematic thing is, that the IMU to GPS antenna lever arm is floating. In fact we have to deal with an additional data stream, the values of the movement of the stabiliser to correct the floating lever arm distances. If the post-processing of the GPS-IMU data by taking the floating levers into account, delivers an expected result, the lever arms between IMU and

  5. An Integrated Model of Batch Scheduling and Preventive Maintenance Scheduling with the Criteria of Minimizing Holding Cost, Setup Cost, PM Cost and Rework Cost on a Stable Machine

    Directory of Open Access Journals (Sweden)

    Zahedi Zahedi

    2014-06-01

    Full Text Available This study developed a model of batch scheduling that takes machine unavailability into account in order to minimize setup cost, preventive maintenance cost and rework cost on a stable machine. The model is needed to understand the effect of machine unavailability on production runs and on the batch production schedule. The results indicate that the first and last runs will not consist of a single batch. A hypothetical example shows how the developed model and algorithm solve a problem instance.
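
    A minimal sketch of how the cost criteria named above could be totalled for a given batch sequence; the holding-cost model, the cost rates and the batch data are illustrative assumptions, not the paper's formulation.

      # Sketch of a total-cost evaluation for one batch sequence; the holding-cost
      # model, the cost rates and the batch data are illustrative assumptions.
      def total_cost(batches, holding_rate, setup_cost, pm_cost, rework_rate, n_pm):
          holding = 0.0
          completion = 0.0
          for size, unit_time in batches:                  # batches processed in sequence
              completion += size * unit_time
              holding += holding_rate * size * completion  # parts held until batch finishes
          return (holding
                  + setup_cost * len(batches)              # one setup per batch
                  + pm_cost * n_pm                         # preventive maintenance actions
                  + rework_rate * sum(size for size, _ in batches))

      batches = [(40, 0.5), (25, 0.5), (35, 0.5)]          # (batch size, unit processing time)
      print(total_cost(batches, holding_rate=0.02, setup_cost=50.0,
                       pm_cost=120.0, rework_rate=0.3, n_pm=1))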

  6. Quo vadis, Intelligent Machine?

    Directory of Open Access Journals (Sweden)

    Rosemarie Velik

    2010-09-01

    Full Text Available Artificial Intelligence (AI) is a branch of computer science concerned with making computers behave like humans. At least this was the original idea. However, it turned out that this is no easy task to solve. This article aims to give a comprehensible review of the last 60 years of artificial intelligence from a philosophical viewpoint. It outlines what has happened so far in AI, what is currently going on in this research area, and what can be expected in the future. The goal is to convey an understanding of the developments and the changes in thinking, over the course of time, about how to achieve machine intelligence. The clear message is that AI has to join forces with neuroscience and other brain disciplines in order to make a step towards the development of truly intelligent machines.

  7. A new beam diagnostic system for the MASHA setup

    International Nuclear Information System (INIS)

    Motycak, S.; Kamas, D.; Rodin, A.M.; Novoselov, A.S.; Podshibyakin, A.V.; Belozerov, A.V.; Vedeneyev, V.Yu.; Gulyaev, A.V.; Gulyaeva, A.V.; Salamatin, V.S.; Stepantsov, S.V.; Chernysheva, E.V.; Yukhimchuk, S.A.; Komarov, A.B.; Krupa, L.; Kliman, J.

    2016-01-01

    A new beam diagnostic system based on the PXI standard was developed, tested, and used in the MASHA setup experiment. The beam energy and beam current measurements were carried out using several methods. The online time-of-flight energy measurements were carried out using three pick-up detectors. We used two electronic systems to measure the time between the pick-ups. The first system was based on fast Agilent digitizers (2-channel, 4-GHz sampling rate), and the second one was based on a constant fraction discriminator (CFD) connected to a time-to-digital converter (TDC, 5-ps resolution). A new graphical interface to monitor the electronic devices and to perform the online calculations of energy was developed using MFC C++. The second system based on microchannel plate (time-of-flight) and silicon detectors for the determination of beam energy and the type of accelerated particles was also used. The beam current measurements were carried out with two different sensors. The first sensor is a rotating Faraday cup placed in front of the target, and the second one is an emission detector installed at the rear of the target. This system is now used in experiments for the synthesis of superheavy elements at the U400M cyclotron of the Flerov Laboratory of Nuclear Reactions (FLNR).
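
    As a rough illustration of the online time-of-flight energy calculation, the sketch below converts the flight time between two pick-ups into a non-relativistic kinetic energy; the flight path, flight time and ion species are invented values, not MASHA parameters.

      # Sketch: non-relativistic kinetic energy from the flight time between two
      # pick-up detectors; flight path, time and ion mass are invented values.
      AMU_MEV = 931.494          # atomic mass unit in MeV/c^2
      C_M_PER_NS = 0.299792458   # speed of light in m/ns

      def kinetic_energy_mev(flight_path_m, flight_time_ns, mass_amu):
          beta = (flight_path_m / flight_time_ns) / C_M_PER_NS
          return 0.5 * mass_amu * AMU_MEV * beta ** 2   # T = 1/2 m v^2

      # e.g. a 48Ca-like ion crossing a 2.0 m flight path in 62 ns (~5.4 MeV/u)
      print(kinetic_energy_mev(2.0, 62.0, 48.0))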

  8. A new beam diagnostic system for the MASHA setup

    Science.gov (United States)

    Motycak, S.; Rodin, A. M.; Novoselov, A. S.; Podshibyakin, A. V.; Krupa, L.; Belozerov, A. V.; Vedeneyev, V. Yu.; Gulyaev, A. V.; Gulyaeva, A. V.; Kliman, J.; Salamatin, V. S.; Stepantsov, S. V.; Chernysheva, E. V.; Yuchimchuk, S. A.; Komarov, A. B.; Kamas, D.

    2016-09-01

    A new beam diagnostic system based on the PXI standard was developed, tested, and used in the MASHA setup experiment. The beam energy and beam current measurements were carried out using several methods. The online time-of-flight energy measurements were carried out using three pick-up detectors. We used two electronic systems to measure the time between the pick-ups. The first system was based on fast Agilent digitizers (2-channel, 4-GHz sampling rate), and the second one was based on a constant fraction discriminator (CFD) connected to a time-to-digital converter (TDC, 5-ps resolution). A new graphical interface to monitor the electronic devices and to perform the online calculations of energy was developed using MFC C++. The second system based on microchannel plate (time-of-flight) and silicon detectors for the determination of beam energy and the type of accelerated particles was also used. The beam current measurements were carried out with two different sensors. The first sensor is a rotating Faraday cup placed in front of the target, and the second one is an emission detector installed at the rear of the target. This system is now used in experiments for the synthesis of superheavy elements at the U400M cyclotron of the Flerov Laboratory of Nuclear Reactions (FLNR).

  9. Quantifying Appropriate PTV Setup Margins: Analysis of Patient Setup Fidelity and Intrafraction Motion Using Post-Treatment Megavoltage Computed Tomography Scans

    International Nuclear Information System (INIS)

    Drabik, Donata M.; MacKenzie, Marc A.; Fallone, Gino B.

    2007-01-01

    Purpose: To present a technique that can be implemented in-house to evaluate the efficacy of immobilization and image-guided setup of patients with different treatment sites on helical tomotherapy. This technique uses an analysis of alignment shifts between kilovoltage computed tomography and post-treatment megavoltage computed tomography images. The determination of the shifts calculated by the helical tomotherapy software for a given site can then be used to define appropriate planning target volume internal margins. Methods and Materials: Twelve patients underwent post-treatment megavoltage computed tomography scans on a helical tomotherapy machine to assess patient setup fidelity and net intrafraction motion. Shifts were studied for the prostate, head and neck, and glioblastoma multiforme. Analysis of these data was performed using automatic and manual registration of the kilovoltage computed tomography and post-treatment megavoltage computed tomography images. Results: The shifts were largest for the prostate, followed by the head and neck, with glioblastoma multiforme having the smallest shifts in general. It appears that it might be more appropriate to use asymmetric planning target volume margins. Each margin value reported is equal to two standard deviations of the average shift in the given direction. Conclusion: This method could be applied using individual patient post-treatment image scanning and combined with adaptive planning to reduce or increase the margins as appropriate.
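
    A minimal sketch of the margin rule quoted above (two standard deviations of the registration shifts per direction); the shift values are invented for illustration.

      # Sketch of the quoted margin rule (margin = 2 standard deviations of the
      # shifts per direction); the shift values are invented for illustration.
      import statistics

      shifts_mm = {
          "left-right":         [1.2, -0.4, 0.8, 2.1, -1.0, 0.5, 1.7],
          "anterior-posterior": [2.5, 3.1, 1.9, 2.8, 3.6, 2.2, 2.9],
          "superior-inferior":  [0.3, -0.2, 0.6, 0.1, -0.5, 0.4, 0.0],
      }

      for direction, values in shifts_mm.items():
          margin = 2.0 * statistics.stdev(values)
          print(f"{direction}: margin = {margin:.1f} mm")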

  10. Phase measuring deflectometry. An improved setup for measuring CTA mirror facets

    Energy Technology Data Exchange (ETDEWEB)

    Specovius, Andreas; Eldik, Christopher van; Woernlein, Andre; Ziegler, Alexander [Erlangen Centre for Astroparticle Physics (ECAP) (Germany)

    2016-07-01

    The future Cherenkov Telescope Array (CTA) will consist of up to 100 single telescopes with a total reflecting surface of about 10,000 m² made of numerous mirror facets. Characterizing the surface properties of these facets is quite challenging in terms of time and logistics. An efficient way to reliably reconstruct the surface of specular free-forms is Phase Measuring Deflectometry (PMD). PMD is routinely used to characterize the focal distance and point spread function of spherical CTA prototype mirrors. To address the possibility of measuring the surface properties of aspherical mirrors, a new PMD setup has recently been built. First experience with this setup is reported.
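
    As a rough illustration of the deflectometric principle (not of the Erlangen setup itself), the sketch below recovers the wrapped phase of a reflected sinusoidal fringe pattern with the standard four-step phase-shifting formula; the image data are synthetic.

      # Sketch of the four-step phase-shifting step used in deflectometry: the
      # wrapped phase of the reflected fringe pattern is recovered from four
      # images shifted by 90 degrees each; the image data here are synthetic.
      import numpy as np

      true_phase = np.linspace(0.0, 2.0 * np.pi, 5)     # stand-in for a row of pixels
      I = [100 + 50 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

      wrapped_phase = np.arctan2(I[3] - I[1], I[0] - I[2])
      print(np.round(wrapped_phase, 3))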

  11. Designing of monitoring setup for vibration signature analysis of steam turbine driven high capacity rotary screw compressor

    Energy Technology Data Exchange (ETDEWEB)

    Pyne, T.; Vinod, J. [Birla VXL Ltd., Porbandar (India)

    1998-12-31

    Tracking the behaviour, by signature analysis, of machines like screw compressors, which have a large number of auxiliaries, high-power transmissions, variations of process gas properties, changes of load condition and fluctuating revolutions, is truly a challenging job. These unavoidable process conditions often disturb the whole setup, and there is every possibility of missing important and relevant information. Standards for overall monitoring, as well as for the peak amplitudes responsible for root-cause identification, are not always available because these machines are 'custom designed', and the manufacturer's standards are of paramount importance to consider. The health of these machines cannot be assessed by simply comparing with international standards, unlike most common machines such as fans, pumps, motors etc. with a minimum number of auxiliaries. There may also be limitations in the features of the instruments used for the purpose. In this presentation, an attempt has been made to set up a monitoring approach for a screw compressor which will help industries initially set base-line data to implement a vibration-analysis-based off-line predictive maintenance programme, either with the help of an analyser or with the latest software. (orig.) 3 refs.
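
    As an illustration of what such base-line data could look like in practice, the sketch below extracts the overall RMS level and the dominant spectral peaks from a vibration signal; the sampling rate, component frequencies and amplitudes are invented, not values from the compressor discussed here.

      # Sketch: overall RMS and dominant spectral peaks as base-line quantities;
      # sampling rate, component frequencies and amplitudes are invented.
      import numpy as np

      fs = 5000.0                                        # sampling rate, Hz
      t = np.arange(0.0, 1.0, 1.0 / fs)
      signal = (2.0 * np.sin(2 * np.pi * 50.0 * t)       # e.g. running-speed component
                + 0.6 * np.sin(2 * np.pi * 300.0 * t)    # e.g. lobe-pass component
                + 0.1 * np.random.randn(t.size))         # broadband noise

      rms = np.sqrt(np.mean(signal ** 2))
      amplitude = np.abs(np.fft.rfft(signal)) * 2.0 / t.size
      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

      print(f"overall RMS = {rms:.2f}")
      for i in np.argsort(amplitude)[-3:][::-1]:         # three largest peaks
          print(f"peak at {freqs[i]:.1f} Hz, amplitude {amplitude[i]:.2f}")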

  12. Designing of monitoring setup for vibration signature analysis of steam turbine driven high capacity rotary screw compressor

    Energy Technology Data Exchange (ETDEWEB)

    Pyne, T.; Vinod, J. [Birla VXL Ltd., Porbandar (India)

    1997-12-31

    Tracking the behaviour, by signature analysis, of machines like screw compressors, which have a large number of auxiliaries, high-power transmissions, variations of process gas properties, changes of load condition and fluctuating revolutions, is truly a challenging job. These unavoidable process conditions often disturb the whole setup, and there is every possibility of missing important and relevant information. Standards for overall monitoring, as well as for the peak amplitudes responsible for root-cause identification, are not always available because these machines are 'custom designed', and the manufacturer's standards are of paramount importance to consider. The health of these machines cannot be assessed by simply comparing with international standards, unlike most common machines such as fans, pumps, motors etc. with a minimum number of auxiliaries. There may also be limitations in the features of the instruments used for the purpose. In this presentation, an attempt has been made to set up a monitoring approach for a screw compressor which will help industries initially set base-line data to implement a vibration-analysis-based off-line predictive maintenance programme, either with the help of an analyser or with the latest software. (orig.) 3 refs.

  13. A dedicated AMS setup for medium mass isotopes at the Cologne FN tandem accelerator

    Science.gov (United States)

    Schiffer, M.; Altenkirch, R.; Feuerstein, C.; Müller-Gatermann, C.; Hackenberg, G.; Herb, S.; Bhandari, P.; Heinze, S.; Stolz, A.; Dewald, A.

    2017-09-01

    AMS measurements of medium mass isotopes, e.g. of 53Mn and 60Fe, are gaining interest in various fields, especially geoscience. Therefore, a dedicated AMS setup has been built at the Cologne 10 MV FN tandem accelerator. This setup is designed to obtain a sufficient suppression of the stable isobars at energies around 100 MeV. In this contribution we report on the current status of the new setup and the first in-beam tests of its individual components. The isobar suppression is done with (dE/dx) techniques using combinations of energy degrader foils with an electrostatic analyzer (ESA) and a time-of-flight (ToF) system, as well as a (dE/dx, E) gas ionization detector. Furthermore, the upgraded ion source and its negative ion yield measurement for MnO- are presented.
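
    A worked sketch of the kind of mass identification a combined energy/time-of-flight measurement allows, assuming non-relativistic kinematics; the flight path, flight time and energy values are invented for illustration.

      # Sketch: ion mass from measured energy and time of flight (non-relativistic
      # m = 2 E (t/L)^2), as used for particle identification with a ToF/E
      # combination; path, time and energy values are invented.
      AMU_MEV = 931.494
      C_M_PER_NS = 0.299792458

      def mass_amu(energy_mev, flight_time_ns, flight_path_m):
          beta = (flight_path_m / flight_time_ns) / C_M_PER_NS
          return 2.0 * energy_mev / beta ** 2 / AMU_MEV

      # e.g. a ~100 MeV ion over a 3.0 m flight path arriving after 157 ns
      print(mass_amu(energy_mev=100.0, flight_time_ns=157.0, flight_path_m=3.0))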

  14. Automatic Generation of Setup for CNC Spring Coiler Based on Case-based Reasoning

    Institute of Scientific and Technical Information of China (English)

    KU Xiangchen; WANG Runxiao; LI Jishun; WANG Dongbo

    2006-01-01

    When producing special-shaped springs on a CNC spring coiler, the setup of the coiler is often manual work using a trial-and-error method. As a result, the setup of the coiler consumes considerable time and becomes the bottleneck of the spring production process. In order to cope with this situation, this paper proposes an automatic setup generation system for CNC spring coilers using case-based reasoning (CBR). The core of the study contains: (1) an integrated reasoning model of the CBR system; (2) a feature-based description of the spatial shape of special-shaped springs; (3) coiling case representation using a shape feature matrix; and (4) a case similarity measure algorithm. The automatic generation system has been implemented with C++ Builder 6.0 and is helpful in improving the automation and efficiency of spring coiling.
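
    A minimal sketch of a feature-matrix case-similarity measure in the spirit described above; the feature encoding, weights and spring data are assumptions for illustration, not the paper's exact scheme.

      # Sketch of a case-similarity measure over shape-feature matrices: each
      # coiling case is a matrix of per-segment features, and similarity is a
      # weighted average of per-feature closeness. Encoding and weights are
      # assumptions, not the paper's exact scheme.
      import numpy as np

      def case_similarity(query, case, weights):
          diff = np.abs(query - case)
          scale = np.maximum(np.abs(query), np.abs(case)) + 1e-9
          closeness = 1.0 - np.clip(diff / scale, 0.0, 1.0)   # 1 = identical feature
          return float(np.average(closeness.mean(axis=0), weights=weights))

      # each row is one spring segment: [wire diameter, coil diameter, pitch, turns]
      query  = np.array([[2.0, 18.0, 3.0, 5.0],
                         [2.0, 24.0, 4.0, 3.0]])
      stored = np.array([[2.0, 20.0, 3.2, 5.0],
                         [2.0, 24.0, 4.0, 4.0]])
      print(case_similarity(query, stored, weights=[0.4, 0.3, 0.2, 0.1]))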

  15. Effects of pulse ON and OFF time and electrode types on the material removal rate and tool wear rate of the Ti-6Al-4V Alloy using EDM machining with reverse polarity

    Science.gov (United States)

    Praveen, L.; Geeta Krishna, P.; Venugopal, L.; Prasad, N. E. C.

    2018-03-01

    Electrical Discharge Machining (EDM) is an unconventional metal removal process that is extensively used for removing difficult-to-machine metals such as Ti alloys, superalloys and metal matrix composites. This paper investigates the effects of pulse ON/OFF time on the EDM machining characteristics of Ti-6Al-4V alloy using copper and graphite electrodes under reverse polarity. The full factorial design method was used to design the experiments: two variables (pulse ON and OFF time) with three levels each are considered. The output variables are the tool wear rate and the material removal rate. The important findings from the present work are: (1) the material removal rate (MRR) increases gradually with an increase of the pulse ON time, whereas the change is insignificant with an increase of the pulse OFF time; (2) between the copper and graphite electrodes, the copper electrode proves to be better in terms of MRR; (3) a combination of high pulse ON time and high OFF time is desirable for a high MRR with the Cu electrode, whereas for the graphite electrode a combination of high pulse ON time and low pulse OFF time is desirable; (4) the tool wear rate (TWR) decreases with increasing pulse ON or OFF time; the decrease is uniform for the graphite electrode, in contrast to an abrupt decrease from 25 to 50 μs (pulse ON time) for the copper electrode; (5) in order to keep the TWR as low as possible, a combination of high pulse ON time and OFF time is desirable for both the copper and the graphite electrodes.
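
    A minimal sketch of the experimental design and the MRR calculation implied above: the two factors at three levels give nine runs, and MRR follows from workpiece weight loss, density and machining time; all numeric values are illustrative assumptions.

      # Sketch: the 3x3 full factorial run list and an MRR estimate from weight
      # loss; level values, density and measurements are illustrative assumptions.
      from itertools import product

      pulse_on_us = [25, 50, 100]
      pulse_off_us = [25, 50, 100]
      runs = list(product(pulse_on_us, pulse_off_us))   # nine experimental runs
      print(runs)

      def mrr_mm3_per_min(weight_loss_g, density_g_per_mm3, machining_time_min):
          return weight_loss_g / density_g_per_mm3 / machining_time_min

      # Ti-6Al-4V density is about 4.43e-3 g/mm^3; one hypothetical 10-minute run
      print(mrr_mm3_per_min(weight_loss_g=0.31, density_g_per_mm3=4.43e-3,
                            machining_time_min=10.0))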

  16. An experimental set-up for carbon isotopic analysis of atmospheric ...

    Indian Academy of Sciences (India)

    We present here an experimental set-up developed for the first time in India for the ... The internal reproducibility (precision) for the δ13C ... interaction of CO2 and water, and reproduce iso- ... enhanced emission of anthropogenic CO2, varia-.

  17. Protection of Mission-Critical Applications from Untrusted Execution Environment: Resource Efficient Replication and Migration of Virtual Machines

    Science.gov (United States)

    2015-09-28

    Grant Number: FA9550-10-1-0393. Author: Kang G. Shin. Distribution A - Approved for Public Release. Abstract (fragment): Continuous replication and live migration of Virtual Machines (VMs) ... the protected and backup hosts are connected by an internal LAN; this setup resembles the typical setup in a virtualized datacenter.

  18. Slow extracted proton beam formation and monitoring for the ''QUARTZ'' setup

    International Nuclear Information System (INIS)

    Bushnin, Yu.B.; Gres', V.N.; Davydenko, Yu.P.

    1982-01-01

    The version of the optical mode of the beam channel that provides simultaneous operation of the FODS and ''QUARTZ'' experimental setups with consecutive use of the slow extracted proton beam is reported. The ''QUARTZ'' setup beam diagnostics system comprises two subsystems, one for measuring the beam profile and one for measuring the beam timing structure and beam intensity, and operates at beam extraction durations from 20 ns to a few seconds and beam intensities from 10¹⁰ to 5×10¹² protons/pulse. The ''QUARTZ'' setup is a focusing crystal-diffraction spectrometer with a 5-metre focal distance and a Ge(Li) detector of special construction. A high-efficiency target is used in the setup. The ''QUARTZ'' setup is designed for studying exotic atoms produced by negatively charged heavy particles (π, K, μ, antiprotons) and atomic nuclei; precise energy measurement of X-ray transitions in such atoms is performed. For measuring the beam geometric parameters, 32-channel secondary emission chambers are used. A secondary emission chamber is also employed as the detector of the intensity and timing structure of the slow extracted beam. The principal circuit of the current integrator is given. A 50-pair telephone cable is used as the data transmission line. Information conversion into digital form and its subsequent processing are performed in the CAMAC system and the SM-3 computer. The proton beam full-intensity measuring system provides an accuracy not worse than ±4.5% in the 10¹⁰-10¹² protons/s range. The implemented optical mode of the beam channel and the proton beam monitoring system made it possible to begin the experimental program on the ''QUARTZ'' setup.
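
    As a rough illustration of how a secondary emission chamber reading might be converted to protons per pulse, the sketch below applies a calibration coefficient to the integrated charge; the secondary-emission yield used here is a hypothetical value, not the calibration of the QUARTZ monitoring system.

      # Sketch: protons per pulse from the integrated charge of a secondary
      # emission chamber; the secondary-emission yield is a hypothetical value,
      # not the calibration of the QUARTZ monitoring system.
      ELEMENTARY_CHARGE = 1.602e-19   # coulomb

      def protons_per_pulse(integrated_charge_c, secondary_emission_yield):
          electrons = integrated_charge_c / ELEMENTARY_CHARGE
          return electrons / secondary_emission_yield

      # e.g. 35 nC collected with an assumed yield of 0.04 electrons per proton
      print(f"{protons_per_pulse(35e-9, 0.04):.2e}")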

  19. Inter-treatment compensation of treatment setup variation to enhance the radiotherapeutic ratio

    International Nuclear Information System (INIS)

    Yan, Di; Wong, John; Michalski, Jeff; Pan, Cheng; Frazier, Arthur; Bosch, Walter; Martinez, Alvaro

    1995-01-01

    Purpose: In radiotherapy, treatment setup error has been one of the major causes of dose variation in the treated volume. With the data acquired from on-line electronic portal imaging, it is now possible not only to adjust the patient setup, but also to modify the treatment plan during the course of clinical treatment based on the setup error measured for each individual patient. In this work, daily clinical portal images were retrospectively analyzed to study (1) the number of initial daily portal images required to give an adequate prediction of the systematic and random deviations of treatment setup, and (2) the potential therapeutic gain when the inter-treatment planning modification was established using the setup error of each individual patient. Methods and Materials: Only those patients whose treatment positions had not been adjusted during the course of treatment were selected for the retrospective study. Daily portal images of 27 lung, 25 pelvis, and 12 head and neck (h and n) cancer patients were obtained from two independent clinics with similar setup procedures. The anterior-to-posterior field was analyzed for the pelvis and lung treatments, and the right lateral field for the h and n treatments. Between 13 and 30 daily portal images were acquired for each patient and were analyzed using a 2D alignment tool. Systematic and random deviations of the treatment setup were calculated for each individual patient. The statistical confidence in the convergence of both systematic and random deviations with time was tested to determine the number of initial daily portal images needed to predict these deviations. In addition, a mean deviation for each site was calculated using the setup errors from all patients. Two treatment planning schemes were simulated to evaluate margin design and prescription dose adjustment. Therapeutic scores were quantified in terms of tumor control probability (TCP) and normal tissue complication probability (NTCP). In the first
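
    A minimal sketch of the per-patient analysis described above: the systematic deviation is taken as the mean of the daily setup shifts and the random deviation as their standard deviation, re-estimated as more of the initial images are included; the shift series is invented for illustration.

      # Sketch: per-patient systematic deviation (mean of daily shifts) and random
      # deviation (their standard deviation), re-estimated as more of the initial
      # daily images are included; the shift series is invented.
      import statistics

      daily_shifts_mm = [3.1, 2.2, 4.0, 2.8, 3.5, 2.6, 3.9, 3.0, 2.4, 3.3]

      for n in range(3, len(daily_shifts_mm) + 1):
          first_n = daily_shifts_mm[:n]
          print(f"first {n:2d} fractions: systematic = {statistics.mean(first_n):.2f} mm, "
                f"random = {statistics.stdev(first_n):.2f} mm")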

  20. DESIGN ANALYSIS OF ELECTRICAL MACHINES THROUGH INTEGRATED NUMERICAL APPROACH

    Directory of Open Access Journals (Sweden)

    ARAVIND C.V.

    2016-02-01

    Full Text Available An integrated design platform for newer types of machines is presented in this work. The machine parameters are evaluated using a developed modelling tool. With these parameters, the machine is modelled using a computer-aided tool. The designed machine is then brought into a simulation tool to perform electromagnetic and electromechanical analysis. In the simulation, condition settings are performed to set up the materials, meshes, rotational speed and the excitation circuit. Electromagnetic analysis is carried out to predict the behavior of the machine based on the movement of flux in the machine. In addition, electromechanical analysis is carried out to analyse the speed-torque, current-torque and phase angle-torque characteristics. After all the results are analysed, the designed machine is used to generate an S-function block that is compatible with the MATLAB/SIMULINK tool for the dynamic operational characteristics. This allows the integration of existing drive systems with the new machines designed in the modelling tool. An example machine design is presented to validate the usage of such a tool.
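
    As an illustration of the kind of speed-torque tabulation such electromechanical post-processing produces, the sketch below evaluates a simple analytical induction-machine equivalent circuit over a speed range; the parameters are assumptions and the model is not the machine designed in the paper.

      # Sketch: tabulating a speed-torque characteristic from a simple analytical
      # induction-machine equivalent circuit; the parameters are assumptions and
      # this is not the machine designed in the paper.
      import math

      def induction_torque(slip, v_phase, r1, x1, r2, x2, sync_speed_rad):
          if slip == 0.0:
              return 0.0
          i2 = v_phase / math.hypot(r1 + r2 / slip, x1 + x2)   # rotor branch current
          return 3.0 * i2 ** 2 * r2 / slip / sync_speed_rad

      sync_rpm = 1500.0
      sync_rad = sync_rpm * 2.0 * math.pi / 60.0
      for rpm in range(0, 1500, 150):
          slip = (sync_rpm - rpm) / sync_rpm
          torque = induction_torque(slip, v_phase=230.0, r1=0.6, x1=1.2,
                                    r2=0.5, x2=1.3, sync_speed_rad=sync_rad)
          print(f"{rpm:5d} rpm -> {torque:6.1f} N*m")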